Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
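A minimal simulation can make the abstract's comparison concrete: on a clumped population, a ratio estimator with an area auxiliary may gain little over simple expansion. The sketch below uses entirely synthetic unit areas and counts; all values are illustrative assumptions, not the study's data.

```python
# Sketch: simple vs. ratio estimation of a total on a clumped population.
# All data are synthetic; unit areas and counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 60                                   # sampling units in the study area
area = rng.uniform(2.0, 8.0, N)          # km^2 per unit (auxiliary variable)
# Clumped counts: most units nearly empty, a few holding large groups.
counts = np.where(rng.random(N) < 0.15, rng.poisson(40, N), rng.poisson(1, N))
true_total = counts.sum()

n = 20                                   # ~33% sampling intensity
simple, ratio = [], []
for _ in range(5000):
    s = rng.choice(N, n, replace=False)  # SRS without replacement
    simple.append(N * counts[s].mean())                         # expansion estimator
    ratio.append(counts[s].sum() / area[s].sum() * area.sum())  # ratio estimator

for name, est in (("simple", simple), ("ratio", ratio)):
    est = np.array(est)
    print(f"{name}: mean={est.mean():.1f} (true {true_total}), "
          f"CV={100 * est.std() / est.mean():.0f}%")
```

With a weak count-area relationship, as in aggregated populations, the ratio estimator's CV is typically no better than the expansion estimator's, which mirrors the abstract's finding.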
Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products
NASA Astrophysics Data System (ADS)
Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun
2011-10-01
To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose a two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first-rank sampling plan inspects the lot consisting of map sheets, and the second inspects the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, covering two lot-size cases. The first case is for a small lot size, with nonconformities modeled by a hypergeometric distribution function, and the second is for a larger lot size, with nonconformities modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
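The optimization the abstract describes, minimizing sample size subject to both risk constraints, can be sketched for the large-lot (Poisson) case as follows. The AQL, limiting quality, and risk values below are illustrative assumptions, not the paper's.

```python
# Sketch: smallest single sampling plan (n, c) meeting both risk constraints,
# with nonconformities modeled as Poisson (the paper's large-lot case).
# AQL, LQ, alpha, beta are illustrative assumptions.
from scipy.stats import poisson

AQL, LQ = 0.01, 0.05          # acceptable / limiting quality (fraction nonconforming)
alpha, beta = 0.05, 0.10      # producer's and consumer's risks

def find_plan(max_n=2000):
    for n in range(1, max_n + 1):
        # smallest acceptance number protecting the producer at AQL
        c = int(poisson.ppf(1 - alpha, n * AQL))
        # keep the plan only if it also protects the consumer at LQ
        if poisson.cdf(c, n * LQ) <= beta:
            return n, c
    return None

print(find_plan())   # smallest (n, c) meeting both risks; about (134, 3) here
```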
HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE
NASA Technical Reports Server (NTRS)
De Salvo, L. J.
1994-01-01
HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected values of consumer's risk and fraction nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number, and it provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as MIL-STD-105E, use the Binomial or Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes, primarily because of the difficulty of calculating the factorials the Hypergeometric requires. Sampling plans based on the Binomial or Poisson equations yield sample sizes at least as large as the corresponding Hypergeometric plan, and the difference can be significant. For example, a lot of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) under a Binomial sampling plan, but only 273 under a Hypergeometric sampling plan, a savings of 127 units. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero) and to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. The program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the price includes documentation. This statistical method was developed in 1992.
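The abstract's numerical example can be reproduced directly with a modern library in place of the original spreadsheet's iterative factorial workaround; a sketch:

```python
# Sketch: minimum zero-acceptance sample size such that the probability of
# seeing no nonconforming units in the sample is at most 1 - confidence.
# Reproduces the abstract's example: lot 400, 1% nonconforming, 99% confidence.
from scipy.stats import hypergeom

def min_sample_size(lot, defectives, confidence):
    for n in range(1, lot + 1):
        # P(0 nonconforming in a sample of n drawn without replacement)
        if hypergeom.pmf(0, lot, defectives, n) <= 1 - confidence:
            return n
    return lot

print(min_sample_size(400, 4, 0.99))   # -> 273, vs. 400 under the binomial plan
```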
Hamilton, A J; Waters, E K; Kim, H J; Pak, W S; Furlong, M J
2009-06-01
The combined action of two lepidopteran pests, Plutella xylostella L. (Plutellidae) and Pieris rapae L. (Pieridae), causes significant yield losses in cabbage (Brassica oleracea variety capitata) crops in the Democratic People's Republic of Korea. Integrated pest management (IPM) strategies for these cropping systems are in their infancy, and sampling plans have not yet been developed. We used statistical resampling to assess the performance of fixed sample size plans (ranging from 10 to 50 plants). First, the precision (D = SE/mean) of the plans in estimating the population mean was assessed. There was substantial variation in achieved D for all sample sizes, and sample sizes of at least 20 and 45 plants were required to achieve the acceptable precision level of D ≤ 0.3 at least 50 and 75% of the time, respectively. Second, the performance of the plans in classifying the population density relative to an economic threshold (ET) was assessed. To account for the different damage potentials of the two species, the ETs were defined in terms of standard insects (SIs), where 1 SI = 1 P. rapae = 5 P. xylostella larvae. The plans were implemented using different ETs for the three growth stages of the crop: precupping (1 SI/plant), cupping (0.5 SI/plant), and heading (4 SI/plant). Improvement in classification certainty with increasing sample size could be seen in the increasing steepness of the operating characteristic curves. Rather than prescribe a particular plan, we suggest that the results of these analyses be used to inform practitioners of the relative merits of the different sample sizes.
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many researchers use an exact McNemar test to calculate sample size. We reviewed approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
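For reference, a sketch of the standard asymptotic unconditional McNemar sample size calculation, the variant the authors recommend; the discordant-cell proportions below are illustrative values, not the paper's examples.

```python
# Sketch: asymptotic unconditional McNemar sample size (number of pairs).
# p10, p01 are hypothesized discordant-cell proportions (illustrative values).
import math
from scipy.stats import norm

def mcnemar_pairs(p10, p01, alpha=0.05, power=0.80):
    psi = p10 + p01               # total discordant proportion
    delta = p10 - p01             # difference in marginal proportions
    za = norm.ppf(1 - alpha / 2)
    zb = norm.ppf(power)
    n = (za * math.sqrt(psi) + zb * math.sqrt(psi - delta**2))**2 / delta**2
    return math.ceil(n)

print(mcnemar_pairs(0.25, 0.10))  # pairs needed to detect 0.25 vs 0.10 discordance
```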
Shah, R; Worner, S P; Chapman, R B
2012-10-01
Pesticide resistance monitoring includes resistance detection and subsequent documentation/measurement. Resistance detection requires at least one resistant individual to be present in a sample to initiate management strategies, whereas resistance documentation attempts to estimate the frequency of resistant individuals by detecting most (≥90%) of them. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans in detecting resistant individuals and documenting their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans, while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, the sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1% resistance frequency) and 50 (10% and 20%); when resistant individuals were patchily distributed and systematic sampling was used, the sample sizes required were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 were required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at frequencies of 10% and 20%, respectively; when resistant individuals were patchily distributed and systematic sampling was used, sample sizes of 3000 and 1500, respectively, were necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
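Under random dispersion, the detection sample sizes quoted follow from the standard zero-detection formula n = ln(1 − P)/ln(1 − f); a sketch:

```python
# Sketch: sample size to detect at least one resistant individual with
# probability P, assuming random (binomial) dispersion of resistance.
import math

def detection_n(freq, prob=0.95):
    return math.ceil(math.log(1 - prob) / math.log(1 - freq))

for f in (0.01, 0.10, 0.20):
    print(f"{f:.0%}: n = {detection_n(f)}")
# 1% -> 299, matching the ~300 reported for random dispersion; the study's
# figure of 50 at higher frequencies presumably reflects a practical minimum.
```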
Serra, Gerardo V.; Porta, Norma C. La; Avalos, Susana; Mazzuferi, Vilma
2013-01-01
The alfalfa caterpillar, Colias lesbia (Fabricius) (Lepidoptera: Pieridae), is a major pest of alfalfa, Medicago sativa L. (Fabales: Fabaceae), crops in Argentina. Its management is based mainly on chemical control of larvae whenever they exceed the action threshold. To develop and validate fixed-precision sequential sampling plans, an intensive sampling programme for C. lesbia eggs was carried out in two alfalfa plots located in the Province of Córdoba, Argentina, from 1999 to 2002. Using Resampling for Validation of Sampling Plans software, 12 additional independent data sets were used to validate the sequential sampling plans at precision levels of 0.10 and 0.25 (SE/mean). For a range of mean densities of 0.10 to 8.35 eggs/sample, average sample sizes of only 27 and 26 sample units were required to achieve a desired precision level of 0.25 under the sampling plans of Green and Kuno, respectively. As the precision level was increased to 0.10, average sample sizes increased to 161 and 157 sample units, respectively. We recommend Green's sequential sampling plan because it is less sensitive to changes in egg density. These sampling plans are a valuable tool for researchers to study population dynamics and to evaluate integrated pest management strategies. PMID:23909840
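Green's fixed-precision plan used here reduces to a stop line derived from Taylor's power law (s² = a·m^b); a sketch with illustrative coefficients (a and b below are assumptions, not the study's estimates):

```python
# Sketch: Green's (1970) fixed-precision stop line from Taylor's power law.
# Sampling stops once the cumulative count exceeds the line for the current n.
def green_stop_line(n, a, b, D):
    """Cumulative count T_n at which sampling can stop for precision D."""
    return (D**2 / a) ** (1 / (b - 2)) * n ** ((b - 1) / (b - 2))

a, b = 2.0, 1.4                      # assumed Taylor's power law coefficients
for n in (10, 20, 40, 80):
    print(n, round(green_stop_line(n, a, b, D=0.25), 1))
```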
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
Effect of Sampling Plans on the Risk of Escherichia coli O157 Illness.
Kiermeier, Andreas; Sumner, John; Jenson, Ian
2015-07-01
Australia exports about 150,000 to 200,000 tons of manufacturing beef to the United States annually. Each lot is tested for Escherichia coli O157 using the N-60 sampling protocol, in which 60 small pieces of surface meat from each lot of production are tested. A risk assessment of E. coli O157 illness from the consumption of hamburgers made from Australian manufacturing meat provided the basis for evaluating the effect of sample size and sample amount on the number of predicted illnesses. The sampling plans evaluated included no sampling (an estimated 55.2 illnesses per annum), the current N-60 plan (50.2 illnesses), N-90 (49.6 illnesses), N-120 (48.4 illnesses), and a more stringent N-60 plan taking five 25-g samples from each of 12 cartons (47.4 illnesses per annum). While sampling may detect some highly contaminated lots, it does not guarantee that all such lots are removed from commerce. It is concluded that increasing the sample size or sample amount from the current N-60 plan would have a very small public health effect.
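The core of such an evaluation is the detection probability of an N-count, zero-tolerance plan; a simplified binomial sketch (ignoring within-lot clustering and test sensitivity, which the full risk assessment models):

```python
# Sketch: chance that an N-count, zero-tolerance plan detects a contaminated
# lot, as a function of the fraction of test portions that would test positive.
def p_detect(n, p_positive):
    return 1 - (1 - p_positive) ** n

for p in (0.001, 0.005, 0.01, 0.05):
    print(f"p={p:.3f}: N-60 detects {p_detect(60, p):.2f}, "
          f"N-120 detects {p_detect(120, p):.2f}")
```

At low contamination rates even doubling N improves detection only modestly, consistent with the small predicted public health effect.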
Designing a multiple dependent state sampling plan based on the coefficient of variation.
Yan, Aijun; Liu, Sanyang; Dong, Xiaojuan
2016-01-01
A multiple dependent state (MDS) sampling plan is developed based on the coefficient of variation of the quality characteristic, which follows a normal distribution with unknown mean and variance. The optimal plan parameters are obtained from a nonlinear optimization model that satisfies the given producer's risk and consumer's risk simultaneously while minimizing the sample size required for inspection. The advantages of the proposed MDS sampling plan over the existing single sampling plan are discussed. Finally, an example is given to illustrate the proposed plan.
Planning Community-Based Assessments of HIV Educational Intervention Programs in Sub-Saharan Africa
ERIC Educational Resources Information Center
Kelcey, Ben; Shen, Zuchao
2017-01-01
A key consideration in planning studies of community-based HIV education programs is identifying a sample size large enough to ensure a reasonable probability of detecting program effects if they exist. Sufficient sample sizes for community- or group-based designs are proportional to the correlation or similarity of individuals within communities.…
Cocco, Arturo; Serra, Giuseppe; Lentini, Andrea; Deliperi, Salvatore; Delrio, Gavino
2015-09-01
The within- and between-plant distribution of the tomato leafminer, Tuta absoluta (Meyrick), was investigated in order to define action thresholds based on leaf infestation and to propose enumerative and binomial sequential sampling plans for pest management applications in protected crops. The pest spatial distribution was aggregated between plants, and median leaves were the most suitable sample to evaluate the pest density. Action thresholds of 36 and 48%, 43 and 56% and 60 and 73% infested leaves, corresponding to economic thresholds of 1 and 3% damaged fruits, were defined for tomato cultivars with big, medium and small fruits respectively. Green's method was a more suitable enumerative sampling plan as it required a lower sampling effort. Binomial sampling plans needed lower average sample sizes than enumerative plans to make a treatment decision, with probabilities of error of <0.10. The enumerative sampling plan required 87 or 343 leaves to estimate the population density in extensive or intensive ecological studies respectively. Binomial plans would be more practical and efficient for control purposes, needing average sample sizes of 17, 20 and 14 leaves to take a pest management decision in order to avoid fruit damage higher than 1% in cultivars with big, medium and small fruits respectively. © 2014 Society of Chemical Industry.
Alternative sample sizes for verification dose experiments and dose audits
NASA Astrophysics Data System (ADS)
Taylor, W. A.; Hansen, J. M.
1999-01-01
ISO 11137 (1995), "Sterilization of Health Care Products—Requirements for Validation and Routine Control—Radiation Sterilization", provides sampling plans for performing initial verification dose experiments and quarterly dose audits. Alternative sampling plans are presented which provide equivalent protection and can significantly reduce the cost of testing. These alternative sampling plans have been included in a draft ISO Technical Report (type 2). This paper examines the rationale behind the proposed alternative sampling plans. The protection provided by the current verification and audit sampling plans is first examined; methods for identifying equivalent plans are then highlighted; finally, methods for comparing the costs associated with the different plans are provided. The paper includes additional guidance, not included in the technical report, for selecting between the original and alternative sampling plans.
Accounting for twin births in sample size calculations for randomised trials.
Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J
2018-05-04
Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
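For the limiting case in which every infant is a twin, the adjustment reduces to the usual design effect for clusters of size two; a sketch (the trial numbers below are illustrative, and the authors' calculator also handles mixes of singletons and twins):

```python
# Sketch: inflating (or deflating) a sample size for twin pairs using the
# standard design effect for clusters of size 2: DE = 1 + ICC.
import math

def n_for_twins(n_independent, icc):
    """Total infants required when all participants are twins."""
    return math.ceil(n_independent * (1 + icc))

print(n_for_twins(300, 0.5))    # 450 infants (225 pairs)
print(n_for_twins(300, -0.12))  # a negative ICC can reduce the requirement
```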
NASA Astrophysics Data System (ADS)
Pries, V. V.; Proskuriakov, N. E.
2018-04-01
To control the assembly quality of multi-element mass-produced products on automatic rotor lines, control methods with operational feedback are required. However, due to possible failures in the operation of the devices and systems of automatic rotor line, there is always a real probability of getting defective (incomplete) products into the output process stream. Therefore, a continuous sampling control of the products completeness, based on the use of statistical methods, remains an important element in managing the quality of assembly of multi-element mass products on automatic rotor lines. The feature of continuous sampling control of the multi-element products completeness in the assembly process is its breaking sort, which excludes the possibility of returning component parts after sampling control to the process stream and leads to a decrease in the actual productivity of the assembly equipment. Therefore, the use of statistical procedures for continuous sampling control of the multi-element products completeness when assembled on automatic rotor lines requires the use of such sampling plans that ensure a minimum size of control samples. Comparison of the values of the limit of the average output defect level for the continuous sampling plan (CSP) and for the automated continuous sampling plan (ACSP) shows the possibility of providing lower limit values for the average output defects level using the ACSP-1. Also, the average sample size when using the ACSP-1 plan is less than when using the CSP-1 plan. Thus, the application of statistical methods in the assembly quality management of multi-element products on automatic rotor lines, involving the use of proposed plans and methods for continuous selective control, will allow to automating sampling control procedures and the required level of quality of assembled products while minimizing sample size.
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, distribution of product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. The sample size tables for most encountered scenarios in practice have also been published for convenient use. Extensive simulations study showed that the distribution of the product method and bootstrapping method have superior performance to the Sobel's method, but the product method was recommended to use in practice in terms of less computation time load compared to the bootstrapping method. A R package has been developed for the product method of sample size determination in mediation longitudinal study design.
Candel, Math J J M; Van Breukelen, Gerard J P
2010-06-30
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. © 2010 John Wiley & Sons, Ltd.
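A commonly used approximation for the impact of varying cluster sizes is the adjusted design effect of Eldridge et al.; the sketch below uses that generic formula, not the paper's PQL-specific derivation, with illustrative inputs.

```python
# Sketch: a common approximation for the design effect with varying cluster
# sizes, DE = 1 + ((cv^2 + 1) * mbar - 1) * icc (Eldridge et al.).
# This is a generic formula, not the paper's PQL-specific result.
def design_effect(mbar, cv, icc):
    return 1 + ((cv**2 + 1) * mbar - 1) * icc

equal = design_effect(20, 0.0, 0.05)     # all clusters of size 20
varying = design_effect(20, 0.6, 0.05)   # cluster-size CV of 0.6
print(f"{equal:.2f} vs {varying:.2f}; extra clusters: {varying / equal - 1:.1%}")
```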
ERIC Educational Resources Information Center
Spybrook, Jessaca; Puente, Anne Cullen; Lininger, Monica
2013-01-01
This article examines changes in the research design, sample size, and precision between the planning phase and implementation phase of group randomized trials (GRTs) funded by the Institute of Education Sciences. Thirty-eight GRTs funded between 2002 and 2006 were examined. Three studies revealed changes in the experimental design. Ten studies…
Costa, Marilia G; Barbosa, José C; Yamamoto, Pedro T
2007-01-01
Sequential sampling uses samples of variable size and has the advantage of reducing sampling time and costs compared with fixed-size sampling. To support adequate management of orthezia scales, sequential sampling plans were developed for orchards under low and high infestation. Data were collected in Matão, SP, in commercial stands of the orange variety 'Pêra Rio' at five, nine and 15 years of age. Twenty samplings were performed across the whole area of each stand by observing the presence or absence of scales on plants, with plots comprising ten plants. After observing that the scale population in all three stands was distributed according to a contagious model, fitting the Negative Binomial Distribution in most samplings, two sequential sampling plans were constructed according to the Sequential Likelihood Ratio Test (SLRT). To construct these plans, an economic threshold of 2% was adopted and the type I and II error probabilities were fixed at alpha = beta = 0.10. Results showed that the maximum numbers of samples expected to determine the need for control were 172 and 76 for stands with low and high infestation, respectively.
Development of sampling plans for cotton bolls injured by stink bugs (Hemiptera: Pentatomidae).
Reay-Jones, F P F; Toews, M D; Greene, J K; Reeves, R B
2010-04-01
Cotton, Gossypium hirsutum L., bolls were sampled in commercial fields for stink bug (Hemiptera: Pentatomidae) injury during 2007 and 2008 in South Carolina and Georgia. Across both years of this study, boll-injury percentages averaged 14.8 +/- 0.3 (SEM). At average boll injury treatment levels of 10, 20, 30, and 50%, the percentage of samples with at least one injured boll was 82, 97, 100, and 100%, respectively. Percentage of field-sampling date combinations with average injury < 10, 20, 30, and 50% was 35, 80, 95, and 99%, respectively. At the average of 14.8% boll injury or 2.9 injured bolls per 20-boll sample, 112 samples at Dx = 0.1 (within 10% of the mean) were required for population estimation, compared with only 15 samples at Dx = 0.3. Using a sample size of 20 bolls, our study indicated that, at the 10% threshold and alpha = beta = 0.2 (with 80% confidence), control was not needed when <1.03 bolls were injured. The sampling plan required continued sampling for a range of 1.03-3.8 injured bolls per 20-boll sample. Only when injury was > 3.8 injured bolls per 20-boll sample was a control measure needed. Sequential sampling plans were also determined for thresholds of 20, 30, and 50% injured bolls. Sample sizes for sequential sampling plans were significantly reduced when compared with a fixed sampling plan (n=10) for all thresholds and error rates.
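Sequential classification plans of this kind are typically built on Wald's sequential probability ratio test; a sketch with illustrative parameters (p0, p1, and the error rates below are assumptions, not this study's fitted values):

```python
# Sketch: Wald SPRT decision lines for sequentially classifying boll injury
# against a threshold; p0, p1, alpha, beta are illustrative values.
import math

def sprt_lines(p0, p1, alpha=0.2, beta=0.2):
    k = math.log((p1 * (1 - p0)) / (p0 * (1 - p1)))
    s = math.log((1 - p0) / (1 - p1)) / k          # common slope
    h0 = math.log((1 - alpha) / beta) / k          # lower intercept
    h1 = math.log((1 - beta) / alpha) / k          # upper intercept
    return s, h0, h1

s, h0, h1 = sprt_lines(p0=0.05, p1=0.15)
n = 20                                             # bolls inspected so far
print(f"after {n} bolls: continue while injured count is in "
      f"({s * n - h0:.2f}, {s * n + h1:.2f})")
```

With these illustrative inputs the continue region at 20 bolls is roughly 1 to 3 injured bolls, the same shape as the abstract's 1.03 to 3.8 band for the 10% threshold.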
Parajulee, M N; Shrestha, R B; Leser, J F
2006-04-01
A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave results similar to the visual method in detecting adult thrips, but it detected a significantly higher number of thrips larvae than visual sampling. Visual sampling detected the highest number of fleahoppers, followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between the vacuum and sweep net methods. However, based on fixed precision cost reliability, sweep net sampling was the most cost-effective method, followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's power law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decisions based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with increasing fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach
NASA Technical Reports Server (NTRS)
Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sampling unit sizes shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to the size of the sampling unit.
Mayer, B; Muche, R
2013-01-01
Animal studies are highly relevant for basic medical research, although their use is a matter of public controversy. An optimal sample size for these projects should therefore be aimed at from a biometrical point of view. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, the required information is often not valid or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies and formulates some requirements to fundamentally regulate the process of sample size determination for animal experiments.
Shahbi, M; Rajabpour, A
2017-08-01
Phthorimaea operculella Zeller is an important pest of potato in Iran. Spatial distribution and fixed-precision sequential sampling for population estimation of the pest on two potato cultivars, Arinda® and Sante®, were studied in two separate potato fields during two growing seasons (2013-2014 and 2014-2015). Spatial distribution was investigated using Taylor's power law and Iwao's patchiness regression. Results showed that the spatial distribution of eggs and larvae was random. In contrast to Iwao's patchiness regression, Taylor's power law provided a highly significant relationship between variance and mean density. Therefore, a fixed-precision sequential sampling plan was developed with Green's model at two precision levels, 0.25 and 0.1. The optimum sample size on the Arinda® and Sante® cultivars at the 0.25 precision level ranged from 151 to 813 and from 149 to 802 leaves, respectively; at the 0.1 precision level, it ranged from 1054 to 5083 and from 1050 to 5100 leaves, respectively. The optimum sample sizes for the two cultivars, which differ in resistance level, were therefore not significantly different. According to the calculated stop lines, sampling must continue until the cumulative number of eggs plus larvae reaches 15-16 or 96-101 individuals at precision levels of 0.25 or 0.1, respectively. The performance of the sampling plan was validated by resampling analysis using Resampling for Validation of Sampling Plans software. The sampling plan provided in this study can be used to obtain a rapid estimate of pest density with minimal effort.
Martínez-Ferrer, María Teresa; Ripollés, José Luís; Garcia-Marí, Ferran
2006-06-01
The spatial distribution of the citrus mealybug, Planococcus citri (Risso) (Homoptera: Pseudococcidae), was studied in citrus groves in northeastern Spain. Constant precision sampling plans were designed for all developmental stages of citrus mealybug under the fruit calyx, for late stages on fruit, and for females on trunks and main branches; more than 66, 286, and 101 data sets, respectively, were collected from nine commercial fields during 1992-1998. Dispersion parameters were determined using Taylor's power law, giving aggregated spatial patterns for citrus mealybug populations in the three locations of the tree sampled. A significant relationship between the number of insects per organ and the percentage of occupied organs was established using either Wilson and Room's binomial model or Kono and Sugino's empirical formula. Constant precision (E = 0.25) sampling plans (i.e., enumerative plans) for estimating mean densities were developed using Green's equation and the two binomial models. For making management decisions, enumerative counts may be less labor-intensive than binomial sampling; therefore, we recommend enumerative sampling plans for use in an integrated pest management program in citrus. Required sample sizes for the range of population densities near current management thresholds in the three plant locations (calyx, fruit, and trunk) were 50, 110-330, and 30, respectively. Binomial sampling, especially with the empirical model, required a larger sample size to achieve equivalent levels of precision.
Ifoulis, A A; Savopoulou-Soultani, M
2006-10-01
The purpose of this research was to quantify the spatial pattern and develop a sampling program for larvae of Lobesia botrana Denis and Schiffermüller (Lepidoptera: Tortricidae), an important vineyard pest in northern Greece. Taylor's power law and Iwao's patchiness regression were used to model the relationship between the mean and the variance of larval counts. Analysis of covariance was carried out, separately for infestation and injury, with combined second and third generation data, for vine and half-vine sample units. Common regression coefficients were estimated to permit use of the sampling plan over a wide range of conditions. Optimum sample sizes for infestation and injury, at three levels of precision, were developed. An investigation of a multistage sampling plan with a nested analysis of variance showed that if the goal of sampling is focusing on larval infestation, three grape clusters should be sampled in a half-vine; if the goal of sampling is focusing on injury, then two grape clusters per half-vine are recommended.
Graf, Alexandra C; Bauer, Peter
2011-06-30
We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and the allocation rate to the treatment arms can be modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when a standard control treatment is used for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only increases in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
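The worst-case mechanism can be illustrated by Monte Carlo: after an unblinded interim look, an adversarial experimenter picks the second-stage sample size that maximizes the conditional type 1 error of the naive fixed-sample z-test. Stage sizes and the n2 grid below are illustrative.

```python
# Sketch: worst-case post-interim sample size adaptation inflating the
# type 1 error of the naive (unadjusted) pooled z-test. Illustrative settings.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n1 = 50                                    # stage-1 size per arm
crit = norm.ppf(0.975)                     # naive one-sided 0.025 critical value
n2_grid = np.arange(10, 501, 10)           # allowed second-stage sizes
reps = 20_000

z1 = rng.standard_normal(reps)             # interim z-statistics under H0
# conditional rejection probability of the naive pooled z-test for each n2
ce = 1 - norm.cdf((crit * np.sqrt(n1 + n2_grid)[None, :]
                   - z1[:, None] * np.sqrt(n1)) / np.sqrt(n2_grid)[None, :])
n2 = n2_grid[ce.argmax(axis=1)]            # adversarial stage-2 choice
z2 = rng.standard_normal(reps)             # independent stage-2 increments
z_final = (z1 * np.sqrt(n1) + z2 * np.sqrt(n2)) / np.sqrt(n1 + n2)
print(f"attained level: {np.mean(z_final > crit):.4f} (nominal 0.025)")
```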
Baldissera, Sandro; Ferrante, Gianluigi; Quarchioni, Elisa; Minardi, Valentina; Possenti, Valentina; Carrozzi, Giuliano; Masocco, Maria; Salmaso, Stefania
2014-04-01
Field substitution of nonrespondents can be used to maintain the planned sample size and structure in surveys but may introduce additional bias. Sample weighting is suggested as the preferable alternative; however, limited empirical evidence exists comparing the two methods. We wanted to assess the impact of substitution on surveillance results using data from Progressi delle Aziende Sanitarie per la Salute in Italia-Progress by Local Health Units towards a Healthier Italy (PASSI). PASSI is conducted by Local Health Units (LHUs) through telephone interviews of stratified random samples of residents. Nonrespondents are replaced with substitutes randomly preselected in the same LHU stratum. We compared the weighted estimates obtained in the original PASSI sample (used as a reference) and in the substitutes' sample. The differences were evaluated using a Wald test. In 2011, 50,697 units were selected: 37,252 were from the original sample and 13,445 were substitutes; 37,162 persons were interviewed. The initially planned size and demographic composition were restored. No significant differences in the estimates between the original and the substitutes' sample were found. In our experience, field substitution is an acceptable method for dealing with nonresponse, maintaining the characteristics of the original sample without affecting the results. This evidence can support appropriate decisions about planning and implementing a surveillance system. Copyright © 2014 Elsevier Inc. All rights reserved.
10 CFR Appendix B to Subpart F of... - Sampling Plan For Enforcement Testing
Code of Federal Regulations, 2010 CFR
2010-01-01
[The regulation specifies a two-stage enforcement-testing procedure: compute the measured energy efficiency, energy, or water consumption (x̄1) from a first sample of size n1; for an Energy Efficiency Standard, compare it against the standard at a one-tailed probability level; if the result is inconclusive, determine a second sample size n2. The formulas themselves are rendered as graphics in the source (ER18MR98.010, ER18MR98.015) and are not recoverable here.]
Anderson, Samantha F; Maxwell, Scott E
2017-01-01
Psychology is undergoing a replication crisis. The discussion surrounding this crisis has centered on mistrust of previous findings. Researchers planning replication studies often use the original study sample effect size as the basis for sample size planning. However, this strategy ignores uncertainty and publication bias in estimated effect sizes, resulting in overly optimistic calculations. A psychologist who intends to obtain power of .80 in the replication study, and performs calculations accordingly, may have an actual power lower than .80. We performed simulations to reveal the magnitude of the difference between actual and intended power based on common sample size planning strategies and assessed the performance of methods that aim to correct for effect size uncertainty and/or bias. Our results imply that even if original studies reflect actual phenomena and were conducted in the absence of questionable research practices, popular approaches to designing replication studies may result in a low success rate, especially if the original study is underpowered. Methods correcting for bias and/or uncertainty generally had higher actual power, but were not a panacea for an underpowered original study. Thus, it becomes imperative that 1) original studies are adequately powered and 2) replication studies are designed with methods that are more likely to yield the intended level of power.
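The core phenomenon can be reproduced in a few lines: when only significant original studies are published and their observed effects are taken at face value, replications planned for 80% power fall well short. The numbers below are illustrative.

```python
# Sketch: actual power of replications planned from published (significant)
# original effect sizes, ignoring uncertainty and publication bias.
# True effect, original sample size, and alpha are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
d_true, n_orig = 0.3, 40                       # true effect; original n per group
d_obs = rng.normal(d_true, np.sqrt(2 / n_orig), 100_000)

# publication filter: only significant originals are seen (normal approx.)
d_pub = d_obs[d_obs > norm.ppf(0.975) * np.sqrt(2 / n_orig)]

# per-group replication n planned for 80% power at the face-value estimate
n_rep = 2 * ((norm.ppf(0.975) + norm.ppf(0.80)) / d_pub) ** 2

# actual power of those replications at the true effect
actual = 1 - norm.cdf(norm.ppf(0.975) - d_true * np.sqrt(n_rep / 2))
print(f"mean actual power: {actual.mean():.2f} (intended 0.80)")
```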
Sample size calculation for a proof of concept study.
Yin, Yin
2002-05-01
Sample size calculation is vital for a confirmatory clinical trial, since the regulatory agencies require the probability of making a Type I error to be small, usually less than 0.05 or 0.025. However, the importance of sample size calculation for studies conducted by a pharmaceutical company for internal decision making, e.g., a proof of concept (PoC) study, has not received enough attention. This article introduces a Bayesian method that identifies the information required for planning a PoC study and the process of sample size calculation. The results are presented in terms of the relationships between the regulatory requirements, the probability of reaching the regulatory requirements, the goalpost for PoC, and the sample size used for PoC.
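In the spirit of this Bayesian framing, the probability of eventually meeting a regulatory requirement can be computed by averaging the power of the planned confirmatory test over a prior on the effect (an "assurance"-style calculation); the prior and design values below are illustrative, not the paper's method in detail.

```python
# Sketch: assurance = power of the confirmatory test averaged over a prior
# on the standardized effect. Prior and design values are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)
prior_mean, prior_sd = 0.25, 0.15      # prior on the effect at the PoC stage
n_per_arm, alpha = 150, 0.025          # planned confirmatory design

delta = rng.normal(prior_mean, prior_sd, 100_000)   # draws from the prior
power = 1 - norm.cdf(norm.ppf(1 - alpha) - delta * np.sqrt(n_per_arm / 2))
print(f"probability of meeting the regulatory requirement: {power.mean():.2f}")
```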
Using Monte Carlo Simulations to Determine Power and Sample Size for Planned Missing Designs
ERIC Educational Resources Information Center
Schoemann, Alexander M.; Miller, Patrick; Pornprasertmanit, Sunthud; Wu, Wei
2014-01-01
Planned missing data designs allow researchers to increase the amount and quality of data collected in a single study. Unfortunately, the effect of planned missing data designs on power is not straightforward. Under certain conditions using a planned missing design will increase power, whereas in other situations using a planned missing design…
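The simulation logic the article applies to planned missing designs has a simple generic shape: simulate data under the assumed model, analyze each sample, and tally rejections. The sketch below shows that shape for a plain two-group comparison; a planned-missing version would additionally impose the missingness pattern and use an estimator that accommodates it (e.g., FIML in an SEM package).

```python
# Sketch: the generic shape of a Monte Carlo power analysis.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)

def mc_power(n_per_group, effect, reps=5000, alpha=0.05):
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n_per_group)       # control group
        b = rng.normal(effect, 1.0, n_per_group)    # treatment group
        hits += ttest_ind(a, b).pvalue < alpha
    return hits / reps

for n in (50, 100, 150):
    print(n, mc_power(n, effect=0.4))   # power ~0.80 near n = 100 per group
```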
Empirical Tests of Acceptance Sampling Plans
NASA Technical Reports Server (NTRS)
White, K. Preston, Jr.; Johnson, Kenneth L.
2012-01-01
Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics known to have binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, the results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that they provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than those of the corresponding attributes plans, dramatically so for instances of large skew. The tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
Lessio, Federico; Alma, Alberto
2006-04-01
The spatial distribution of the nymphs of Scaphoideus titanus Ball (Homoptera: Cicadellidae), the vector of grapevine flavescence dorée (Candidatus Phytoplasma vitis, 16Sr-V), was studied by applying Taylor's power law. Studies were conducted from 2002 to 2005 in organic and conventional vineyards of Piedmont, northern Italy. Minimum sample sizes and fixed-precision stop lines were calculated to develop appropriate sampling plans, and model validation was performed on independent field data with Resampling Validation of Sample Plans (RVSP) software. The nymphal distribution, analyzed via Taylor's power law, was aggregated, with b = 1.49. A sample of 32 plants was adequate at low pest densities for a precision level of D0 = 0.30, but a more accurate estimate (D0 = 0.10) required a sample size of 292 plants. Green's fixed-precision stop lines appear more suitable for field sampling: RVSP simulations of this sampling plan showed precision levels very close to the desired ones. However, at a prefixed precision level of 0.10 sampling would become too time-consuming, whereas a precision level of 0.25 is easily achievable. How these results could influence the correct application of the compulsory control of S. titanus and flavescence dorée in Italy is discussed.
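For fixed-sample (rather than sequential) planning, Taylor's power law gives the required sample size directly as n = a·m^(b−2)/D²; a sketch using the reported b = 1.49 with an assumed coefficient a and mean density m:

```python
# Sketch: minimum fixed-sample size from Taylor's power law (s^2 = a m^b),
# n = a * m**(b - 2) / D**2. b = 1.49 is from the abstract; a and the mean
# density m are illustrative assumptions.
def n_required(m, a, b, D):
    return a * m ** (b - 2) / D ** 2

a, b = 3.0, 1.49
for D in (0.30, 0.10):
    print(f"D={D}: n = {n_required(0.5, a, b, D):.0f} plants at m = 0.5/plant")
# note the 9-fold increase from D=0.30 to D=0.10, mirroring 32 vs 292 plants
```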
Guo, Jiin-Huarng; Luh, Wei-Ming
2009-05-01
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
Lai, Keke; Kelley, Ken
2011-06-01
In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. © 2011 American Psychological Association
Internal pilots for a class of linear mixed models with Gaussian and compound symmetric data
Gurka, Matthew J.; Coffey, Christopher S.; Muller, Keith E.
2015-01-01
An internal pilot design uses an interim sample size analysis, without interim data analysis, to adjust the final number of observations. The approach helps to choose a sample size sufficiently large to achieve the desired statistical power, but not so large as to waste money and time. We report on recent research in cerebral vascular tortuosity (curvature in three dimensions) that would benefit greatly from internal pilots due to uncertainty in the parameters of the covariance matrix used for study planning. Unfortunately, observations correlated across the four regions of the brain and small sample sizes preclude using existing methods. However, as in a wide range of medical imaging studies, tortuosity data have no missing or mistimed data, a factorial within-subject design, the same between-subject design for all responses, and a Gaussian distribution with compound symmetry. For such restricted models, we extend exact, small-sample univariate methods for internal pilots to linear mixed models with any between-subject design (not just two groups). Planning a new tortuosity study illustrates how the new methods help to avoid sample sizes that are too small or too large while still controlling the type I error rate. PMID:17318914
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
Heidel, R Eric
2016-01-01
Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
Sample Size in Qualitative Interview Studies: Guided by Information Power.
Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit
2015-11-27
Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds that is relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model in which these elements of information and their relevant dimensions are related to information power, and we discuss the application of this model in the planning and data collection of a qualitative study. © The Author(s) 2015.
Lee, K V; Moon, R D; Burkness, E C; Hutchison, W D; Spivak, M
2010-08-01
The parasitic mite Varroa destructor Anderson & Trueman (Acari: Varroidae) is arguably the most detrimental pest of the European-derived honey bee, Apis mellifera L. Unfortunately, beekeepers lack a standardized sampling plan to make informed treatment decisions. Based on data from 31 commercial apiaries, we developed sampling plans for use by beekeepers and researchers to estimate the density of mites in individual colonies or whole apiaries. Beekeepers can estimate a colony's mite density with a chosen level of precision by dislodging mites from approximately 300 adult bees taken from one brood box frame in the colony, and they can extrapolate to mite density on a colony's adults and pupae combined by doubling the number of mites on adults. For sampling whole apiaries, beekeepers can repeat the process in each of n = 8 colonies, regardless of apiary size. Researchers desiring greater precision can estimate mite density in an individual colony by examining three 300-bee sample units. Extrapolation to density on adults and pupae may require independent estimates of the numbers of adults and pupae and of their respective mite densities. Researchers can estimate apiary-level mite density by taking one 300-bee sample unit per colony, but should do so from a variable number of colonies, depending on apiary size. These practical sampling plans will allow beekeepers and researchers to quantify mite infestation levels and enhance understanding and management of V. destructor.
A note on sample size calculation for mean comparisons based on noncentral t-statistics.
Chow, Shein-Chung; Shao, Jun; Wang, Hansheng
2002-11-01
One-sample and two-sample t-tests are commonly used in analyzing data from clinical trials in comparing mean responses from two drug products. During the planning stage of a clinical study, a crucial step is the sample size calculation, i.e., the determination of the number of subjects (patients) needed to achieve a desired power (e.g., 80%) for detecting a clinically meaningful difference in the mean drug responses. Based on noncentral t-distributions, we derive some sample size calculation formulas for testing equality, testing therapeutic noninferiority/superiority, and testing therapeutic equivalence, under the popular one-sample design, two-sample parallel design, and two-sample crossover design. Useful tables are constructed and some examples are given for illustration.
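In the same spirit as the paper's formulas, the required n for testing equality in a two-sample parallel design can be found by iterating the exact noncentral-t power function; a sketch:

```python
# Sketch: power-based sample size for a two-sample test of equality via the
# noncentral t-distribution.
from scipy.stats import t, nct

def power_two_sample(n, delta, alpha=0.05):
    """n subjects per group; delta = (mu1 - mu2) / sigma."""
    df = 2 * n - 2
    crit = t.ppf(1 - alpha / 2, df)
    ncp = delta * (n / 2) ** 0.5          # noncentrality parameter
    return 1 - nct.cdf(crit, df, ncp) + nct.cdf(-crit, df, ncp)

n = 2
while power_two_sample(n, delta=0.5) < 0.80:
    n += 1
print(n)   # per-group n for 80% power at delta = 0.5 (about 64)
```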
Planning Skills in Autism Spectrum Disorder across the Lifespan: A Meta-Analysis and Meta-Regression
ERIC Educational Resources Information Center
Olde Dubbelink, Linda M. E.; Geurts, Hilde M.
2017-01-01
Individuals with an autism spectrum disorder (ASD) are thought to encounter planning difficulties, but experimental research regarding the mastery of planning in ASD is inconsistent. By means of a meta-analysis of 50 planning studies with a combined sample size of 1755 individuals with and 1642 without ASD, we aim to determine whether planning…
Planned Missing Data Designs with Small Sample Sizes: How Small Is Too Small?
ERIC Educational Resources Information Center
Jia, Fan; Moore, E. Whitney G.; Kinai, Richard; Crowe, Kelly S.; Schoemann, Alexander M.; Little, Todd D.
2014-01-01
Utilizing planned missing data (PMD) designs (e.g., 3-form surveys) enables researchers to ask participants fewer questions during the data collection process. An important question, however, is just how few participants are needed to effectively employ planned missing data designs in research studies. This article explores this question by using…
Nomogram for sample size calculation on a straightforward basis for the kappa statistic.
Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo
2014-09-01
Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, in sample size calculation it seems reasonable to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic were produced, together with nomograms that eliminate the inconvenience of using a mathematical formula. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
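The paper's formula rests on the common correlation model and a goodness-of-fit statistic, and is not reproduced here. As a simpler stand-in that captures the same idea of sizing on a plain proportion of agreement rather than on kappa, the sketch below chooses n so that the confidence interval for the agreement proportion has a given half-width; all inputs are assumptions.

```python
# Sample size so that a confidence interval for a simple proportion of
# agreement has half-width d (normal approximation) -- a stand-in for the
# paper's common-correlation-model derivation, not a reproduction of it.
from math import ceil
from scipy import stats

def n_for_agreement(p_agree, half_width, conf=0.95):
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    return ceil(z**2 * p_agree * (1 - p_agree) / half_width**2)

# e.g. expect 85% agreement and want the interval no wider than +/-5 points
print(n_for_agreement(0.85, 0.05))   # -> 196 rated subjects
```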
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
Using known map category marginal frequencies to improve estimates of thematic map accuracy
NASA Technical Reports Server (NTRS)
Card, D. H.
1982-01-01
By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for simple random sampling and for map-category-stratified sampling are identical permits a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.
Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.
Rochon, K; Scoles, G A; Lysyk, T J
2012-03-01
A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
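For readers who want the mechanics, the sketch below implements the standard fixed-precision calculation from Taylor's power law, s^2 = a * m^b, the mean-variance model fitted above: with precision D defined as SE/mean, n = a * m^(b-2) / D^2. The coefficients a and b are hypothetical placeholders, since the abstract does not report the fitted Taylor values.

```python
# Fixed-precision sample size (number of 10-m2 quadrats) from Taylor's
# power law s^2 = a * m^b; D is the target SE/mean ratio. The coefficients
# are hypothetical, for illustration only.

def taylor_sample_size(mean, a, b, precision=0.25):
    return a * mean ** (b - 2) / precision ** 2

for m in (0.05, 0.2, 1.0):                 # ticks per 10-m2 quadrat
    print(m, round(taylor_sample_size(m, a=2.0, b=1.4)))
```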
Frequency of Bolton tooth-size discrepancies among orthodontic patients.
Freeman, J E; Maskeroni, A J; Lorton, L
1996-07-01
The purpose of this study was to determine the percentage of orthodontic patients who present with an interarch tooth-size discrepancy likely to affect treatment planning or results. The Bolton tooth-size discrepancies of 157 patients accepted for treatment in an orthodontic residency program were evaluated for the frequency and the magnitude of deviation from Bolton's mean. Discrepancies outside 2 SD were considered potentially significant with regard to treatment planning and treatment results. Although the mean of the sample was nearly identical to Bolton's, the range and standard deviation varied considerably, with a large percentage of the orthodontic patients having discrepancies outside Bolton's 2 SD. With such a high frequency of significant discrepancies, it would seem prudent to routinely perform a tooth-size analysis and incorporate the findings into orthodontic treatment planning.
The Statistical Power of Planned Comparisons.
ERIC Educational Resources Information Center
Benton, Roberta L.
Basic principles underlying statistical power are examined; and issues pertaining to effect size, sample size, error variance, and significance level are highlighted via the use of specific hypothetical examples. Analysis of variance (ANOVA) and related methods remain popular, although other procedures sometimes have more statistical power against…
Brand, Christopher J.
2009-01-01
Executive Summary: This Surveillance Plan (Plan) describes plans for conducting surveillance of wild birds in the United States and its Territories and Freely-Associated States to provide for early detection of the introduction of the H5N1 Highly Pathogenic Avian Influenza (HPAI) subtype of the influenza A virus by migratory birds during the 2009 surveillance year, spanning the period of April 1, 2009 - March 31, 2010. The Plan represents a continuation of surveillance efforts begun in 2006 under the Interagency Strategic Plan for the Early Detection of H5N1 Highly Pathogenic Avian Influenza in Wild Migratory Birds (U.S. Department of Agriculture and U.S. Department of the Interior, 2006). The Plan sets forth sampling plans by: region, target species or species groups to be sampled, locations of sampling, sample sizes, and sampling approaches and methods. This Plan will be reviewed annually and modified as appropriate for subsequent surveillance years based on evaluation of information from previous years of surveillance, changing patterns and threats of H5N1 HPAI, and changes in funding availability for avian influenza surveillance. Specific sampling strategies will be developed accordingly within each of six regions, defined here as Alaska, Hawaiian/Pacific Islands, Lower Pacific Flyway (Washington, Oregon, California, Idaho, Nevada, Arizona), Central Flyway, Mississippi Flyway, and Atlantic Flyway.
Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.
McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M
2015-03-01
Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
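The optima derived in the paper depend jointly on the ICCs for effects and costs, the cost-effect correlations, the variance ratio, and the budget; none of that machinery is reproduced here. As a simpler illustration of the same budget logic, the sketch below computes the classical cost-minimizing cluster size for a single continuous outcome, with hypothetical cost figures.

```python
# Classical cost-optimal number of persons per cluster for one continuous
# outcome: k* = sqrt((c_cluster / c_person) * (1 - icc) / icc). A simplified
# stand-in for the paper's optima; all inputs are hypothetical.
from math import sqrt, ceil

def optimal_cluster_size(c_cluster, c_person, icc):
    return sqrt((c_cluster / c_person) * (1 - icc) / icc)

def clusters_for_budget(budget, c_cluster, c_person, k):
    return budget // (c_cluster + c_person * k)   # clusters the budget funds

k = ceil(optimal_cluster_size(c_cluster=500, c_person=25, icc=0.05))
print(k, clusters_for_budget(50_000, 500, 25, k))  # -> 20 persons, 50 clusters
```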
7 CFR 42.102 - Definitions, general.
Code of Federal Regulations, 2010 CFR
2010-01-01
... REGULATIONS STANDARDS FOR CONDITION OF FOOD CONTAINERS Definitions § 42.102 Definitions, general. For the... plan consists of first and total sample sizes with associated acceptance and rejection criteria. The... collection of filled food containers of the same size, type, and style. The term shall mean “inspection lot...
Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning
Ichnowski, Jeffrey; Prins, Jan F.; Alterovitz, Ron
2014-01-01
We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU’s cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot’s configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot. PMID:25419474
Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz
2017-01-01
Sample size determination is usually taught on a theoretical basis and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than using lectures only. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only versus a lecture combined with using a smartphone application to calculate sample sizes, explored factors affecting the level of post-test score after training in sample size calculation, and investigated participants' attitude toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups: 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures using a smartphone application were supplied to the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum of 10 points, 95% CI: 2.4 - 2.9) than the participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those who did not have a plan to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of the participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.
Methodological issues with adaptation of clinical trial design.
Hung, H M James; Wang, Sue-Jane; O'Neill, Robert T
2006-01-01
Adaptation of clinical trial design generates many issues that have not been resolved for practical applications, though statistical methodology has advanced greatly. This paper focuses on some methodological issues. In one type of adaptation, such as sample size re-estimation, only the postulated value of a parameter for planning the trial size may be altered. In another type, the originally intended hypothesis for testing may be modified using the internal data accumulated at an interim time of the trial, such as changing the primary endpoint and dropping a treatment arm. For sample size re-estimation, we make a contrast between an adaptive test weighting the two-stage test statistics with the statistical information given by the original design and the original sample mean test with a properly corrected critical value. We point out the difficulty in planning a confirmatory trial based on the crude information generated by exploratory trials. With regard to selecting a primary endpoint, we argue that the selection process that allows switching from one endpoint to the other with the internal data of the trial is not very likely to gain a power advantage over the simple process of selecting one from the two endpoints by testing them with an equal split of alpha (Bonferroni adjustment). For dropping a treatment arm, distributing the remaining sample size of the discontinued arm to other treatment arms can substantially improve the statistical power of identifying a superior treatment arm in the design. A common difficult methodological issue is that of how to select an adaptation rule in the trial planning stage. Pre-specification of the adaptation rule is important for practical reasons. Changing the originally intended hypothesis for testing with the internal data raises great concern among clinical trial researchers.
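The contrast drawn above between the weighted two-stage test and the corrected sample-mean test can be made concrete for the weighted case: stage-wise z-statistics are combined with weights fixed by the originally planned stage sizes, so the combination stays standard normal under the null even if the second-stage size is re-estimated. The planned sizes and z-values below are illustrative.

```python
# Inverse-normal combination of two-stage z-statistics with pre-fixed
# weights taken from the *originally planned* stage sizes; this preserves
# the type I error rate under data-driven re-estimation of the stage 2 size.
from math import sqrt
from scipy import stats

def weighted_two_stage_z(z1, z2, n1_planned, n2_planned):
    w1 = sqrt(n1_planned / (n1_planned + n2_planned))
    w2 = sqrt(n2_planned / (n1_planned + n2_planned))
    return w1 * z1 + w2 * z2       # ~ N(0, 1) under H0

z = weighted_two_stage_z(z1=1.1, z2=1.7, n1_planned=100, n2_planned=100)
print(round(z, 3), z > stats.norm.ppf(0.975))   # 1.98, True
```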
Sample size considerations for clinical research studies in nuclear cardiology.
Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J
2015-12-01
Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.
Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi
2010-05-01
To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
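A Monte Carlo assessment of this kind is straightforward to sketch: repeatedly draw subsamples of size n from the full tree census and track how the sampling error of the mean shrinks as n grows, choosing the smallest n beyond which the gain is negligible. The synthetic Fd values below merely stand in for the 58-tree data set.

```python
# Monte Carlo estimate of how subsample size affects the error of mean sap
# flux; the lognormal 'census' is a synthetic stand-in for the 58 trees.
import numpy as np

rng = np.random.default_rng(1)
fd_all = rng.lognormal(mean=0.0, sigma=0.5, size=58)
true_mean = fd_all.mean()

for n in (5, 10, 15, 20, 30):
    means = np.array([rng.choice(fd_all, size=n, replace=False).mean()
                      for _ in range(5000)])
    rel_err = np.percentile(np.abs(means / true_mean - 1), 95)
    print(f"n={n:2d}  95th-percentile relative error = {rel_err:.2f}")
```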
ERIC Educational Resources Information Center
Baughman, Steven A., Ed.; Curry, Elizabeth A., Ed.
As interlibrary cooperation has proliferated in the last several decades, multitype library organizations and systems have emerged as important forces in librarianship. The need for thoughtful and organized strategic planning is an important cornerstone for the success of organizations of all sizes. Part of a project by the Interlibrary…
Antecedents and Consequences of Retirement Planning and Decision-Making: A Meta-Analysis and Model
ERIC Educational Resources Information Center
Topa, Gabriela; Moriano, Juan Antonio; Depolo, Marco; Alcover, Carlos-Maria; Morales, J. Francisco
2009-01-01
In this study, meta-analytic procedures were used to examine the relationships between retirement planning, retirement decision and their antecedent and consequences. Our review of the literature generated 341 independent samples obtained from 99 primary studies with 188,222 participants. A small effect size (ES) for antecedents of retirement…
Optimal sample sizes for the design of reliability studies: power consideration.
Shieh, Gwowen
2014-09-01
Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
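The exact approach advocated above can be sketched with the scaled-F distribution of the ANOVA mean-square ratio in the one-way random-effects model: for g groups of size k, MSB/MSW is (1 + k*rho/(1 - rho)) times a central F variate, which yields the power below. This is the standard exact construction, not the paper's specific algorithms, and the design values are illustrative.

```python
# Exact power for H0: rho = rho0 vs H1: rho = rho1 in the one-way
# random-effects model, using the scaled-F distribution of MSB/MSW.
from scipy import stats

def icc_test_power(rho0, rho1, g, k, alpha=0.05):
    lam0 = 1 + k * rho0 / (1 - rho0)
    lam1 = 1 + k * rho1 / (1 - rho1)
    fcrit = stats.f.ppf(1 - alpha, g - 1, g * (k - 1))
    return stats.f.sf(fcrit * lam0 / lam1, g - 1, g * (k - 1))

print(icc_test_power(rho0=0.4, rho1=0.7, g=20, k=4))   # 20 groups of size 4
```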
Neuromuscular dose-response studies: determining sample size.
Kopman, A F; Lien, C A; Naguib, M
2011-02-01
Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly greater sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
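The headline figure is easy to reproduce: with a COV of 25% and an allowable error of ±15% in the ED₅₀ (a shift of 0.6 SD), a two-tailed one-sample t-test at 80% power needs 24 subjects. A sketch using the noncentral t-distribution:

```python
# Smallest n for a one-sample two-tailed t-test detecting a shift of
# (allowable error / COV) standard deviations; reproduces the n = 24 above.
from scipy import stats

def n_one_sample(cov=0.25, allowable=0.15, alpha=0.05, power=0.80):
    n = 2
    while True:
        df = n - 1
        nc = (allowable / cov) * n ** 0.5
        tcrit = stats.t.ppf(1 - alpha / 2, df)
        if (1 - stats.nct.cdf(tcrit, df, nc)
                + stats.nct.cdf(-tcrit, df, nc)) >= power:
            return n
        n += 1

print(n_one_sample())   # -> 24
```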
Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach
ERIC Educational Resources Information Center
Rotondi, Michael A.; Donner, Allan
2009-01-01
The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
Simulating recurrent event data with hazard functions defined on a total time scale.
Jahn-Eimermacher, Antje; Ingel, Katharina; Ozga, Ann-Kathrin; Preussler, Stella; Binder, Harald
2015-03-08
In medical studies with recurrent event data, a total time scale perspective is often needed to adequately reflect disease mechanisms. This means that the hazard process is defined on the time since some starting point, e.g. the beginning of some disease, in contrast to a gap time scale where the hazard process restarts after each event. While techniques such as the Andersen-Gill model have been developed for analyzing data from a total time perspective, techniques for the simulation of such data, e.g. for sample size planning, have not been investigated so far. We have derived a simulation algorithm covering the Andersen-Gill model that can be used for sample size planning in clinical trials as well as the investigation of modeling techniques. Specifically, we allow for fixed and/or random covariates and an arbitrary hazard function defined on a total time scale. Furthermore, we take into account that individuals may be temporarily insusceptible to a recurrent incidence of the event. The methods are based on the distributions of the inter-event times conditional on the total time of the preceding event or study start. Closed form solutions are provided for common distributions. The derived methods have been implemented in a readily accessible R script. The proposed techniques are illustrated by planning the sample size for a clinical trial with complex recurrent event data. The required sample size is shown to be affected not only by censoring and intra-patient correlation, but also by the presence of risk-free intervals. This demonstrates the need for a simulation algorithm that particularly allows for complex study designs where no analytical sample size formulas might exist. The derived simulation algorithm is seen to be useful for the simulation of recurrent event data that follow an Andersen-Gill model. Next to the use of a total time scale, it allows for intra-patient correlation and risk-free intervals as are often observed in clinical trial data. Its application therefore allows the simulation of data that closely resemble real settings and thus can improve the use of simulation studies for designing and analysing studies.
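The R script itself is not reproduced here, but the core idea, simulating a hazard defined on the total time scale while honoring temporary insusceptibility, can be sketched by thinning a dominating Poisson process. The Weibull-type hazard, its parameters, and the risk-free window below are illustrative assumptions, not the paper's case study.

```python
# Recurrent events on a *total* time scale via thinning: candidate points
# from a dominating homogeneous Poisson process (rate lam_max) are accepted
# with probability hazard(t)/lam_max, and a risk-free interval follows each
# accepted event. lam_max must bound the hazard on [0, follow_up].
import numpy as np

rng = np.random.default_rng(7)

def hazard(t, shape=1.3, scale=2.0):
    return (shape / scale) * (t / scale) ** (shape - 1)   # total-time hazard

def simulate_subject(follow_up=10.0, risk_free=0.5, lam_max=2.0):
    events, t, blocked_until = [], 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)               # next candidate point
        if t > follow_up:
            return events
        if t >= blocked_until and rng.random() < hazard(t) / lam_max:
            events.append(t)                              # accepted event
            blocked_until = t + risk_free                 # insusceptible period

print([len(simulate_subject()) for _ in range(5)])
```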
Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.
Wang, Zuozhen
2018-01-01
The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial from a relatively small sample. In this paper, sample size estimates for comparing two parallel-design arms on continuous data by a bootstrap procedure are presented for various test types (inequality, non-inferiority, superiority, and equivalence). Sample size calculations by mathematical formulas (under the normal distribution assumption) for the identical data are also carried out. The power difference between the two calculation methods is acceptably small for all test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined using the two methods on data that violate the normal distribution assumption. To accommodate the features of the data, the nonparametric Wilcoxon test was applied to compare the two groups during bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation at the outset, and that the same statistical method as will be used in the subsequent statistical analysis be employed for each bootstrap sample during bootstrap sample size estimation, provided historical data are available that are well representative of the population to which the proposed trial plans to extrapolate.
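The bootstrap power loop described above is short to sketch: resample n per arm with replacement from representative historical data, analyse each replicate with the same test planned for the trial (here the Wilcoxon rank-sum test), and take the rejection fraction as the power estimate; the smallest n reaching the target power is the answer. The lognormal historical data are synthetic stand-ins.

```python
# Bootstrap power estimation with the same test planned for the analysis
# (Wilcoxon rank-sum); the historical data here are synthetic and non-normal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
hist_a = rng.lognormal(0.0, 0.8, size=120)   # stand-in historical arm A
hist_b = rng.lognormal(0.3, 0.8, size=120)   # stand-in historical arm B

def bootstrap_power(n_per_arm, reps=2000, alpha=0.05):
    hits = 0
    for _ in range(reps):
        a = rng.choice(hist_a, n_per_arm, replace=True)
        b = rng.choice(hist_b, n_per_arm, replace=True)
        if stats.ranksums(a, b).pvalue < alpha:
            hits += 1
    return hits / reps

for n in (40, 60, 80, 100):
    print(n, bootstrap_power(n))
```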
An internal pilot design for prospective cancer screening trials with unknown disease prevalence.
Brinton, John T; Ringham, Brandy M; Glueck, Deborah H
2015-10-13
For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population, and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample size.
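A minimal sketch of the internal pilot step, assuming a two-arm comparison of means: after accruing part of the planned sample, re-estimate the outcome variance and recompute the per-group size for the originally specified effect. The z-approximation is used for brevity; the paper's small-sample critical-value adjustment and its prevalence update for paired screening tests are not reproduced.

```python
# Internal pilot: recompute n per group from the pilot variance estimate,
# keeping the originally specified detectable difference delta.
from math import ceil
import numpy as np
from scipy import stats

def reestimated_n(pilot_a, pilot_b, delta, alpha=0.05, power=0.80):
    s2 = (np.var(pilot_a, ddof=1) + np.var(pilot_b, ddof=1)) / 2
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    return ceil(2 * s2 * (za + zb) ** 2 / delta ** 2)

rng = np.random.default_rng(11)
print(reestimated_n(rng.normal(0, 12, 30), rng.normal(0, 12, 30), delta=6.0))
```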
Herzog, Sereina A; Low, Nicola; Berghold, Andrea
2015-06-19
The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning a RCT.
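To see how an assumed control-arm PID risk and relative risk translate into a per-group sample size, here is the standard normal-approximation calculation for two proportions; the numbers are placeholders, not those of the trial or model discussed above.

```python
# Per-group n for comparing two proportions (normal approximation), driven
# by the control-arm event risk and the relative risk under screening.
from math import ceil
from scipy import stats

def n_per_group(p_control, rr, alpha=0.05, power=0.80):
    p1, p2 = p_control, p_control * rr
    pbar = (p1 + p2) / 2
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    num = (za * (2 * pbar * (1 - pbar)) ** 0.5
           + zb * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(n_per_group(p_control=0.03, rr=0.55))  # hypothetical: screening halves risk
```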
Statistical power analysis in wildlife research
Steidl, R.J.; Hayes, J.P.
1997-01-01
Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true. We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
Virtual planning in orthognathic surgery.
Stokbro, K; Aagaard, E; Torkov, P; Bell, R B; Thygesen, T
2014-08-01
Numerous publications regarding virtual surgical planning protocols have been published, most reporting only one or two case reports to emphasize the hands-on planning. None have systematically reviewed the data published from clinical trials. This systematic review analyzes the precision and accuracy of three-dimensional (3D) virtual surgical planning of orthognathic procedures compared with the actual surgical outcome following orthognathic surgery reported in clinical trials. A systematic search of the current literature was conducted to identify clinical trials with a sample size of more than five patients, comparing the virtual surgical plan with the actual surgical outcome. Search terms revealed a total of 428 titles, out of which only seven articles were included, with a combined sample size of 149 patients. Data were presented in three different ways: intra-class correlation coefficient, 3D surface area with a difference <2mm, and linear and angular differences in three dimensions. Success criteria were set at 2mm mean difference in six articles; 125 of the 133 patients included in these articles were regarded as having had a successful outcome. Due to differences in the presentation of data, meta-analysis was not possible. Virtual planning appears to be an accurate and reproducible method for orthognathic treatment planning. A more uniform presentation of the data is necessary to allow the performance of a meta-analysis. Currently, the software system most often used for 3D virtual planning in clinical trials is SimPlant (Materialise). More independent clinical trials are needed to further validate the precision of virtual planning. Copyright © 2014 International Association of Oral and Maxillofacial Surgeons. All rights reserved.
ERIC Educational Resources Information Center
Chou, Yueh-Ching; Lee, Yue-Chune; Lin, Li-Chan; Kroger, Teppo; Chang, Ai-Ning
2009-01-01
A structured interview survey was conducted in a major city in Taiwan to explore and compare older and younger family primary caregivers' well being and their future caregiving plans for these adults with intellectual disability. The sample size was 315 caregivers who were 55 years or older and who cared for adults with intellectual disability and…
Code of Federal Regulations, 2014 CFR
2014-01-01
... percent, one-sided confidence limit and a sample size of n1. (2) For an energy consumption standard (ECS..., where ECS is the energy consumption standard and t is a statistic based on a 97.5 percent, one-sided...
Code of Federal Regulations, 2013 CFR
2013-01-01
... percent, one-sided confidence limit and a sample size of n1. (2) For an energy consumption standard (ECS..., where ECS is the energy consumption standard and t is a statistic based on a 97.5 percent, one-sided...
Code of Federal Regulations, 2012 CFR
2012-01-01
... percent, one-sided confidence limit and a sample size of n1. (2) For an energy consumption standard (ECS..., where ECS is the energy consumption standard and t is a statistic based on a 97.5 percent, one-sided...
Lara, Jesus R; Hoddle, Mark S
2015-08-01
Oligonychus perseae Tuttle, Baker, & Abatiello is a foliar pest of 'Hass' avocados [Persea americana Miller (Lauraceae)]. The recommended action threshold is 50-100 motile mites per leaf, but this count range and other ecological factors associated with O. perseae infestations limit the application of enumerative sampling plans in the field. Consequently, a comprehensive modeling approach was implemented to compare the practical application of various binomial sampling models for decision-making of O. perseae in California. An initial set of sequential binomial sampling models were developed using three mean-proportion modeling techniques (i.e., Taylor's power law, maximum likelihood, and an empirical model) in combination with two-leaf infestation tally thresholds of either one or two mites. Model performance was evaluated using a robust mite count database consisting of >20,000 Hass avocado leaves infested with varying densities of O. perseae and collected from multiple locations. Operating characteristic and average sample number results for sequential binomial models were used as the basis to develop and validate a standardized fixed-size binomial sampling model with guidelines on sample tree and leaf selection within blocks of avocado trees. This final validated model requires a leaf sampling cost of 30 leaves and takes into account the spatial dynamics of O. perseae to make reliable mite density classifications for a 50-mite action threshold. Recommendations for implementing this fixed-size binomial sampling plan to assess densities of O. perseae in commercial California avocado orchards are discussed. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Kabaluk, J Todd; Binns, Michael R; Vernon, Robert S
2006-06-01
Counts of green peach aphid, Myzus persicae (Sulzer) (Hemiptera: Aphididae), in potato, Solanum tuberosum L., fields were used to evaluate the performance of the sampling plan from a pest management company. The counts were further used to develop a binomial sampling method, and both full count and binomial plans were evaluated using operating characteristic curves. Taylor's power law provided a good fit of the data (r2 = 0.95), with the relationship between the variance (s2) and mean (m) as ln(s2) = 1.81(+/- 0.02) + 1.55(+/- 0.01) ln(m). A binomial sampling method was developed using the empirical model ln(m) = c + dln(-ln(1 - P(T))), to which the data fit well for tally numbers (T) of 0, 1, 3, 5, 7, and 10. Although T = 3 was considered the most reasonable given its operating characteristics and presumed ease of classification above or below critical densities (i.e., action thresholds) of one and 10 M. persicae per leaf, the full count method is shown to be superior. The mean number of sample sites per field visit by the pest management company was 42 +/- 19, with more than one-half (54%) of the field visits involving sampling 31-50 sample sites, which was acceptable in the context of operating characteristic curves for a critical density of 10 M. persicae per leaf. Based on operating characteristics, actual sample sizes used by the pest management company can be reduced by at least 50%, on average, for a critical density of 10 M. persicae per leaf. For a critical density of one M. persicae per leaf used to avert the spread of potato leaf roll virus, sample sizes from 50 to 100 were considered more suitable.
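The classification logic can be sketched by inverting the empirical model quoted above: for tally threshold T and fitted constants c and d, the proportion of leaves with more than T aphids that corresponds to a critical mean density m is P_T = 1 - exp(-exp((ln m - c) / d)). The Taylor coefficients below are the reported ones; c and d are hypothetical, since the abstract does not give the fitted values for T = 3.

```python
# Variance from the reported Taylor fit, and the action proportion from the
# empirical binomial model; c and d are hypothetical placeholders.
from math import log, exp

def taylor_var(m, a=exp(1.81), b=1.55):
    return a * m ** b                # ln(s2) = 1.81 + 1.55 ln(m), as reported

def action_proportion(m, c, d):
    return 1 - exp(-exp((log(m) - c) / d))

for m in (1.0, 10.0):                # the two critical densities used above
    print(m, round(taylor_var(m), 1),
          round(action_proportion(m, c=1.0, d=1.2), 3))
```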
Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie
2013-08-01
The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than a 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
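The core simulation idea is easy to reproduce: sample SDs are right-skewed underestimates of the population SD, so plugging a single observed SD into a sample size formula tends to underpower the trial. The population SD of 44 matches the value quoted above; the rest of the set-up is illustrative.

```python
# How often does a sample SD fall below the population SD? A quick
# demonstration of the downward bias that drives underpowered trials.
import numpy as np

rng = np.random.default_rng(5)
pop_sd = 44.0

for n in (5, 15, 50):
    sds = np.array([rng.normal(0, pop_sd, n).std(ddof=1)
                    for _ in range(10_000)])
    print(f"n={n:2d}  median sample SD = {np.median(sds):5.1f}  "
          f"P(sample SD < {pop_sd:.0f}) = {(sds < pop_sd).mean():.2f}")
```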
Minetti, Andrea; Riera-Montes, Margarita; Nackers, Fabienne; Roederer, Thomas; Koudika, Marie Hortense; Sekkenes, Johanne; Taconet, Aurore; Fermon, Florence; Touré, Albouhary; Grais, Rebecca F; Checchi, Francesco
2012-10-12
Estimation of vaccination coverage (VC) at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered lot quality assurance sampling (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard errors of VC and ICC estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.
Pritchett, Yili; Jemiai, Yannis; Chang, Yuchiao; Bhan, Ishir; Agarwal, Rajiv; Zoccali, Carmine; Wanner, Christoph; Lloyd-Jones, Donald; Cannata-Andía, Jorge B; Thompson, Taylor; Appelbaum, Evan; Audhya, Paul; Andress, Dennis; Zhang, Wuyan; Solomon, Scott; Manning, Warren J; Thadhani, Ravi
2011-04-01
Chronic kidney disease is associated with a marked increase in risk for left ventricular hypertrophy and cardiovascular mortality compared with the general population. Therapy with vitamin D receptor activators has been linked with reduced mortality in chronic kidney disease and an improvement in left ventricular hypertrophy in animal studies. PRIMO (Paricalcitol capsules benefits in Renal failure Induced cardiac MOrbidity) is a multinational, multicenter randomized controlled trial to assess the effects of paricalcitol (a selective vitamin D receptor activator) on mild to moderate left ventricular hypertrophy in patients with chronic kidney disease. Subjects with mild-moderate chronic kidney disease are randomized to paricalcitol or placebo after confirming left ventricular hypertrophy using a cardiac echocardiogram. Cardiac magnetic resonance imaging is then used to assess left ventricular mass index at baseline, 24 and 48 weeks, which is the primary efficacy endpoint of the study. Because of limited prior data to estimate sample size, a maximum information group sequential design with sample size re-estimation is implemented to allow sample size adjustment based on the nuisance parameter estimated using the interim data. An interim efficacy analysis is planned at a pre-specified time point conditioned on the status of enrollment. The decision to increase sample size depends on the observed treatment effect. A repeated measures analysis model using available data at Weeks 24 and 48, with a backup model of an ANCOVA analyzing change from baseline to the final nonmissing observation, is pre-specified to evaluate the treatment effect. A gamma-family spending function is employed to control the family-wise Type I error rate, as stopping for success is planned in the interim efficacy analysis. If enrollment is slower than anticipated, the smaller sample size used in the interim efficacy analysis and the greater percentage of missing week 48 data might decrease the parameter estimation accuracy, either for the nuisance parameter or for the treatment effect, which might in turn affect the interim decision-making. The application of combining a group sequential design with sample size re-estimation in clinical trial design has the potential to improve efficiency and to increase the probability of trial success while ensuring integrity of the study.
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
21 CFR 161.173 - Canned wet pack shrimp in transparent or nontransparent containers.
Code of Federal Regulations, 2010 CFR
2010-04-01
... (dorsal tract, back vein, or sand vein). (ii) Deveined shrimp containing not less than 95 percent by...) Acceptable quality level (AQL). The maximum percent of defective sample units permitted in a lot that will be accepted approximately 95 percent of the time. (ii) Sampling plans: Acceptable Quality Level 6.5 Lot size...
Burkness, Eric C; Hutchison, W D
2009-10-01
Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
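With the parameters the authors settled on (lower boundary 0.05, upper boundary 0.15, alpha = beta = 0.1), Wald's SPRT decision lines follow directly from the standard binomial formulas; sampling continues while the cumulative count of infested plants stays between the two lines.

```python
# Wald SPRT boundaries for a binomial sampling plan: after n plants, treat
# if the infested count exceeds slope*n + h_upper, stop with "no treatment"
# below slope*n + h_lower, otherwise keep sampling.
from math import log

p0, p1, alpha, beta = 0.05, 0.15, 0.10, 0.10
denom = log(p1 / p0) + log((1 - p0) / (1 - p1))
slope = log((1 - p0) / (1 - p1)) / denom
h_upper = log((1 - beta) / alpha) / denom
h_lower = log(beta / (1 - alpha)) / denom

for n in (10, 20, 30):
    print(n, round(slope * n + h_lower, 2), round(slope * n + h_upper, 2))
```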
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, William A., E-mail: whall4@emory.edu; Winship Cancer Institute, Emory University, Atlanta, Georgia; Mikell, John L.
2013-05-01
Purpose: We assessed the accuracy of abdominal magnetic resonance imaging (MRI) for determining tumor size by comparing the preoperative contrast-enhanced T1-weighted gradient echo (3-dimensional [3D] volumetric interpolated breath-hold [VIBE]) MRI tumor size with pathologic specimen size. Methods and Materials: The records of 92 patients who had both preoperative contrast-enhanced 3D VIBE MRI images and detailed pathologic specimen measurements were available for review. Primary tumor size from the MRI was independently measured by a single diagnostic radiologist (P.M.) who was blinded to the pathology reports. Pathologic tumor measurements from gross specimens were obtained from the pathology reports. The maximum dimensions of tumor measured in any plane on the MRI and the gross specimen were compared. The median difference between the pathology sample and the MRI measurements was calculated. A paired t test was conducted to test for differences between the MRI and pathology measurements. The Pearson correlation coefficient was used to measure the association of disparity between the MRI and pathology sizes with the pathology size. Disparities relative to pathology size were also examined and tested for significance using a 1-sample t test. Results: The median patient age was 64.5 years. The primary site was pancreatic head in 81 patients, body in 4, and tail in 7. Three patients were American Joint Commission on Cancer stage IA, 7 stage IB, 21 stage IIA, 58 stage IIB, and 3 stage III. The 3D VIBE MRI underestimated tumor size by a median difference of 4 mm (range, −34-22 mm). The median largest tumor dimensions on MRI and pathology specimen were 2.65 cm (range, 1.5-9.5 cm) and 3.2 cm (range, 1.3-10 cm), respectively. Conclusions: Contrast-enhanced 3D VIBE MRI underestimates tumor size by 4 mm when compared with pathologic specimen. Advanced abdominal MRI sequences warrant further investigation for radiation therapy planning in pancreatic adenocarcinoma before routine integration into the treatment planning process.
Walters, Stephen J; Bonacho Dos Anjos Henriques-Cadby, Inês; Bortolami, Oscar; Flight, Laura; Hind, Daniel; Jacques, Richard M; Knox, Christopher; Nadin, Ben; Rothwell, Joanne; Surtees, Michael; Julious, Steven A
2017-03-20
Substantial amounts of public funds are invested in health research worldwide. Publicly funded randomised controlled trials (RCTs) often recruit participants at a slower than anticipated rate. Many trials fail to reach their planned sample size within the envisaged trial timescale and trial funding envelope. To review the consent, recruitment and retention rates for single and multicentre randomised controlled trials funded and published by the UK's National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme. HTA reports of individually randomised single or multicentre RCTs published from the start of 2004 to the end of April 2016 were reviewed. Information was extracted, relating to the trial characteristics, sample size, recruitment and retention by two independent reviewers. Target sample size and whether it was achieved; recruitment rates (number of participants recruited per centre per month) and retention rates (randomised participants retained and assessed with valid primary outcome data). This review identified 151 individually randomised RCTs from 787 NIHR HTA reports. The final recruitment target sample size was achieved in 56% (85/151) of the RCTs and more than 80% of the final target sample size was achieved for 79% of the RCTs (119/151). The median recruitment rate (participants per centre per month) was found to be 0.92 (IQR 0.43-2.79) and the median retention rate (proportion of participants with valid primary outcome data at follow-up) was estimated at 89% (IQR 79-97%). There is considerable variation in the consent, recruitment and retention rates in publicly funded RCTs. Investigators should bear this in mind at the planning stage of their study and not be overly optimistic about their recruitment projections.
Blinded and unblinded internal pilot study designs for clinical trials with count data.
Schneider, Simon; Schmidli, Heinz; Friede, Tim
2013-07-01
Internal pilot studies are a popular design feature to address uncertainties in the sample size calculations caused by vague information on nuisance parameters. Despite their popularity, only very recently have blinded sample size reestimation procedures for trials with count data been proposed and their properties systematically investigated. Although blinded procedures are favored by regulatory authorities, practical application is somewhat limited by fears that blinded procedures are prone to bias if the treatment effect was misspecified in the planning phase. Here, we compare unblinded and blinded procedures with respect to bias, error rates, and sample size distribution. We find that both procedures maintain the desired power and that the unblinded procedure is slightly liberal whereas the actual significance level of the blinded procedure is close to the nominal level. Furthermore, we show that in situations where uncertainty about the assumed treatment effect exists, the blinded estimator of the control event rate is biased in contrast to the unblinded estimator, which results in differences in mean sample sizes in favor of the unblinded procedure. However, these differences are rather small compared to the deviations of the mean sample sizes from the sample size required to detect the true, but unknown effect. We demonstrate that the variation of the sample size resulting from the blinded procedure is in many practically relevant situations considerably smaller than that of the unblinded procedure. The methods are extended to overdispersed counts using a quasi-likelihood approach and are illustrated by trials in relapsing multiple sclerosis.
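The abstract gives no formulae, so the following is only a sketch of the idea under standard assumptions: a normal-approximation sample size for comparing two Poisson rates, recomputed at the internal pilot from the blinded (pooled) event rate and the rate ratio assumed at the planning stage. The function names, planning values, and the specific approximation are ours, not the paper's:

```python
from scipy.stats import norm

def n_per_group(lam0, lam1, alpha=0.05, power=0.8):
    # Wald test on the difference of two Poisson means, equal follow-up,
    # normal approximation with Var(lam_hat) ~ lam / n per group.
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z**2 * (lam0 + lam1) / (lam0 - lam1) ** 2

def blinded_reestimate(pooled_rate, assumed_ratio, alpha=0.05, power=0.8):
    # Blinded internal pilot: only the pooled event rate is observed
    # (treatment codes stay hidden); with 1:1 allocation the control rate
    # is backed out under the rate ratio assumed in the planning phase.
    lam0 = 2 * pooled_rate / (1 + assumed_ratio)
    return n_per_group(lam0, assumed_ratio * lam0, alpha, power)

print(round(n_per_group(1.0, 0.7)))          # planning-stage sample size
print(round(blinded_reestimate(0.85, 0.7)))  # updated after the internal pilot
```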
Estimating population sizes for elusive animals: the forest elephants of Kakum National Park, Ghana.
Eggert, L S; Eggert, J A; Woodruff, D S
2003-06-01
African forest elephants are difficult to observe in the dense vegetation, and previous studies have relied upon indirect methods to estimate population sizes. Using multilocus genotyping of noninvasively collected samples, we performed a genetic survey of the forest elephant population at Kakum National Park, Ghana. We estimated population size, sex ratio and genetic variability from our data, then combined this information with field observations to divide the population into age groups. Our population size estimate was very close to that obtained using dung counts, the most commonly used indirect method of estimating forest elephant population sizes. As their habitat is fragmented by expanding human populations, management will be increasingly important to the persistence of forest elephant populations. The data that can be obtained from noninvasively collected samples will help managers plan for the conservation of this keystone species.
Lanata, C F; Black, R E
1991-01-01
Traditional survey methods, which are generally costly and time-consuming, usually provide information at the regional or national level only. The utilization of lot quality assurance sampling (LQAS) methodology, developed in industry for quality control, makes it possible to use small sample sizes when conducting surveys in small geographical or population-based areas (lots). This article describes the practical use of LQAS for conducting health surveys to monitor health programmes in developing countries. Following a brief description of the method, the article explains how to build a sample frame and conduct the sampling to apply LQAS under field conditions. A detailed description of the procedure for selecting a sampling unit to monitor the health programme and a sample size is given. The sampling schemes utilizing LQAS applicable to health surveys, such as simple- and double-sampling schemes, are discussed. The interpretation of the survey results and the planning of subsequent rounds of LQAS surveys are also discussed. When describing the applicability of LQAS in health surveys in developing countries, the article considers current limitations for its use by health planners in charge of health programmes, and suggests ways to overcome these limitations through future research. It is hoped that with increasing attention being given to industrial sampling plans in general, and LQAS in particular, their utilization to monitor health programmes will provide health planners in developing countries with powerful techniques to help them achieve their health programme targets.
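A sketch of the single-sampling LQAS logic described here, under the usual binomial approximation for large lots; the thresholds and (n, d) pairs below are illustrative, not from the article:

```python
from scipy.stats import binom

def lqas_risks(n, d, p_high, p_low):
    """Risks of a simple LQAS plan: sample n individuals and 'accept' the
    lot (e.g. judge programme coverage adequate) if at most d are failures.

    alpha: chance of rejecting a lot that is truly acceptable (p_low)
    beta:  chance of accepting a lot that is truly unacceptable (p_high)
    """
    alpha = 1 - binom.cdf(d, n, p_low)
    beta = binom.cdf(d, n, p_high)
    return alpha, beta

# e.g. classify areas as having >50% unvaccinated versus <20% unvaccinated
for n, d in [(19, 6), (28, 9)]:
    a, b = lqas_risks(n, d, p_high=0.5, p_low=0.2)
    print(f"n={n}, d={d}: alpha={a:.3f}, beta={b:.3f}")
```

The (n, d) pair is chosen so that both error rates fall below agreed limits; plans with n = 19 are common in the LQAS health-survey literature.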
2012-01-01
Background Estimation of vaccination coverage at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. Methods We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local vaccination coverage (VC), using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. Results VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard errors of VC and ICC estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Conclusions Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes. PMID:23057445
Treatment Trials for Neonatal Seizures: The Effect of Design on Sample Size
Stevenson, Nathan J.; Boylan, Geraldine B.; Hellström-Westas, Lena; Vanhatalo, Sampsa
2016-01-01
Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs) to reduce seizures in the neonate. Current AEDs exhibit sub-optimal efficacy and several randomized controlled trials (RCTs) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of an RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td) and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size with median differences of 30.7 fold (IQR: 13.7–40.0) across a range of AED protocols, Td and trial AED efficacy (p<0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9–11.9; p<0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1 fold (IQR: 1.7–2.9; p<0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4–3.0) compared to trials in normothermic neonates (p<0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, appropriate outcome measure, and control for differences in Td between groups in analysis will be valid and minimise sample size. PMID:27824913
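For context, a conventional closed-form power calculation for a two-arm trial with a binary outcome (e.g., the proportion of neonates achieving seizure cessation); this is a generic sketch, not the simulation-based method of the study, and the proportions are placeholders:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical responder proportions: positive control vs trial AED.
p_control, p_trial = 0.50, 0.70
es = proportion_effectsize(p_trial, p_control)  # arcsine-transformed effect
n_per_arm = NormalIndPower().solve_power(effect_size=es, alpha=0.05,
                                         power=0.8, alternative='two-sided')
print(round(n_per_arm))  # required neonates per arm under these assumptions
```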
Pataky, Todd C; Robinson, Mark A; Vanrenterghem, Jos
2018-01-03
Statistical power assessment is an important component of hypothesis-driven research but until relatively recently (mid-1990s) no methods were available for assessing power in experiments involving continuum data and in particular those involving one-dimensional (1D) time series. The purpose of this study was to describe how continuum-level power analyses can be used to plan hypothesis-driven biomechanics experiments involving 1D data. In particular, we demonstrate how theory- and pilot-driven 1D effect modeling can be used for sample-size calculations for both single- and multi-subject experiments. For theory-driven power analysis we use the minimum jerk hypothesis and single-subject experiments involving straight-line, planar reaching. For pilot-driven power analysis we use a previously published knee kinematics dataset. Results show that powers on the order of 0.8 can be achieved with relatively small sample sizes, five and ten for within-subject minimum jerk analysis and between-subject knee kinematics, respectively. However, the appropriate sample size depends on a priori justifications of biomechanical meaning and effect size. The main advantage of the proposed technique is that it encourages a priori justification regarding the clinical and/or scientific meaning of particular 1D effects, thereby robustly structuring subsequent experimental inquiry. In short, it shifts focus from a search for significance to a search for non-rejectable hypotheses.
1983-05-01
occur. 4) It is also true that during a given time period, at a given base, not all of the people in the sample will actually be available for testing...taken sample sizes into consideration, we currently estimate that with few exceptions, we will have adequate samples to perform the analysis of simple ...Balanced Half Sample Replications (BHSA). His analyses of simple cases have shown that this method is substantially more efficient than the
0-6760 : improved trip generation data for Texas using workplace and special generator surveys.
DOT National Transportation Integrated Search
2014-08-01
Trip generation rates play an important role in transportation planning, which can help in making informed decisions about future transportation investment and design. However, sometimes the rates are derived from small sample sizes or may ...
Overview of the Mars Sample Return Earth Entry Vehicle
NASA Technical Reports Server (NTRS)
Dillman, Robert; Corliss, James
2008-01-01
NASA's Mars Sample Return (MSR) project will bring Mars surface and atmosphere samples back to Earth for detailed examination. Langley Research Center's MSR Earth Entry Vehicle (EEV) is a core part of the mission, protecting the sample container during atmospheric entry, descent, and landing. Planetary protection requirements demand a higher reliability from the EEV than for any previous planetary entry vehicle. An overview of the EEV design and preliminary analysis is presented, with a follow-on discussion of recommended future design trade studies to be performed over the next several years in support of an MSR launch in 2018 or 2020. Planned topics include vehicle size for impact protection of a range of sample container sizes, outer mold line changes to achieve surface sterilization during re-entry, micrometeoroid protection, aerodynamic stability, thermal protection, and structural materials selection.
Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.
Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T
2015-03-01
It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine optimal sample size, optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review aims to discuss the relative usefulness of sparse vs rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plan to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients.
Capturing heterogeneity: The role of a study area's extent for estimating mean throughfall
NASA Astrophysics Data System (ADS)
Zimmermann, Alexander; Voss, Sebastian; Metzger, Johanna Clara; Hildebrandt, Anke; Zimmermann, Beate
2016-11-01
The selection of an appropriate spatial extent of a sampling plot is one among several important decisions involved in planning a throughfall sampling scheme. In fact, the choice of the extent may determine whether or not a study can adequately characterize the hydrological fluxes of the studied ecosystem. Previous attempts to optimize throughfall sampling schemes focused on the selection of an appropriate sample size, support, and sampling design, while comparatively little attention has been given to the role of the extent. In this contribution, we investigated the influence of the extent on the representativeness of mean throughfall estimates for three forest ecosystems of varying stand structure. Our study is based on virtual sampling of simulated throughfall fields. We derived these fields from throughfall data sampled in a simply structured forest (young tropical forest) and two heterogeneous forests (old tropical forest, unmanaged mixed European beech forest). We then sampled the simulated throughfall fields with three common extents and various sample sizes for a range of events and for accumulated data. Our findings suggest that the size of the study area should be carefully adapted to the complexity of the system under study and to the required temporal resolution of the throughfall data (i.e. event-based versus accumulated). Generally, event-based sampling in complex structured forests (conditions that favor comparatively long autocorrelations in throughfall) requires the largest extents. For event-based sampling, the choice of an appropriate extent can be as important as using an adequate sample size.
Mapping South San Francisco Bay's seabed diversity for use in wetland restoration planning
Fregoso, Theresa A.; Jaffe, B.; Rathwell, G.; Collins, W.; Rhynas, K.; Tomlin, V.; Sullivan, S.
2006-01-01
Data for an acoustic seabed classification were collected as a part of a California Coastal Conservancy funded bathymetric survey of South Bay in early 2005. A QTC VIEW seabed classification system recorded echoes from a single beam 50 kHz echosounder. Approximately 450,000 seabed classification records were generated from an area of about 30 square miles. Ten distinct acoustic classes were identified through an unsupervised classification system using principal component and cluster analyses. One hundred and sixty-one grab samples and forty-five benthic community composition samples, collected in the study area shortly before and after the seabed classification survey, further refined the ten classes into groups based on grain size. A preliminary map of surficial grain size of South Bay was developed from the combination of the seabed classification and the grab and benthic samples. The initial seabed classification map, the grain size map, and locations of sediment samples will be displayed along with the methods of acoustic seabed classification.
The Importance and Role of Intracluster Correlations in Planning Cluster Trials
Preisser, John S.; Reboussin, Beth A.; Song, Eun-Young; Wolfson, Mark
2008-01-01
There is increasing recognition of the critical role of intracluster correlations of health behavior outcomes in cluster intervention trials. This study examines the estimation, reporting, and use of intracluster correlations in planning cluster trials. We use an estimating equations approach to estimate the intracluster correlations corresponding to the multiple-time-point nested cross-sectional design. Sample size formulae incorporating 2 types of intracluster correlations are examined for the purpose of planning future trials. The traditional intracluster correlation is the correlation among individuals within the same community at a specific time point. A second type is the correlation among individuals within the same community at different time points. For a “time × condition” analysis of a pretest–posttest nested cross-sectional trial design, we show that statistical power considerations based upon a posttest-only design generally are not an adequate substitute for sample size calculations that incorporate both types of intracluster correlations. Estimation, reporting, and use of intracluster correlations are illustrated for several dichotomous measures related to underage drinking collected as part of a large nonrandomized trial to enforce underage drinking laws in the United States from 1998 to 2004. PMID:17879427
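One textbook-style way to use the two correlation types in planning, consistent with the description above though not necessarily the paper's exact formulae: for a "time × condition" analysis of cluster means in a pretest–posttest nested cross-sectional design, the within-cluster pre–post difference has variance (2σ²/m)·[1 + (m−1)ρw − mρb], where ρw is the within-period and ρb the between-period intracluster correlation and m is the number sampled per cluster per period. A sketch:

```python
from scipy.stats import norm

def clusters_per_arm(delta, sigma, m, icc_within, icc_between,
                     alpha=0.05, power=0.8):
    """Clusters per condition for a difference-in-differences comparison of
    cluster means, m individuals sampled per cluster per time point.

    Derivation (two distinct cross-sections per cluster):
    Var(post mean - pre mean) = (2*sigma**2/m) * (1 + (m-1)*icc_within
                                                    - m*icc_between)
    """
    de = 1 + (m - 1) * icc_within - m * icc_between
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_cluster_diff = 2 * sigma**2 / m * de
    return 2 * z**2 * var_cluster_diff / delta**2

# Illustrative values: detect a 5-point change in a prevalence-like outcome
print(round(clusters_per_arm(delta=0.05, sigma=0.4, m=100,
                             icc_within=0.01, icc_between=0.005)))
```

Note how a nonzero between-period correlation reduces the design effect relative to a posttest-only calculation, which is the paper's central point.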
NASA Technical Reports Server (NTRS)
Baird, A. K.; Castro, A. J.; Clark, B. C.; Toulmin, P., III; Rose, H., Jr.; Keil, K.; Gooding, J. L.
1977-01-01
Ten samples of Mars regolith material (six on Viking Lander 1 and four on Viking Lander 2) have been delivered to the X ray fluorescence spectrometers as of March 31, 1977. At least six additional samples are planned for acquisition by each lander during the remaining Extended Mission (to January 1979). All samples acquired are Martian fines from the near surface (less than 6-cm depth) of the landing sites except the latest on Viking Lander 1, which is fine material from the bottom of a trench dug to a depth of 25 cm. Several attempts on each lander to acquire fresh rock material (in pebble sizes) for analysis have yielded only cemented surface crustal material (duricrust). Laboratory simulation and experimentation are required both for mission planning of sampling and for interpretation of data returned from Mars. This paper is concerned with the rationale for sample site selections, surface sampler operations, and the supportive laboratory studies needed to interpret X ray results from Mars.
Firefighter Hand Anthropometry and Structural Glove Sizing: A New Perspective.
Hsiao, Hongwei; Whitestone, Jennifer; Kau, Tsui-Ying; Hildreth, Brooke
2015-12-01
We evaluated the current use and fit of structural firefighting gloves and developed an improved sizing scheme that better accommodates the U.S. firefighter population. Among surveys, 24% to 30% of men and 31% to 62% of women reported experiencing problems with the fit or bulkiness of their structural firefighting gloves. An age-, race/ethnicity-, and gender-stratified sample of 863 male and 88 female firefighters across the United States participated in the study. Fourteen hand dimensions relevant to glove design were measured. A cluster analysis of the hand dimensions was performed to explore options for an improved sizing scheme. The current national standard structural firefighting glove-sizing scheme underrepresents firefighter hand size range and shape variation. In addition, mismatch between existing sizing specifications and hand characteristics, such as hand dimensions, user selection of glove size, and the existing glove sizing specifications, is significant. An improved glove-sizing plan based on clusters of overall hand size and hand/finger breadth-to-length contrast has been developed. This study presents the most up-to-date firefighter hand anthropometry and a new perspective on glove accommodation. The new seven-size system contains narrower variations (standard deviations) for almost all dimensions for each glove size than the current sizing practices. The proposed science-based sizing plan for structural firefighting gloves provides a step-forward perspective (i.e., including two women hand model-based sizes and two wide-palm sizes for men) for glove manufacturers to advance firefighter hand protection.
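A sketch of the kind of cluster analysis described, on synthetic stand-in data; the real study clustered 14 measured hand dimensions from 951 firefighters, and the dimension means below are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for 14 hand dimensions (mm) across 951 firefighters
hands = rng.normal(loc=[185, 90, 78, 72, 200, 110, 85, 60, 55, 50, 45, 95, 70, 25],
                   scale=10, size=(951, 14))

# Standardize so no single dimension dominates, then derive seven sizes
X = StandardScaler().fit_transform(hands)
sizes = KMeans(n_clusters=7, n_init=10, random_state=0).fit(X)
print(np.bincount(sizes.labels_))  # how the population spreads across sizes
```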
Tran, Anh K; Koch, Robert L
2017-06-01
The soybean aphid, Aphis glycines Matsumura, is an economically important soybean pest. Many studies have demonstrated that predatory insects are important in suppressing A. glycines population growth. However, to improve the utilization of predators in A. glycines management, sampling plans need to be developed and validated for predators. Aphid predators were sampled in soybean fields near Rosemount, Minnesota, from 2006-2007 and 2013-2015 with sample sizes of 20-80 plants. Sampling plans were developed for Orius insidiosus (Say), Harmonia axyridis (Pallas), and all aphidophagous Coccinellidae species combined. Taylor's power law parameters from the regression of log variance versus log mean suggested aggregated spatial patterns for immature and adult stages combined for O. insidiosus, H. axyridis, and Coccinellidae in soybean fields. Using the parameters from Taylor's power law and Green's method, sequential fixed-precision sampling plans were developed to estimate the density for each predator taxon at desired precision levels of 0.10 and 0.25. To achieve a desired precision of 0.10 and 0.25, the average sample number (ASN) ranged from 398-713 and 64-108 soybean plants, respectively, for all species. Resulting ASNs were relatively large and assumed impractical for most purposes; therefore, the desired precision levels were adjusted to determine the level of precision associated with a more practical ASN. Final analysis indicated an ASN of 38 soybean plants provided precision of 0.32-0.40 for the predators. Development of sampling plans should provide guidance for improved estimation of predator densities for A. glycines pest management programs and for research purposes.
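Green's fixed-precision stop line referred to above has a standard closed form: sampling stops once the cumulative count T_n after n sample units satisfies T_n >= (D^2 * n^(b-1) / a)^(1/(b-2)), with (a, b) the Taylor's power law parameters and D the desired precision (SE/mean). A sketch with placeholder parameters (the paper's fitted values are not reproduced here):

```python
def greens_stop_line(n, a, b, D):
    """Green's fixed-precision stop line: cumulative count at which
    sampling can stop after n sample units, given Taylor's power law
    parameters (a, b) and desired precision D (SE/mean)."""
    return (D**2 * n**(b - 1) / a) ** (1 / (b - 2))

# Illustrative Taylor parameters; a and b come from regressing
# log variance on log mean over many fields
a, b, D = 2.5, 1.4, 0.25
for n in (10, 20, 40, 80):
    print(n, round(greens_stop_line(n, a, b, D), 1))
```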
Frison, Severine; Kerac, Marko; Checchi, Francesco; Nicholas, Jennifer
2017-01-01
The assessment of the prevalence of acute malnutrition in children under five is widely used for the detection of emergencies, planning interventions, advocacy, and monitoring and evaluation. This study examined PROBIT Methods which convert parameters (mean and standard deviation (SD)) of a normally distributed variable to a cumulative probability below any cut-off to estimate acute malnutrition in children under five using Middle-Upper Arm Circumference (MUAC). We assessed the performance of: PROBIT Method I, with mean MUAC from the survey sample and MUAC SD from a database of previous surveys; and PROBIT Method II, with mean and SD of MUAC observed in the survey sample. Specifically, we generated sub-samples from 852 survey datasets, simulating 100 surveys for eight sample sizes. Overall the methods were tested on 681 600 simulated surveys. PROBIT methods relying on sample sizes as small as 50 had better performance than the classic method for estimating and classifying the prevalence of acute malnutrition. They had better precision in the estimation of acute malnutrition for all sample sizes and better coverage for smaller sample sizes, while having relatively little bias. They classified situations accurately for a threshold of 5% acute malnutrition. Both PROBIT methods had similar outcomes. PROBIT Methods have a clear advantage in the assessment of acute malnutrition prevalence based on MUAC, compared to the classic method. Their use would require much lower sample sizes, thus enable great time and resource savings and permit timely and/or locally relevant prevalence estimates of acute malnutrition for a swift and well-targeted response.
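The PROBIT calculation itself is compact: with MUAC assumed normally distributed, the prevalence below a cutoff is the normal CDF evaluated at the standardized cutoff. A sketch (the 125 mm cutoff is the conventional MUAC threshold for global acute malnutrition; the mean and SD are placeholders):

```python
from scipy.stats import norm

def probit_prevalence(mean_muac, sd_muac, cutoff=125.0):
    """PROBIT estimate: share of children below a MUAC cutoff (mm),
    assuming MUAC is normally distributed in the population."""
    return norm.cdf((cutoff - mean_muac) / sd_muac)

# Method II uses the sample's own mean and SD; Method I would plug in
# an SD taken from a database of previous surveys instead.
print(f"{probit_prevalence(mean_muac=148.0, sd_muac=13.5):.3%}")
```

Because the whole sample informs the mean and SD, this uses the data more efficiently than directly counting the (few) children below the cutoff, which is why smaller sample sizes suffice.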
Steigen, Terje K; Claudio, Cheryl; Abbott, David; Schulzer, Michael; Burton, Jeff; Tymchak, Wayne; Buller, Christopher E; John Mancini, G B
2008-06-01
To assess reproducibility of core laboratory performance and impact on sample size calculations. Little information exists about overall reproducibility of core laboratories in contradistinction to performance of individual technicians. Also, qualitative parameters are being adjudicated increasingly as either primary or secondary end-points. The comparative impact of using diverse indexes on sample sizes has not been previously reported. We compared initial and repeat assessments of five quantitative parameters [e.g., minimum lumen diameter (MLD), ejection fraction (EF), etc.] and six qualitative parameters [e.g., TIMI myocardial perfusion grade (TMPG) or thrombus grade (TTG), etc.], as performed by differing technicians and separated by a year or more. Sample sizes were calculated from these results. TMPG and TTG were also adjudicated by a second core laboratory. MLD and EF were the most reproducible, yielding the smallest sample size calculations, whereas percent diameter stenosis and centerline wall motion require substantially larger trials. Of the qualitative parameters, all except TIMI flow grade gave reproducibility characteristics yielding sample sizes of many hundreds of patients. Reproducibility of TMPG and TTG was only moderately good both within and between core laboratories, underscoring an intrinsic difficulty in assessing these. Core laboratories can be shown to provide reproducibility performance that is comparable to performance commonly ascribed to individual technicians. The differences in reproducibility yield huge differences in sample size when comparing quantitative and qualitative parameters. TMPG and TTG are intrinsically difficult to assess and conclusions based on these parameters should arise only from very large trials.
Kafeshani, Farzaneh Alizadeh; Rajabpour, Ali; Aghajanzadeh, Sirous; Gholamian, Esmaeil; Farkhari, Mohammad
2018-04-02
Aphis spiraecola Patch, Aphis gossypii Glover, and Toxoptera aurantii Boyer de Fonscolombe are three important aphid pests of citrus orchards. In this study, spatial distributions of the aphids on two orange species, Satsuma mandarin and Thomson navel, were evaluated using Taylor's power law and Iwao's patchiness. In addition, a fixed-precision sequential sampling plan was developed for each species on the host plant by Green's model at precision levels of 0.25 and 0.1. The results revealed that spatial distribution parameters and therefore the sampling plan were significantly different according to aphid and host plant species. Taylor's power law provides a better fit for the data than Iwao's patchiness regression. Except for T. aurantii on Thomson navel orange, the spatial distribution patterns of the aphids were aggregated on both citrus species. T. aurantii had a regular dispersion pattern on Thomson navel orange. Optimum sample size of the aphids varied from 30-2061 and 1-1622 shoots on Satsuma mandarin and Thomson navel orange based on aphid species and desired precision level. Calculated stop lines of the aphid species on Satsuma mandarin and Thomson navel orange ranged from 0.48 to 19 and 0.19 to 80.4 aphids per 24 shoots according to aphid species and desired precision level. The performance of the sampling plan was validated by resampling analysis using resampling for validation of sampling plans (RVSP) software. This sampling program is useful for IPM programs targeting these aphids in citrus orchards.
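A sketch of RVSP-style validation by resampling: draw bootstrap samples from a field of observed counts, run each through Green's stop rule, and summarize the resulting stop sample sizes. The counts and Taylor parameters below are synthetic placeholders, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

def greens_stop_n(counts, a, b, D, n_min=5):
    # Walk through a re-ordered sequence of per-shoot counts until the
    # cumulative total crosses Green's fixed-precision stop line.
    total = 0.0
    for n, c in enumerate(counts, start=1):
        total += c
        if n >= n_min and total >= (D**2 * n**(b - 1) / a) ** (1 / (b - 2)):
            return n
    return len(counts)

# Hypothetical aggregated per-shoot aphid counts standing in for field data
field = rng.negative_binomial(n=0.8, p=0.2, size=500)
a, b, D = 2.5, 1.4, 0.25
stops = [greens_stop_n(rng.choice(field, size=500, replace=True), a, b, D)
         for _ in range(1000)]
print("mean stop sample size:", np.mean(stops))
```

A fuller validation would also record the realized precision of the density estimate at each stop and check it against the nominal D.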
Visual Sample Plan Version 7.0 User's Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzke, Brett D.; Newburn, Lisa LN; Hathaway, John E.
2014-03-01
This user's guide describes Visual Sample Plan (VSP) Version 7.0 and provides instructions for using the software. VSP selects the appropriate number and location of environmental samples to ensure that the results of statistical tests performed to provide input to risk decisions have the required confidence and performance. VSP Version 7.0 provides sample-size equations or algorithms needed by specific statistical tests appropriate for specific environmental sampling objectives. It also provides data quality assessment and statistical analysis functions to support evaluation of the data and determine whether the data support decisions regarding sites suspected of contamination. The easy-to-use program is highly visual and graphic. VSP runs on personal computers with Microsoft Windows operating systems (XP, Vista, Windows 7, and Windows 8). Designed primarily for project managers and users without expertise in statistics, VSP is applicable to two- and three-dimensional populations to be sampled (e.g., rooms and buildings, surface soil, a defined layer of subsurface soil, water bodies, and other similar applications) for studies of environmental quality. VSP is also applicable for designing sampling plans for assessing chem/rad/bio threat and hazard identification within rooms and buildings, and for designing geophysical surveys for unexploded ordnance (UXO) identification.
Sepúlveda, Nuno; Drakeley, Chris
2015-04-03
In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control ensuing statistical inference over parasite rates and not on these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to possible problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method highlighted the prior expectation that, when SRR is not known, sample sizes are increased in relation to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental to design future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out on varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise to be applied to the planning of alternative sampling schemes that may target or oversample specific age groups.
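A sketch of the first calculator's core step under a single-age simplification (the actual calculators work with the survey's full age distribution): the reverse catalytic model links seroprevalence at age a to SCR (λ) and SRR (ρ) by SP(a) = λ/(λ+ρ) · (1 − exp(−(λ+ρ)a)), which is monotone in λ and so can be inverted numerically at each end of a confidence interval for SP:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def sp_from_scr(scr, srr, age):
    """Reverse catalytic model: equilibrium seroprevalence at a given age."""
    k = scr + srr
    return scr / k * (1 - np.exp(-k * age))

def scr_ci_from_sp(p_hat, n, srr, age, level=0.95):
    # Wald CI for the seroprevalence proportion, mapped through the
    # (monotone in SCR) reverse catalytic model by root finding.
    z = norm.ppf(0.5 + level / 2)
    half = z * np.sqrt(p_hat * (1 - p_hat) / n)
    invert = lambda p: brentq(lambda s: sp_from_scr(s, srr, age) - p, 1e-8, 10.0)
    return tuple(invert(p) for p in (p_hat - half, p_hat + half))

# Placeholder survey: 30% seropositive among n=300 at mean age 10, known SRR
print(scr_ci_from_sp(p_hat=0.30, n=300, srr=0.01, age=10.0))
```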
INDUSTRIAL RADIOGRAPHY COURSE, INSTRUCTORS' GUIDE. VOLUME 2.
ERIC Educational Resources Information Center
Texas A and M Univ., College Station. Engineering Extension Service.
INFORMATION RELATIVE TO THE LESSON PLANS IN "INDUSTRIAL RADIOGRAPHY COURSE, INSTRUCTOR'S GUIDE, VOLUME I" (VT 003 565) IS PRESENTED ON 52 INFORMATION SHEETS INCLUDING THE SUBJECTS SHIELDING EQUATIONS AND LOGARITHMS, METAL PROPERTIES, FIELD TRIP INSTRUCTIONS FOR STUDENTS, WELDING SYMBOLS AND SIZES, SAMPLE REPORT FORMS, AND TYPICAL SHIPPING…
Thirst distress and interdialytic weight gain: how do they relate?
Jacob, Sheena; Locking-Cusolito, Heather
2004-01-01
Thirst is a frequent and stressful symptom experienced by hemodialysis patients. Several studies have noted a positive relationship between thirst and interdialytic weight gain (IDWG). These factors prompted us to consider ways that we could intervene to reduce thirst and IDWG through an educative, supportive nursing intervention. This paper presents the results of a pilot research project, the purpose of which was to: examine the relationship between thirst distress (the negative symptoms associated with thirst) and IDWG in a sample of our patients, describe patients' strategies for management of thirst, and establish the necessary sample size for the planned intervention study. The pilot research project results showed that in a small sample of 20, there was a mildly positive, though not statistically significant, correlation between thirst distress and IDWG (r = 0.117). Subjects shared a wide variety of thirst management strategies including: limiting salt intake, using ice chips, measuring daily allotment, performing mouth care, eating raw fruits and vegetables, sucking on hard candy and chewing gum. This pilot research project showed that given an alpha of 0.05 and a power of 80%, we will require a sample of 39 subjects to detect a 20% change in IDWG. We will employ these results to plan our intervention study, first by establishing the appropriate sample size and second by incorporating identified patient strategies into an educational pamphlet that will form the basis of our intervention.
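The stated calculation corresponds to a standard one-sample (paired-change) power analysis. A sketch with placeholder pilot values, since the abstract does not report the mean and SD of IDWG:

```python
from statsmodels.stats.power import TTestPower

# Suppose pilot data gave mean IDWG 2.5 kg with SD 1.1 kg (placeholders),
# and the intervention study aims to detect a 20% reduction (0.5 kg).
effect_size = 0.5 / 1.1  # Cohen's d for a one-sample/paired comparison
n = TTestPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
print(round(n))  # subjects needed; close to the 39 reported above
```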
Spatial sampling considerations of the CERES (Clouds and Earth Radiant Energy System) instrument
NASA Astrophysics Data System (ADS)
Smith, G. L.; Manalo-Smith, Natividdad; Priestley, Kory
2014-10-01
The CERES (Clouds and Earth Radiant Energy System) instrument is a scanning radiometer with three channels for measuring the Earth radiation budget. At present, CERES models are operating aboard the Terra, Aqua, and Suomi/NPP spacecraft, and flights of CERES instruments are planned for the JPSS-1 spacecraft and its successors. CERES scans from one limb of the Earth to the other and back. The footprint size grows with distance from nadir simply due to geometry, so the size of the smallest features that can be resolved from the data increases, and spatial sampling errors increase, with nadir angle. This paper presents an analysis of the effect of nadir angle on spatial sampling errors of the CERES instrument. The analysis is performed in the Fourier domain. Spatial sampling errors are created by smoothing (blurring) of features the size of the footprint and smaller, and by inadequate sampling, which causes aliasing errors. These spatial sampling errors are computed in terms of the system transfer function, which is the Fourier transform of the point response function, the spacing of data points, and the spatial spectrum of the radiance field.
Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, İrem Ersöz
2013-01-01
Objective: The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to examine whether the method is affected by the number of independent variables, the relationships between variables, and the sample size. Study Design: Simulation study. Material and Methods: The SIMCA model is performed in two stages. Simulations were done to determine whether the method is influenced by the number of independent variables, the relationships between variables, and the sample size. Conditions were examined in which sample sizes in both groups are equal, with 30, 100, and 1000 samples; in which the number of variables is 2, 3, 5, 10, 50, or 100; and in which the relationships between variables are quite high, medium, or quite low. Results: Average classification accuracies of the simulations, which were carried out 1000 times for each condition of the trial plan, are given as tables. Conclusion: Diagnostic accuracy increases as the number of independent variables increases. SIMCA is a method suited to data in which the relationships between variables are quite high, the number of independent variables is large, and outlier values are present. PMID:25207065
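A toy version of the two-stage idea: fit one PCA model per class, then assign a sample to the class whose principal-component subspace reconstructs it best. This is a simplification of SIMCA (which additionally applies statistical tests to scaled residuals), shown on synthetic data:

```python
import numpy as np
from sklearn.decomposition import PCA

class SimpleSIMCA:
    """Toy SIMCA: one PCA model per class; a sample is assigned to the
    class whose PCA subspace reconstructs it with the smallest residual."""
    def __init__(self, n_components=2):
        self.n_components = n_components
        self.models = {}

    def fit(self, X, y):
        for label in np.unique(y):
            self.models[label] = PCA(self.n_components).fit(X[y == label])
        return self

    def predict(self, X):
        # Reconstruction residual of every sample under every class model
        residuals = {label: np.linalg.norm(
                         X - pca.inverse_transform(pca.transform(X)), axis=1)
                     for label, pca in self.models.items()}
        labels = list(residuals)
        return np.array(labels)[np.argmin([residuals[l] for l in labels], axis=0)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 10)), rng.normal(2, 1, (100, 10))])
y = np.repeat([0, 1], 100)
print((SimpleSIMCA(2).fit(X, y).predict(X) == y).mean())
```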
Lot quality assurance sampling for screening communities hyperendemic for Schistosoma mansoni.
Rabarijaona, L P; Boisier, P; Ravaoalimalala, V E; Jeanne, I; Roux, J F; Jutand, M A; Salamon, R
2003-04-01
Lot quality assurance sampling (LQAS) was evaluated for rapid, low-cost identification of communities where Schistosoma mansoni infection was hyperendemic in southern Madagascar. In the study area, S. mansoni infection shows a very focal and heterogeneous distribution, requiring numerous local surveys. One sampling plan was tested in the field with schoolchildren and several others were simulated in the laboratory. Randomization and stool specimen collection were performed by voluntary teachers under direct supervision of the study staff, and no significant problem occurred. As expected from Receiver Operating Characteristic (ROC) curves, all sampling plans allowed correct identification of hyperendemic communities and of most of the hypoendemic ones. Frequent misclassifications occurred for communities with intermediate prevalence, and the cheapest plans had very low specificity. The study confirmed that LQAS would be a valuable tool for large-scale screening in a country with scarce financial and staff resources. Involving teachers appeared to be quite feasible and should not lower the reliability of surveys. We recommend that the national schistosomiasis control programme systematically use LQAS for identification of communities, provided that sample sizes are adapted to the specific epidemiological patterns of S. mansoni infection in the main regions.
Layton, Timothy J; Ryan, Andrew M
2015-12-01
To evaluate the effects of the size of financial bonuses on quality of care and the number of plan offerings in the Medicare Advantage Quality Bonus Payment Demonstration. Publicly available data from CMS from 2009 to 2014 on Medicare Advantage plan quality ratings, the counties in the service area of each plan, and the benchmarks used to construct plan payments. The Medicare Advantage Quality Bonus Payment Demonstration began in 2012. Under the Demonstration, all Medicare Advantage plans were eligible to receive bonus payments based on plan-level quality scores (star ratings). In some counties, plans were eligible to receive bonus payments that were twice as large as in other counties. We used this variation in incentives to evaluate the effects of bonus size on star ratings and the number of plan offerings in the Demonstration using a differences-in-differences identification strategy. We used matching to create a comparison group of counties that did not receive double bonuses but had similar levels of the preintervention outcomes. Results from the difference-in-differences analysis suggest that the receipt of double bonuses was not associated with an increase in star ratings. In the matched sample, the receipt of double bonuses was associated with a statistically insignificant increase of +0.034 (approximately 1 percent) in the average star rating (p > .10, 95 percent CI: -0.015, 0.083). In contrast, the receipt of double bonuses was associated with an increase in the number of plans offered. In the matched sample, the receipt of double bonuses was associated with an overall increase of +0.814 plans (approximately 5.8 percent) (p < .05, 95 percent CI: 0.078, 1.549). We estimate that the double bonuses increased payments by $3.43 billion over the first 3 years of the Demonstration. At great expense to Medicare, double bonuses in the Medicare Advantage Quality Bonus Payment Demonstration were not associated with improved quality but were associated with more plan offerings.
Interaction of attentional and motor control processes in handwriting.
Brown, T L; Donnenwirth, E E
1990-01-01
The interaction between attentional capacity, motor control processes, and strategic adaptations to changing task demands was investigated in handwriting, a continuous (rather than discrete) skilled performance. Twenty-four subjects completed 12 two-minute handwriting samples under instructions stressing speeded handwriting, normal handwriting, or highly legible handwriting. For half of the writing samples, a concurrent auditory monitoring task was imposed. Subjects copied either familiar (English) or unfamiliar (Latin) passages. Writing speed, legibility ratings, errors in writing and in the secondary auditory task, and a derived measure of the average number of characters held in short-term memory during each sample ("planning unit size") were the dependent variables. The results indicated that the ability to adapt to instructions stressing speed or legibility was substantially constrained by the concurrent listening task and by text familiarity. Interactions between instructions, task concurrence, and text familiarity in the legibility ratings, combined with further analyses of planning unit size, indicated that information throughput from temporary storage mechanisms to motor processes mediated the loss of flexibility effect. Overall, the results suggest that strategic adaptations of a skilled performance to changing task circumstances are sensitive to concurrent attentional demands and that departures from "normal" or "modal" performance require attention.
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process
Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.
2013-01-01
Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531
ERIC Educational Resources Information Center
Spybrook, Jessaca; Lininger, Monica; Cullen, Anne
2011-01-01
The purpose of this study is to extend the work of Spybrook and Raudenbush (2009) and examine how the research designs and sample sizes changed from the planning phase to the implementation phase in the first wave of studies funded by IES. The authors examine the impact of the changes in terms of the changes in the precision of the study from the…
Counting your chickens before they're hatched: power analysis.
Jupiter, Daniel C
2014-01-01
How does an investigator know that he has enough subjects in his study design to have the predicted outcomes appear statistically significant? In this Investigators' Corner I discuss why such planning is necessary, give an intuitive introduction to the calculations needed to determine required sample sizes, and hint at some of the more technical difficulties inherent in this aspect of study planning.
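A minimal example of the calculation the column introduces: the sample size per group needed for a two-sample t test, via a standard power routine (the inputs are the conventional textbook defaults, not the author's):

```python
from statsmodels.stats.power import TTestIndPower

# How many subjects per group to detect a "medium" effect (Cohen's d = 0.5)
# at the conventional alpha = 0.05 with 80% power?
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_group))  # about 64 per group
```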
Intraclass Correlation Values for Planning Group-Randomized Trials in Education
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, E. C.
2007-01-01
Experiments that assign intact groups to treatment conditions are increasingly common in social research. In educational research, the groups assigned are often schools. The design of group-randomized experiments requires knowledge of the intraclass correlation structure to compute statistical power and sample sizes required to achieve adequate…
Optimal Design for Two-Level Random Assignment and Regression Discontinuity Studies
ERIC Educational Resources Information Center
Rhoads, Christopher H.; Dye, Charles
2016-01-01
An important concern when planning research studies is to obtain maximum precision of an estimate of a treatment effect given a budget constraint. When research designs have a "multilevel" or "hierarchical" structure changes in sample size at different levels of the design will impact precision differently. Furthermore, there…
Generalizing the Network Scale-Up Method: A New Estimator for the Size of Hidden Populations
Feehan, Dennis M.; Salganik, Matthew J.
2018-01-01
The network scale-up method enables researchers to estimate the size of hidden populations, such as drug injectors and sex workers, using sampled social network data. The basic scale-up estimator offers advantages over other size estimation techniques, but it depends on problematic modeling assumptions. We propose a new generalized scale-up estimator that can be used in settings with non-random social mixing and imperfect awareness about membership in the hidden population. Further, the new estimator can be used when data are collected via complex sample designs and from incomplete sampling frames. However, the generalized scale-up estimator also requires data from two samples: one from the frame population and one from the hidden population. In some situations these data from the hidden population can be collected by adding a small number of questions to already planned studies. For other situations, we develop interpretable adjustment factors that can be applied to the basic scale-up estimator. We conclude with practical recommendations for the design and analysis of future studies. PMID:29375167
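The basic scale-up estimator that the generalized version builds on has a simple form: the hidden population size is estimated as N_H = (Σ y_i / Σ d_i) · N, where y_i is the number of hidden-population members respondent i reports knowing, d_i is the respondent's personal network size, and N is the total population. A sketch on simulated data (the paper's adjustment factors for non-random mixing and imperfect awareness are beyond this sketch):

```python
import numpy as np

def basic_scale_up(y, d, N):
    """Basic network scale-up estimate of hidden-population size.

    y: hidden-population members each respondent reports knowing
    d: each respondent's estimated personal network size (degree)
    N: size of the total (frame) population
    """
    return y.sum() / d.sum() * N

rng = np.random.default_rng(0)
d = rng.poisson(lam=300, size=500)          # degrees of 500 respondents
y = rng.binomial(n=d, p=0.002)              # hidden contacts among them
print(round(basic_scale_up(y, d, N=1_000_000)))  # truth here is ~2000
```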
Evaluation of sampling plans to detect Cry9C protein in corn flour and meal.
Whitaker, Thomas B; Trucksess, Mary W; Giesbrecht, Francis G; Slate, Andrew B; Thomas, Francis S
2004-01-01
StarLink is a genetically modified corn that produces an insecticidal protein, Cry9C. Studies were conducted to determine the variability and Cry9C distribution among sample test results when Cry9C protein was estimated in a bulk lot of corn flour and meal. Emphasis was placed on measuring sampling and analytical variances associated with each step of the test procedure used to measure Cry9C in corn flour and meal. Two commercially available enzyme-linked immunosorbent assay kits were used: one for the determination of Cry9C protein concentration and the other for % StarLink seed. The sampling and analytical variances associated with each step of the Cry9C test procedures were determined for flour and meal. Variances were found to be functions of Cry9C concentration, and regression equations were developed to describe the relationships. Because of the larger particle size, sampling variability associated with cornmeal was about double that for corn flour. For cornmeal, the sampling variance accounted for 92.6% of the total testing variability. The observed sampling and analytical distributions were compared with the Normal distribution. In almost all comparisons, the null hypothesis that the Cry9C protein values were sampled from a Normal distribution could not be rejected at 95% confidence limits. The Normal distribution and the variance estimates were used to evaluate the performance of several Cry9C protein sampling plans for corn flour and meal. Operating characteristic curves were developed and used to demonstrate the effect of increasing sample size on reducing false positives (seller's risk) and false negatives (buyer's risk).
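A sketch of how such operating characteristic curves are computed in the simplest kernel-count formulation: a lot is accepted if at most c of n sampled kernels test Cry9C-positive, and the OC curve is the binomial acceptance probability as a function of the true StarLink proportion. (The study itself worked with measured concentrations and estimated sampling/analytical variances; this simplification is ours.)

```python
import numpy as np
from scipy.stats import binom

def oc_curve(n, c, p_grid):
    """P(lot accepted) versus true proportion of StarLink kernels, when a
    lot is accepted if at most c of n sampled kernels test positive."""
    return binom.cdf(c, n, p_grid)

p = np.linspace(0, 0.02, 9)  # candidate true StarLink proportions
for n in (400, 800, 1600):
    print(n, np.round(oc_curve(n, c=0, p_grid=p), 3))
```

Increasing n steepens the curve, trading a lower buyer's risk against a higher seller's risk at any fixed acceptance number.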
Family planning use among urban poor women from six cities of Uttar Pradesh, India.
Speizer, Ilene S; Nanda, Priya; Achyut, Pranita; Pillai, Gita; Guilkey, David K
2012-08-01
Family planning has widespread positive impacts on population health and well-being; contraceptive use not only decreases unintended pregnancies and reduces infant and maternal mortality and morbidity but is also critical to the achievement of the Millennium Development Goals. This study uses baseline, representative data from six cities in Uttar Pradesh, India to examine family planning use among the urban poor. Data were collected from about 3,000 currently married women in each city (Allahabad, Agra, Varanasi, Aligarh, Gorakhpur, and Moradabad) for a total sample size of 17,643 women. Participating women were asked about their fertility desires, family planning use, and reproductive health. The survey over-sampled slum residents; this permits in-depth analyses of the urban poor and their family planning use behaviors. Bivariate and multivariate analyses are used to examine the role of wealth and education on family planning use and unmet need for family planning. Across all of the cities, about 50% of women report modern method use. Women in slum areas generally report less family planning use, and among users, slum women are more likely to be sterilized than to use other methods, including condoms and hormonal methods. Across all cities, there is a higher unmet need for family planning to limit childbearing than to space births. Poorer women are more likely to have an unmet need than richer women in both the slum and non-slum samples; this effect is attenuated when education is included in the analysis. Programs seeking to target the urban poor in Uttar Pradesh and elsewhere in India may be better served by identifying less educated women and targeting them with appropriate family planning messages and methods that meet their current and future fertility desires.
DOE Office of Scientific and Technical Information (OSTI.GOV)
E.N. Stepanov; I.I. Mel'nikov; V.P. Gridasov
In active production at OAO Magnitogorskii Metallurgicheskii Kombinat (MMK), samples of melt materials were taken during shutdown and during planned repairs at furnaces 1 and 8. In particular, coke was taken from the tuyere zone at different distances from the tuyere tip. The mass of the point samples was 2-15 kg, depending on the sampling zone. The material extracted from each zone underwent magnetic separation and screening by size class. The resulting coke sample was averaged out and divided into parts: one for determining the granulometric composition and mechanical strength; and the other for technical analysis and determination of the physicochemical properties of the coke.
Costs of Food Safety Investments in the Meat and Poultry Slaughter Industries.
Viator, Catherine L; Muth, Mary K; Brophy, Jenna E; Noyes, Gary
2017-02-01
To develop regulations efficiently, federal agencies need to know the costs of implementing various regulatory alternatives. As the regulatory agency responsible for the safety of meat and poultry products, the U.S. Dept. of Agriculture's Food Safety and Inspection Service is interested in the costs borne by meat and poultry establishments. This study estimated the costs of developing, validating, and reassessing hazard analysis and critical control points (HACCP), sanitary standard operating procedures (SSOP), and sampling plans; food safety training for new employees; antimicrobial equipment and solutions; sanitizing equipment; third-party audits; and microbial tests. Using results from an in-person expert consultation, web searches, and contacts with vendors, we estimated capital equipment, labor, materials, and other costs associated with these investments. Results are presented by establishment size (small and large) and species (beef, pork, chicken, and turkey), when applicable. For example, the cost of developing food safety plans, such as HACCP, SSOP, and sampling plans, can range from approximately $6000 to $87000, depending on the type of plan and establishment size. Food safety training costs from approximately $120 to $2500 per employee, depending on the course and type of employee. The costs of third-party audits range from approximately $13000 to $24000 per audit, and establishments are often subject to multiple audits per year. Knowing the cost of these investments will allow researchers and regulators to better assess the effects of food safety regulations and evaluate cost-effective alternatives. © 2017 Institute of Food Technologists®.
Methods for planning a statistical POD study
NASA Astrophysics Data System (ADS)
Koh, Y.-M.; Meeker, W. Q.
2013-01-01
The most common question asked of a statistician is "How large should my sample be?" In NDE applications, the most common questions asked of a statistician are "How many specimens do I need and what should be the distribution of flaw sizes?" Although some useful general guidelines exist (e.g., in MIL-HDBK-1823), it is possible to use statistical tools to provide more definitive guidelines and to allow comparison among different proposed study plans. One can assess the performance of a proposed POD study plan by obtaining computable expressions for estimation precision. This allows for a quick and easy assessment of tradeoffs and comparison of various alternative plans. We use a signal-response dataset obtained from MIL-HDBK-1823 to illustrate the ideas.
Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.
Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E
2014-02-28
The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. By accounting for any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful for generalizing our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of the number of variables to the sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Beaty, David W.; Allen, Carlton C.; Bass, Deborah S.; Buxbaum, Karen L.; Campbell, James K.; Lindstrom, David J.; Miller, Sylvia L.; Papanastassiou, Dimitri A.
2009-10-01
It has been widely understood for many years that an essential component of a Mars Sample Return mission is a Sample Receiving Facility (SRF). The purpose of such a facility would be to take delivery of the flight hardware that lands on Earth, open the spacecraft and extract the sample container and samples, and conduct an agreed-upon test protocol, while ensuring strict containment and contamination control of the samples while in the SRF. Any samples that are found to be non-hazardous (or are rendered non-hazardous by sterilization) would then be transferred to long-term curation. Although the general concept of an SRF is relatively straightforward, there has been considerable discussion about implementation planning. The Mars Exploration Program carried out an analysis of the attributes of an SRF to establish its scope, including minimum size and functionality, budgetary requirements (capital cost, operating costs, cost profile), and development schedule. The approach was to arrange for three independent design studies, each led by an architectural design firm, and compare the results. While there were many design elements in common identified by each study team, there were significant differences in the way human operators were to interact with the systems. In aggregate, the design studies provided insight into the attributes of a future SRF and the complex factors to consider for future programmatic planning.
Decision-theoretic approach to data acquisition for transit operations planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritchie, S.G.
The most costly element of transportation planning and modeling activities in the past has usually been that of data acquisition. This is even truer today when the unit costs of data collection are increasing rapidly and at the same time budgets are severely limited by continuing policies of fiscal austerity in the public sector. The overall objectives of this research were to improve the decisions and decision-making capabilities of transit operators or planners in short-range transit planning, and to improve the quality and cost-effectiveness of associated route or corridor-level data collection and service monitoring activities. A new approach was presented for sequentially updating the parameters of both simple and multiple linear regression models with stochastic regressors, and for determining the expected value of sample information and expected net gain of sampling for associated sample designs. A new approach was also presented for estimating and updating (both spatially and temporally) the parameters of multinomial logit discrete choice models, and for determining associated optimal sample designs for attribute-based and choice-based sampling methods. The approach provides an effective framework for addressing the issue of optimal sampling method and sample size, which to date have been largely unresolved. The application of these methodologies and the feasibility of the decision-theoretic approach was illustrated with a hypothetical case study example.
Flagg, Lee A; Sen, Bisakha; Kilgore, Meredith; Locher, Julie L
2014-09-01
This study examined the extent to which the gendered division of labour persists within households in the USA with regard to meal planning/preparation and food shopping activities, through a secondary analysis of cross-sectional data from the 2007-2008 US National Health and Nutrition Examination Survey, using a sub-sample of 3195 adults at least 20 years old who had a spouse or partner. Analyses revealed that the majority of women and men reported they shared in both meal planning/preparing and food shopping activities (meal planning/preparation: women 54% and men 56%; food shopping: women 60% and men 57%). Results from multinomial logistic regression analyses indicated that, compared with men, women were more likely to take primary responsibility than to share this responsibility and less likely to report having no responsibility for these tasks. Gender differences were observed for age/cohort, education and household size. This study may have implications for public health nutritional initiatives and the well-being of families in the USA.
The King Pre-Retirement Checklist: Assessing Differences in Pre-Retirement Planning.
ERIC Educational Resources Information Center
Zitzow, Darryl; King, Donald N.
In an effort to assess the retirement preparedness of Midwestern populations above the age of 28, the King Pre-Retirement Checklist was administered to a sampling of 458 persons randomly selected and proportionally stratified by geographic location and community size. Factors examined were financial, social, family cohesion, mobility/health,…
Sibling Influences on the Career Plans of Male and Female Youth.
ERIC Educational Resources Information Center
Morgan, William R.
This study was conducted to establish the existence, direction, and size of sibling comparison effects on the occupational aspirations of youth. Data were drawn from the youth cohort subsample of the National Longitudinal Surveys of Labor Market Experience, sampling those with expressed occupational aspirations who come from homes with four or…
Estimating the breeding population of long-billed curlew in the United States
Stanley, T.R.; Skagen, S.K.
2007-01-01
Determining population size and long-term trends in population size for species of high concern is a priority of international, national, and regional conservation plans. Long-billed curlews (Numenius americanus) are a species of special concern in North America due to apparent declines in their population. Because long-billed curlews are not adequately monitored by existing programs, we undertook a 2-year study with the goals of 1) determining present long-billed curlew distribution and breeding population size in the United States and 2) providing recommendations for a long-term long-billed curlew monitoring protocol. We selected a stratified random sample of survey routes in 16 western states for sampling in 2004 and 2005, and we analyzed count data from these routes to estimate detection probabilities and abundance. In addition, we evaluated habitat along roadsides to determine how well roadsides represented habitat throughout the sampling units. We estimated there were 164,515 (SE = 42,047) breeding long-billed curlews in 2004, and 109,533 (SE = 31,060) breeding individuals in 2005. These estimates far exceed currently accepted estimates based on expert opinion. We found that habitat along roadsides was representative of long-billed curlew habitat in general. We make recommendations for improving sampling methodology, and we present power curves to provide guidance on minimum sample sizes required to detect trends in abundance.
The change of family size and structure in China.
1992-04-01
With socioeconomic development and the change in people's values, there has been significant change in family size and structure in China. According to the 10% sample data from the 4th Census, a family has 3.97 persons on average, 0.44 persons fewer than at the 3rd Census; among all types of families, 1-generation families account for 13.5%, 3-generation families for 18.5%, and 2-generation families for 68%. Instead of large families consisting of several generations and many members, small families have now become the principal family type in China. According to the analysis of the sample data from the 4th Census, family size is mainly decided by the fertility level in particular regions, and it also depends on economic development. So family size is usually smaller in more developed regions, such as Beijing, Tianjin, Zhejiang, and Liaoning, as well as Shanghai, where family size is only 3.08 persons; and family size is generally larger in less developed regions such as Qinghai, Guangxi, Gansu, Xinjiang, and Tibet, where family size is as large as 5.13 persons. Specialists regard the increase in the number of families as one of the major consequences of economic development, change of living style, and improvement of living standards. Young people now are more inclined to live separately from their parents. However, the increase in the number of families will undoubtedly place more pressure on housing and require more furniture and other durable consumer goods from the market. Therefore, the government and related social sectors should make corresponding plans and policies to cope with the increase in the number of families and the shrinking of family size, so as to promote family planning and socioeconomic development and to create better social circumstances for small families.
Babalola, Stella; Figueroa, Maria-Elena; Krenn, Susan
2017-11-01
Literature abounds with evidence on the effectiveness of individual mass media interventions on contraceptive use and other health behaviors. There have been, however, very few studies summarizing effect sizes of mass media health communication campaigns in sub-Saharan Africa. In this study, we used meta-analytic techniques to pool data from 47 demographic and health surveys conducted between 2005 and 2015 in 31 sub-Saharan African countries and estimate the prevalence of exposure to family planning-related mass media communication. We also estimated the average effect size of exposure to mass media communication after adjusting for endogeneity. We performed meta-regression to assess the moderating role of selected variables on effect size. On average, 44% of women in sub-Saharan Africa were exposed to family planning-related mass media interventions in the year preceding the survey. Overall, exposure was associated with an effect size equivalent to an odds ratio of 1.93. More recent surveys demonstrated smaller effect sizes than earlier ones, while the effects were larger in lower contraceptive prevalence settings than in higher prevalence ones. The findings have implications for designing communication programs, setting expectations about communication impact, and guiding decisions about sample size estimation for mass media evaluation studies.
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process.
Haines, Aaron M; Zak, Matthew; Hammond, Katie; Scott, J Michael; Goble, Dale D; Rachlow, Janet L
2013-08-13
United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide a measure of uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size, and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate, and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty, and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data.
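The minimum-detectable-difference idea the authors suggest can be illustrated with a standard two-sided formulation; the sketch below is a textbook version, not necessarily the authors' exact calculation, and the survey standard errors are hypothetical.

    from scipy.stats import norm

    def minimum_detectable_difference(se_diff, alpha=0.05, power=0.80):
        # Smallest true change in population size detectable at the given
        # alpha and power, for a two-sided z-test on the difference.
        return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) * se_diff

    # Hypothetical example: two independent surveys, each with SE = 150
    se_diff = (150**2 + 150**2) ** 0.5
    print(round(minimum_detectable_difference(se_diff)))  # about 594 animals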
Sample size calculations for the design of cluster randomized trials: A summary of methodology.
Gao, Fei; Earnest, Arul; Matchar, David B; Campbell, Michael J; Machin, David
2015-05-01
Cluster randomized trial designs are growing in popularity in, for example, cardiovascular medicine research and other clinical areas, and this growth has stimulated parallel statistical developments in the design and analysis of these trials. Nevertheless, reviews suggest that design issues associated with cluster randomized trials are often poorly appreciated, and there remain inadequacies in, for example, describing how the trial size is determined and how the associated results are presented. In this paper, our aim is to provide pragmatic guidance for researchers on methods of calculating sample sizes. We focus attention on designs with the primary purpose of comparing two interventions with respect to continuous, binary, ordered categorical, incidence rate, and time-to-event outcome variables. Issues of aggregate and non-aggregate cluster trials, adjustment for variation in cluster size, and the effect size are detailed. The problem of establishing the anticipated magnitude of between- and within-cluster variation, to enable planning values of the intra-cluster correlation coefficient and the coefficient of variation, is also described. Illustrative examples of calculations of trial sizes for each endpoint type are included. Copyright © 2015 Elsevier Inc. All rights reserved.
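As a concrete flavor of the kind of calculation the paper covers, here is a hedged Python sketch for a continuous outcome that inflates the individually randomized sample size by a design effect allowing for variable cluster size; the design-effect formula is a commonly cited one and all inputs are hypothetical.

    from math import ceil
    from scipy.stats import norm

    def clusters_per_arm(delta, sd, m_bar, icc, cv=0.0, alpha=0.05, power=0.80):
        # Continuous outcome, two-sided test: inflate the individually
        # randomized size by a design effect that allows for variable
        # cluster size (cv = SD/mean of cluster sizes).
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_individual = 2 * (z * sd / delta) ** 2        # subjects per arm
        design_effect = 1 + ((cv**2 + 1) * m_bar - 1) * icc
        return ceil(n_individual * design_effect / m_bar)

    # Hypothetical inputs: effect 0.5 SD, 20 subjects per cluster, ICC 0.05
    print(clusters_per_arm(delta=0.5, sd=1.0, m_bar=20, icc=0.05, cv=0.4))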
Critical considerations when planning experimental in vivo studies in dental traumatology.
Andreasen, Jens O; Andersson, Lars
2011-08-01
In vivo studies are sometimes needed to understand healing processes after trauma. For several reasons, not the least ethical, such studies have to be carefully planned and important considerations have to be taken into account about suitability of the experimental model, sample size and optimizing the accuracy of the analysis. Several manuscripts of in vivo studies are submitted for publication to Dental Traumatology and rejected because of inadequate design, methodology or insufficient documentation of the results. The authors have substantial experience in experimental in vivo studies of tissue healing in dental traumatology and share their knowledge regarding critical considerations when planning experimental in vivo studies. © 2011 John Wiley & Sons A/S.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aznar, Alexandra; Day, Megan; Doris, Elizabeth
The report analyzes and presents information learned from a sample of 20 cities across the United States, from New York City to Park City, Utah, including a diverse sample of population size, utility type, region, annual greenhouse gas reduction targets, vehicle use, and median household income. The report compares climate, sustainability, and energy plans to better understand where cities are taking energy-related actions and how they are measuring impacts. Some common energy-related goals focus on reducing city-wide carbon emissions, improving energy efficiency across sectors, increasing renewable energy, and increasing biking and walking.
PM2.5 Monitors in New England
2017-04-10
The New England states currently operate a network of 58 ambient PM2.5 air quality monitors that meet EPA's Federal Reference Method (FRM) for PM2.5, which is necessary for the resultant data to be used for attainment/non-attainment purposes. These monitors collect particles smaller than 2.5 microns in size from the ambient air on a filter, which is weighed before and after sampling to produce a 24-hour sample concentration.
Wellek, Stefan
2017-02-28
In current practice, the most frequently applied approach to the handling of ties in the Mann-Whitney-Wilcoxon (MWW) test is based on the conditional distribution of the sum of mid-ranks, given the observed pattern of ties. Starting from this conditional version of the testing procedure, a sample size formula was derived and investigated by Zhao et al. (Stat Med 2008). In contrast, the approach we pursue here is a nonconditional one exploiting explicit representations for the variances of and the covariance between the two U-statistics estimators involved in the Mann-Whitney form of the test statistic. The accuracy of both ways of approximating the sample sizes required for attaining a prespecified level of power in the MWW test for superiority with arbitrarily tied data is comparatively evaluated by means of simulation. The key qualitative conclusions to be drawn from these numerical comparisons are as follows: With the sample sizes calculated by means of the respective formula, both versions of the test maintain the level and the prespecified power with about the same degree of accuracy. Despite the equivalence in terms of accuracy, the sample size estimates obtained by means of the new formula are in many cases markedly lower than that calculated for the conditional test. Perhaps, a still more important advantage of the nonconditional approach based on U-statistics is that it can be also adopted for noninferiority trials. Copyright © 2016 John Wiley & Sons, Ltd.
Robustness of methods for blinded sample size re-estimation with overdispersed count data.
Schneider, Simon; Schmidli, Heinz; Friede, Tim
2013-09-20
Counts of events are increasingly common as primary endpoints in randomized clinical trials. With between-patient heterogeneity leading to variances in excess of the mean (referred to as overdispersion), statistical models reflecting this heterogeneity by mixtures of Poisson distributions are frequently employed. Sample size calculation in the planning of such trials requires knowledge of the nuisance parameters, that is, the control (or overall) event rate and the overdispersion parameter. Usually, there is only little prior knowledge regarding these parameters in the design phase, resulting in considerable uncertainty regarding the sample size. In this situation internal pilot studies have been found very useful, and recently several blinded procedures for sample size re-estimation have been proposed for overdispersed count data, one of which is based on an EM algorithm. In this paper we investigate the EM-algorithm-based procedure with respect to aspects of its implementation by studying the algorithm's dependence on the choice of convergence criterion, and we find that the procedure is sensitive to the choice of the stopping criterion in scenarios relevant to clinical practice. We also compare the EM-based procedure to other competing procedures regarding operating characteristics such as sample size distribution and power. Furthermore, the robustness of these procedures to deviations from the model assumptions is explored. We find that some of the procedures are robust to at least moderate deviations. The results are illustrated using data from the US National Heart, Lung and Blood Institute sponsored Asymptomatic Cardiac Ischemia Pilot study. Copyright © 2013 John Wiley & Sons, Ltd.
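A rough flavor of blinded nuisance-parameter re-estimation can be given with a method-of-moments stand-in for the paper's EM procedure (a deliberate simplification, not the authors' algorithm); the per-arm sample size formula is a standard negative binomial rate-ratio approximation, and all inputs are illustrative.

    import numpy as np
    from scipy.stats import norm

    def blinded_reestimate(pooled_counts, rate_ratio, alpha=0.05, power=0.80):
        # Method-of-moments stand-in for the EM step: estimate the overall
        # rate and NB shape k (Var = mu + mu^2/k) from blinded pooled data,
        # split the rate under the assumed rate ratio, then recompute n.
        mu = np.mean(pooled_counts)
        s2 = np.var(pooled_counts, ddof=1)
        k = mu**2 / max(s2 - mu, 1e-9)
        mu0 = 2 * mu / (1 + rate_ratio)        # control rate implied by pooling
        mu1 = rate_ratio * mu0
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n = z**2 * (1 / mu0 + 1 / mu1 + 2 / k) / np.log(rate_ratio) ** 2
        return int(np.ceil(n))                  # subjects per arm

    rng = np.random.default_rng(0)
    pilot = rng.negative_binomial(n=2, p=2 / 3.5, size=60)  # mean 1.5, shape 2
    print(blinded_reestimate(pilot, rate_ratio=0.8))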
Clarke-Harris, Dionne; Fleischer, Shelby J
2003-06-01
Although the production and economic importance of vegetable amaranth (Amaranthus viridis L. and A. dubius Mart. ex Thell.) are increasing in diversified peri-urban farms in Jamaica, lepidopteran herbivory is common even during weekly pyrethroid applications. We developed and validated a sampling plan, and investigated insecticides with new modes of action, for a complex of five species (Pyralidae: Spoladea recurvalis (F.) and Herpetogramma bipunctalis (F.); Noctuidae: Spodoptera exigua (Hubner), S. frugiperda (J. E. Smith), and S. eridania Stoll). Significant within-plant variation occurred with H. bipunctalis, and a six-leaf sample unit including leaves from the inner and outer whorl was selected to sample all species. Larval counts best fit a negative binomial distribution. We developed a sequential sampling plan using a threshold of one larva per sample unit and the fitted distribution with a k_c of 0.645. When compared with a fixed plan of 25 plants on 32 farms, sequential sampling recommended the same management decision on 87.5% of farms, recommended additional samples on 9.4%, and gave inaccurate recommendations on 3.1%, while reducing sample size by 46%. Insecticide application frequency was reduced 33-60% when management decisions were based on sampled data compared with grower standards, with no effect on crop damage. Damage remained high or variable (10-46%) with pyrethroid applications. Lepidopteran control was dramatically improved with ecdysone agonists (tebufenozide) or microbial metabolites (spinosyns and emamectin benzoate). This work facilitates resistance management efforts concurrent with the introduction of newer modes of action for lepidopteran control in leafy vegetable production in the Caribbean.
An Educational Software for Simulating the Sample Size of Molecular Marker Experiments
ERIC Educational Resources Information Center
Helms, T. C.; Doetkott, C.
2007-01-01
We developed educational software to show graduate students how to plan molecular marker experiments. These computer simulations give the students feedback on the precision of their experiments. The objective of the software was to show students using a hands-on approach how: (1) environmental variation influences the range of the estimates of the…
Fitts, Douglas A
2017-09-21
The variable criteria sequential stopping rule (vcSSR) is an efficient way to add sample size to planned ANOVA tests while holding the observed rate of Type I errors, α_o, constant. The only difference from regular null hypothesis testing is that criteria for stopping the experiment are obtained from a table based on the desired power, rate of Type I errors, and beginning sample size. The vcSSR was developed using between-subjects ANOVAs, but it should work with p values from any type of F test. In the present study, α_o remained constant at the nominal level when using the previously published table of criteria with repeated measures designs with various numbers of treatments per subject, Type I error rates, values of ρ, and four different sample size models. New power curves allow researchers to select the optimal sample size model for a repeated measures experiment. The criteria held α_o constant either when used with a multiple correlation that varied the sample size model and the number of predictor variables, or when used with MANOVA with multiple groups and two levels of a within-subject variable at various levels of ρ. Although not recommended for use with χ² tests such as the Friedman rank ANOVA test, the vcSSR produces predictable results based on the relation between F and χ². Together, the data confirm the view that the vcSSR can be used to control Type I errors during sequential sampling with any t- or F-statistic rather than being restricted to certain ANOVA designs.
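The following schematic (Python) shows the shape of a vcSSR loop for a two-group comparison; the stopping criteria, step sizes, maximum sample size, and the draw_more data-collection callback are all placeholders, since the real criteria come from Fitts' published tables.

    from scipy import stats

    # Placeholder criteria: real values come from Fitts' published tables
    # and depend on power, alpha, and the starting sample size.
    LOWER, UPPER = 0.01, 0.36
    N_ADD, N_MAX = 4, 24

    def vcssr_two_groups(a, b, draw_more):
        # Schematic vcSSR loop: test, then stop or add subjects per group
        # until a criterion (or the maximum sample size) is reached.
        a, b = list(a), list(b)
        while True:
            p = stats.f_oneway(a, b).pvalue
            if p <= LOWER:
                return "reject H0", len(a)
            if p >= UPPER or len(a) >= N_MAX:
                return "stop without rejecting", len(a)
            a += draw_more(N_ADD)   # draw_more stands in for running
            b += draw_more(N_ADD)   # N_ADD more subjects per group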
Bridging the gap: a review of dose investigations in paediatric investigation plans
Hampson, Lisa V; Herold, Ralf; Posch, Martin; Saperia, Julia; Whitehead, Anne
2014-01-01
Aims In the EU, development of new medicines for children should follow a prospectively agreed paediatric investigation plan (PIP). Finding the right dose for children is crucial but challenging due to the variability of pharmacokinetics across age groups and the limited sample sizes available. We examined strategies adopted in PIPs to support paediatric dosing recommendations to identify common assumptions underlying dose investigations and the attempts planned to verify them in children. Methods We extracted data from 73 PIP opinions recently adopted by the Paediatric Committee of the European Medicines Agency. These opinions represented 79 medicinal development programmes and comprised a total of 97 dose investigation studies. We identified the design of these dose investigation studies, recorded the analyses planned and determined the criteria used to define target doses. Results Most dose investigation studies are clinical trials (83 of 97) that evaluate a single dosing rule. Sample sizes used to investigate dose are highly variable across programmes, with smaller numbers used in younger children (< 2 years). Many studies (40 of 97) do not pre-specify a target dose criterion. Of those that do, most (33 of 57 studies) guide decisions using pharmacokinetic data alone. Conclusions Common assumptions underlying dose investigation strategies include dose proportionality and similar exposure−response relationships in adults and children. Few development programmes pre-specify steps to verify assumptions in children. There is scope for the use of Bayesian methods as a framework for synthesizing existing information to quantify prior uncertainty about assumptions. This process can inform the design of optimal drug development strategies. PMID:24720849
Francis, Jill J; Johnston, Marie; Robertson, Clare; Glidewell, Liz; Entwistle, Vikki; Eccles, Martin P; Grimshaw, Jeremy M
2010-12-01
In interview studies, sample size is often justified by interviewing participants until reaching 'data saturation'. However, there is no agreed method of establishing this. We propose principles for deciding saturation in theory-based interview studies (where conceptual categories are pre-established by existing theory). First, specify a minimum sample size for initial analysis (initial analysis sample). Second, specify how many more interviews will be conducted without new ideas emerging (stopping criterion). We demonstrate these principles in two studies, based on the theory of planned behaviour, designed to identify three belief categories (Behavioural, Normative and Control), using an initial analysis sample of 10 and stopping criterion of 3. Study 1 (retrospective analysis of existing data) identified 84 shared beliefs of 14 general medical practitioners about managing patients with sore throat without prescribing antibiotics. The criterion for saturation was achieved for Normative beliefs but not for other beliefs or studywise saturation. In Study 2 (prospective analysis), 17 relatives of people with Paget's disease of the bone reported 44 shared beliefs about taking genetic testing. Studywise data saturation was achieved at interview 17. We propose specification of these principles for reporting data saturation in theory-based interview studies. The principles may be adaptable for other types of studies.
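The two principles are simple enough to encode directly; the sketch below assumes each interview has already been coded into a set of belief labels, which is our framing for illustration rather than the authors' software.

    def reached_saturation(interviews, initial_sample=10, stopping_criterion=3):
        # interviews: iterable of sets of coded belief labels, in interview
        # order. After the initial analysis sample, stop once
        # `stopping_criterion` consecutive interviews add no new beliefs.
        seen, run = set(), 0
        for i, beliefs in enumerate(interviews, start=1):
            new = set(beliefs) - seen
            seen |= set(beliefs)
            if i <= initial_sample:
                continue
            run = 0 if new else run + 1
            if run >= stopping_criterion:
                return True, i      # saturation declared at interview i
        return False, len(interviews)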
Probing the Magnetic Causes of CMEs: Free Magnetic Energy More Important Than Either Size Or Twist
NASA Technical Reports Server (NTRS)
Falconer, D. A.; Moore, R. L.; Gary, G. A.
2006-01-01
To probe the magnetic causes of CMEs, we have examined three types of magnetic measures: size, twist and total nonpotentiality (or total free magnetic energy) of an active region. Total nonpotentiality is roughly the product of size times twist. For predominately bipolar active regions, we have found that total nonpotentiality measures have the strongest correlation with future CME productivity (approx. 75% prediction success rate), while size and twist measures each have a weaker correlation with future CME productivity (approx. 65% prediction success rate) (Falconer, Moore, & Gary, ApJ, 644, 2006). For multipolar active regions, we find that the CME-prediction success rates for total nonpotentiality and size are about the same as for bipolar active regions. We also find that the size measure correlation with CME productivity is nearly all due to the contribution of size to total nonpotentiality. We have a total nonpotentiality measure that can be obtained from a line-of-sight magnetogram of the active region and that is as strongly correlated with CME productivity as are any of our total-nonpotentiality measures from deprojected vector magnetograms. We plan to further expand our sample by using MDI magnetograms of each active region in our sample to determine its total nonpotentiality and size on each day that the active region was within 30 deg. of disk center. The resulting increase in sample size will improve our statistics and allow us to investigate whether the nonpotentiality threshold for CME production is nearly the same or significantly different for multipolar regions than for bipolar regions. In addition, we will investigate the time rates of change of size and total nonpotentiality as additional causes of CME productivity.
Correlational effect size benchmarks.
Bosco, Frank A; Aguinis, Herman; Singh, Kulraj; Field, James G; Pierce, Charles A
2015-03-01
Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relationship to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provide information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Johnston, Lisa G; Hakim, Avi J; Dittrich, Samantha; Burnett, Janet; Kim, Evelyn; White, Richard G
2016-08-01
Reporting key details of respondent-driven sampling (RDS) survey implementation and analysis is essential for assessing the quality of RDS surveys. RDS is both a recruitment and analytic method and, as such, it is important to adequately describe both aspects in publications. We extracted data from peer-reviewed literature published through September, 2013 that reported collected biological specimens using RDS. We identified 151 eligible peer-reviewed articles describing 222 surveys conducted in seven regions throughout the world. Most published surveys reported basic implementation information such as survey city, country, year, population sampled, interview method, and final sample size. However, many surveys did not report essential methodological and analytical information for assessing RDS survey quality, including number of recruitment sites, seeds at start and end, maximum number of waves, and whether data were adjusted for network size. Understanding the quality of data collection and analysis in RDS is useful for effectively planning public health service delivery and funding priorities.
Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L
2014-01-01
The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation, using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I), was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study we can generally state that: An analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate an RSD_sampling in the same region as the RSD_analysis for a typical quantitative method of analysis for the most common, powdered, illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit drugs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Bisseleua, D H B; Vidal, Stefan
2011-02-01
The spatio-temporal distribution of Sahlbergella singularis Haglund, a major pest of cacao trees (Theobroma cacao) (Malvaceae), was studied for 2 yr in traditional cacao forest gardens in the humid forest area of southern Cameroon. The first objective was to analyze the dispersion of this insect on cacao trees. The second objective was to develop sampling plans based on fixed levels of precision for estimating S. singularis populations. The following models were used to analyze the data: Taylor's power law, Iwao's patchiness regression, the Nachman model, and the negative binomial distribution. Our results document that Taylor's power law was a better fit for the data than the Iwao and Nachman models. Taylor's b and Iwao's β were both significantly >1, indicating that S. singularis aggregated on specific trees. This result was further supported by the calculated common k of 1.75444. Iwao's α was significantly <0, indicating that the basic distribution component of S. singularis was the individual insect. Comparison of the negative binomial (NBD) and Nachman models indicated that the NBD model was appropriate for studying S. singularis distribution. Optimal sample sizes for fixed precision levels of 0.10, 0.15, and 0.25 were estimated with Taylor's regression coefficients. Required sample sizes increased dramatically with increasing levels of precision. This is the first study on S. singularis dispersion in cacao plantations. The sampling plans presented here should be a tool for research on population dynamics and pest management decisions for mirid bugs on cacao. © 2011 Entomological Society of America
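The fixed-precision calculation referred to above follows directly from Taylor's power law (variance = a * mean^b); the sketch below uses placeholder coefficients a and b (the paper reports its own fitted values) and D = SE/mean as the precision measure.

    import math

    # Taylor's power law: variance = a * mean**b. Placeholder coefficients;
    # the paper reports its own fitted a and b for S. singularis.
    a, b = 2.5, 1.4

    def fixed_precision_n(mean_density, D):
        # Trees to sample for fixed relative precision D = SE/mean:
        # n = a * m**(b - 2) / D**2
        return math.ceil(a * mean_density ** (b - 2) / D**2)

    for D in (0.10, 0.15, 0.25):
        print(D, [fixed_precision_n(m, D) for m in (0.5, 1.0, 2.0, 5.0)])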
Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S
2014-09-01
Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.
Cup Implant Planning Based on 2-D/3-D Radiographic Pelvis Reconstruction-First Clinical Results.
Schumann, Steffen; Sato, Yoshinobu; Nakanishi, Yuki; Yokota, Futoshi; Takao, Masaki; Sugano, Nobuhiko; Zheng, Guoyan
2015-11-01
In the following, we will present a newly developed X-ray calibration phantom and its integration for 2-D/3-D pelvis reconstruction and subsequent automatic cup planning. Two different planning strategies were applied and evaluated with clinical data. Two different cup planning methods were investigated: The first planning strategy is based on a combined pelvis and cup statistical atlas. Thereby, the pelvis part of the combined atlas is matched to the reconstructed pelvis model, resulting in an optimized cup planning. The second planning strategy analyzes the morphology of the reconstructed pelvis model to determine the best fitting cup implant. The first planning strategy was compared to 3-D CT-based planning. Digitally reconstructed radiographs of THA patients with differently severe pathologies were used to evaluate the accuracy of predicting the cup size and position. Within a discrepancy of one cup size, the size was correctly identified in 100% of the cases for Crowe type I datasets and in 77.8% of the cases for Crowe type II, III, and IV datasets. The second planning strategy was analyzed with respect to the eventually implanted cup size. In seven patients, the estimated cup diameter was correct within one cup size, while the estimation for the remaining five patients differed by two cup sizes. While both planning strategies showed the same prediction rate with a discrepancy of one cup size (87.5%), the prediction of the exact cup size was increased for the statistical atlas-based strategy (56%) in contrast to the anatomically driven approach (37.5%). The proposed approach demonstrated the clinical validity of using 2-D/3-D reconstruction technique for cup planning.
Mainard, D; Barbier, O; Knafo, Y; Belleville, R; Mainard-Simard, L; Gross, J-B
2017-06-01
In total hip arthroplasty (THA), the acetabular cup and femoral stem must be correctly sized and positioned to avoid intraoperative and postoperative complications, achieve good functional outcomes and ensure long-term survival. Current two-dimensional (2D) techniques do not provide sufficient accuracy, while low-dose biplanar X-rays (EOS) had not been assessed in this indication. Therefore, we performed a case-control study to: (1) evaluate the prediction of stem and cup size for a new 3D planning technique (stereoradiographic imaging plus 3D modeling) in comparison to 2D templating on film radiographs and (2) evaluate the accuracy and reproducibility of this 3D technique for preoperative THA planning. We hypothesized that accuracy and reproducibility would be better with the 3D method than with the 2D method. Stem and cup sizes were retrospectively determined by two senior surgeons, twice, for a total of 31 unilateral primary THA patients in this pilot study, using 3D preplanning software on low-dose biplanar X-rays and with 2D templating on conventional anteroposterior (AP) film radiographs. Patients with a modular neck or dual-mobility prosthesis were excluded. All patients but one had primary osteoarthritis; one patient, whose THA followed trauma, did not have a cup implanted. The retrospectively planned sizes were compared to the sizes selected during surgery, and intraclass correlation coefficients (ICC) were calculated. 3D planning predicted stem size more accurately than 2D templating: stem sizes were planned within one size in 26/31 (84%) of cases in 3D versus 21/31 (68%) in 2D (P=0.04). 3D and 2D planning accuracies were not significantly different for cup size: cup sizes were planned within one size in 28/30 (92%) of cases in 3D versus 26/30 (87%) in 2D (P=0.30). ICCs for stem size were 0.88 vs. 0.91 for 3D and 2D, respectively. Inter-operator ICCs for cup size were 0.84 vs. 0.71, respectively. Repetitions of the 3D planning were within one size (except for one stem), with the majority predicting the same size. The increased accuracy in 3D may be due to the use of actual-size (non-magnified) images and to judging fit on AP and lateral images simultaneously. Results for other implant components may differ from those presented. Size selection may improve further with planning experience, based on a feedback loop between planning and surgical execution. Level III. Retrospective case-control study. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes
Bundy, Brian; Krischer, Jeffrey P.
2016-01-01
The area under the curve (AUC) of C-peptide following a 2-hour mixed meal tolerance test, measured in 481 individuals enrolled in 5 prior TrialNet studies of recent onset type 1 diabetes from baseline to 12 months after enrollment, was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide trajectory that can be used in Observed vs. Expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
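The roughly 50% saving is consistent with the textbook relation that covariate adjustment multiplies the required sample size by (1 - R^2); the sketch below uses hypothetical effect size, SD, and R^2 values, not the paper's estimates.

    from math import ceil
    from scipy.stats import norm

    def n_per_arm(delta, sd, alpha=0.05, power=0.80, r_squared=0.0):
        # Two-arm continuous endpoint; covariate adjustment shrinks the
        # residual variance by (1 - R^2).
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(2 * (z * sd / delta) ** 2 * (1 - r_squared))

    # Hypothetical numbers: adjusting for predictors with R^2 = 0.5 roughly
    # halves the target sample size, mirroring the reduction reported above.
    print(n_per_arm(0.2, 0.4), n_per_arm(0.2, 0.4, r_squared=0.5))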
Improvement of sampling plans for Salmonella detection in pooled table eggs by use of real-time PCR.
Pasquali, Frédérique; De Cesare, Alessandra; Valero, Antonio; Olsen, John Emerdhal; Manfreda, Gerardo
2014-08-01
Eggs and egg products have been described as the most critical food vehicles of salmonellosis. The prevalence and level of contamination of Salmonella on table eggs are low, which severely affects the sensitivity of the sampling plans applied voluntarily in some European countries, where one to five pools of 10 eggs are tested by the culture-based reference method ISO 6579:2004. In the current study we compared the testing sensitivity of the reference culture method ISO 6579:2004 and an alternative real-time PCR method on Salmonella-contaminated egg pools of different sizes (4-9 uninfected eggs mixed with one contaminated egg) and contamination levels (10^0-10^1, 10^1-10^2, and 10^2-10^3 CFU/eggshell). Two hundred and seventy samples, corresponding to 15 replicates per pool size and inoculum level, were tested. At the lowest contamination level, real-time PCR detected Salmonella in 40% of contaminated pools vs 12% using ISO 6579. The results were used in a Monte Carlo simulation to estimate the lowest number of sample units that must be tested in order to have 95% certainty of not falsely accepting a contaminated lot. According to this simulation, at least 16 pools of 10 eggs each need to be tested by ISO 6579 to reach this confidence level, whereas the minimum number was reduced to 8 pools of 9 eggs each when real-time PCR was applied as the analytical method. This result underlines the importance of including analytical methods with higher sensitivity in order to improve the efficiency of sampling and reduce the number of samples to be tested. Copyright © 2013 Elsevier B.V. All rights reserved.
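Under a simplifying independence assumption, the number of pools needed can also be checked in closed form; this sketch will not reproduce the paper's Monte Carlo results, which model pool size and contamination level explicitly, and the per-pool sensitivities used here are illustrative only.

    def pools_needed(p_detect, confidence=0.95, n_max=200):
        # Smallest n with P(at least one positive pool) >= confidence,
        # assuming independent pools with per-pool sensitivity p_detect.
        for n in range(1, n_max + 1):
            if 1 - (1 - p_detect) ** n >= confidence:
                return n
        return None

    # Illustrative per-pool sensitivities at a low contamination level
    print(pools_needed(0.12))   # low-sensitivity culture method
    print(pools_needed(0.40))   # higher-sensitivity real-time PCR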
Herath, Samantha; Yap, Elaine
2018-02-01
In diagnosing peripheral pulmonary lesions (PPL), radial endobronchial ultrasound (R-EBUS) is emerging as a safer method in comparison to CT-guided biopsy. Despite the better safety profile, the yield of R-EBUS remains lower (73%) than that of CT-guided biopsy (90%) due to the smaller size of samples. We adopted a hybrid method by adding cryobiopsy via the R-EBUS Guide Sheath (GS) to produce larger, non-crushed samples to improve diagnostic capability and enhance molecular testing. We report six prospective patients who underwent this procedure in our institution. R-EBUS samples were obtained via conventional sampling methods (needle aspiration, forceps biopsy, and cytology brush), followed by a cryobiopsy. An endobronchial blocker was placed near the planned area of biopsy in advance and inflated post-biopsy to minimize the risk of bleeding in all patients. A chest X-ray was performed 1 h post-procedure. All the PPLs were visualized with R-EBUS. The mean diameter of cryobiopsy samples was twice the size of forceps biopsy samples. In four patients, cryobiopsy samples were superior in size and in the number of malignant cells per high power field and were the preferred samples selected for mutation analysis and molecular testing. There was no pneumothorax or significant bleeding to report. Cryobiopsy samples were consistently larger and were the preferred samples for molecular testing, with an increase in the diagnostic yield and reduction in the need for repeat procedures, without hindering the marked safety profile of R-EBUS. Using an endobronchial blocker improves the safety of this procedure.
A sampling plan for riparian birds of the Lower Colorado River-Final Report
Bart, Jonathan; Dunn, Leah; Leist, Amy
2010-01-01
A sampling plan was designed for the Bureau of Reclamation for selected riparian birds occurring along the Colorado River from Lake Mead to the southerly International Boundary with Mexico. The goals of the sampling plan were to estimate long-term trends in abundance and investigate habitat relationships especially in new habitat being created by the Bureau of Reclamation. The initial objective was to design a plan for the Gila Woodpecker (Melanerpes uropygialis), Arizona Bell's Vireo (Vireo bellii arizonae), Sonoran Yellow Warbler (Dendroica petechia sonorana), Summer Tanager (Piranga rubra), Gilded Flicker (Colaptes chrysoides), and Vermilion Flycatcher (Pyrocephalus rubinus); however, too little data were obtained for the last two species. Recommendations were therefore based on results for the first four species. The study area was partitioned into plots of 7 to 23 hectares. Plot borders were drawn to place the best habitat for the focal species in the smallest number of plots so that survey efforts could be concentrated on these habitats. Double sampling was used in the survey. In this design, a large sample of plots is surveyed a single time, yielding estimates of unknown accuracy, and a subsample is surveyed intensively to obtain accurate estimates. The subsample is used to estimate detection ratios, which are then applied to the results from the extensive survey to obtain unbiased estimates of density and population size. These estimates are then used to estimate long-term trends in abundance. Four sampling plans for selecting plots were evaluated based on a simulation using data from the Breeding Bird Survey. The design with the highest power involved selecting new plots every year. Power with 80 plots surveyed per year was more than 80 percent for three of the four species. Results from the surveys were used to provide recommendations to the Bureau of Reclamation for their surveys of new habitat being created in the study area.
Increasing efficiency of preclinical research by group sequential designs
Piper, Sophie K.; Rex, Andre; Florez-Vargas, Oscar; Karystianis, George; Schneider, Alice; Wellwood, Ian; Siegerink, Bob; Ioannidis, John P. A.; Kimmelman, Jonathan; Dirnagl, Ulrich
2017-01-01
Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulated data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes, in the long run, only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility lead to resource savings of up to 30% compared to block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness of this research domain. PMID:28282371
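The kind of savings the authors describe can be reproduced in miniature with a simulated two-look design; the sketch below uses a Pocock-style interim boundary (nominal p ≈ 0.0294 for two looks at overall α = 0.05) rather than the authors' exact stopping rules:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
d, n_max, n_interim = 1.0, 18, 9
p_pocock = 0.0294   # nominal two-sided level for 2 Pocock looks at alpha = 0.05

used = []
for _ in range(10_000):
    a = rng.normal(d, 1, n_max)   # treated group, true effect size d
    b = rng.normal(0, 1, n_max)   # control group
    # interim look: stop early for efficacy if below the Pocock boundary
    p1 = stats.ttest_ind(a[:n_interim], b[:n_interim]).pvalue
    used.append(2 * n_interim if p1 < p_pocock else 2 * n_max)

used = np.array(used)
print(f"mean fraction of planned units consumed: {used.mean() / (2 * n_max):.2f}")
```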
NASA Astrophysics Data System (ADS)
Lucas, G.; Lénárt, C.; Solymosi, J.
2015-08-01
This paper introduces research done on the automatic preparation of remediation plans and navigation data for the precise guidance of heavy machinery in clean-up work after an industrial disaster. The input test data consist of a pollution extent shapefile derived from the processing of hyperspectral aerial survey data from the Kolontár red mud disaster. Three algorithms were developed and the respective scripts were written in Python. The first model draws a parcel clean-up plan. It tests four parcel orientations (0, 45, 90, and 135 degrees) and keeps the plan in which the clean-up parcels are least numerous, considering it the optimal spatial configuration. The second model shifts the clean-up parcels of a work plan both vertically and horizontally, following a grid pattern with a sampling distance of one fifth of the parcel width, and keeps the best shifted version, again with the aim of reducing the final number of parcel features. The last model draws a navigation line in the middle of each clean-up parcel. The models work efficiently and achieve automatic optimized plan generation (parcels and navigation lines). Applying the first model, we demonstrated that, depending on the size and geometry of the features of the contaminated area layer, the number of clean-up parcels generated by the model varies by 4% to 38% from plan to plan. Such significant variation in the resulting feature numbers shows that identifying the optimal orientation can save work, time, and money in remediation. The various tests demonstrated that the model gains efficiency when (1) the individual features of the contaminated area have a pronounced orientation (the features are long), and (2) the size of the pollution extent features approaches the size of the parcels (scale effect). The second model produced only a 1% difference in feature number, so it is less interesting for planning optimization applications. The last model simply fulfils the task it was designed for by drawing navigation lines.
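A minimal sketch of the first model's orientation search, assuming the shapely library and an invented pollution footprint and parcel size (the real models operate on the Kolontár shapefile):

```python
import numpy as np
from shapely.geometry import Polygon, box
from shapely.affinity import rotate

def parcel_count(poly, width, height, angle_deg):
    """Count axis-aligned parcels covering the polygon rotated by -angle,
    equivalent to laying a parcel grid rotated by +angle over the polygon."""
    r = rotate(poly, -angle_deg, origin='centroid')
    minx, miny, maxx, maxy = r.bounds
    count = 0
    for x in np.arange(minx, maxx, width):
        for y in np.arange(miny, maxy, height):
            if box(x, y, x + width, y + height).intersects(r):
                count += 1
    return count

# Hypothetical elongated pollution footprint and 50 m x 20 m parcels
pollution = Polygon([(0, 0), (400, 60), (420, 110), (30, 40)])
for angle in (0, 45, 90, 135):
    print(f"{angle:3d} deg: {parcel_count(pollution, 50, 20, angle)} parcels")
```

For a long feature like this one, the orientation aligned with the feature's main axis typically needs the fewest parcels, matching the paper's observation.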
Bartlett, Yvonne K; Sheeran, Paschal; Hawley, Mark S
2014-01-01
Purpose The purpose of this study was to identify the behaviour change techniques (BCTs) that are associated with greater effectiveness in smoking cessation interventions for people with chronic obstructive pulmonary disease (COPD). Methods A systematic review and meta-analysis was conducted. Web of Knowledge, CINAHL, EMBASE, PsycINFO, and MEDLINE were searched from the earliest date available to December 2012. Data were extracted and weighted average effect sizes calculated; BCTs used were coded according to an existing smoking cessation-specific BCT taxonomy. Results Seventeen randomized controlled trials (RCTs) were identified that involved a total sample of 7446 people with COPD. The sample-weighted mean quit rate for all RCTs was 13.19%, and the overall sample-weighted effect size was d+ = 0.33. Thirty-seven BCTs were each used in at least three interventions. Four techniques were associated with significantly larger effect sizes: Facilitate action planning/develop treatment plan, Prompt self-recording, Advise on methods of weight control, and Advise on/facilitate use of social support. Three new COPD-specific BCTs were identified, and Linking COPD and smoking was found to result in significantly larger effect sizes. Conclusions Smoking cessation interventions aimed at people with COPD appear to benefit from using techniques focussed on forming detailed plans and self-monitoring. Additional RCTs that use standardized reporting of intervention components and BCTs would be valuable to corroborate findings from the present meta-analysis. Statement of contribution What is already known on this subject? Chronic obstructive pulmonary disease (COPD) is responsible for considerable health and economic burden worldwide, and smoking cessation (SC) is the only known treatment that can slow the decline in lung function experienced. Previous reviews of smoking cessation interventions for this population have established that a combination of pharmacological support and behavioural counselling is most effective. While pharmacological support has been detailed, and effectiveness ranked, the content of behavioural counselling varies between interventions, and it is not clear what the most effective components are. What does this study add? Detailed description of ‘behavioural counselling’ component of SC interventions for people with COPD. Meta-analysis to identify effective behaviour change techniques tailored for this population. Discussion of these findings in the context of designing tailored SC interventions. PMID:24397814
SU-G-TeP3-14: Three-Dimensional Cluster Model in Inhomogeneous Dose Distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, J; Penagaricano, J; Narayanasamy, G
2016-06-15
Purpose: We aim to investigate 3D cluster formation in inhomogeneous dose distributions to search for new models predicting radiation tissue damage, potentially leading to a new optimization paradigm for radiotherapy planning. Methods: The aggregation of voxels in the organ at risk (OAR) receiving doses higher than a preset threshold was chosen as the cluster, whose connectivity dictates the cluster structure. Upon selection of the dose threshold, the fractional density, defined as the fraction of voxels in the organ eligible to be part of the cluster, was determined according to the dose volume histogram (DVH). A Monte Carlo method was implemented to establish a case pertinent to the corresponding DVH. Ones and zeros were randomly assigned to each OAR voxel with the sampling probability equal to the fractional density. Ten thousand samples were randomly generated to ensure a sufficient number of cluster sets. A recursive cluster searching algorithm was developed to analyze the clusters under various connectivity choices, such as 1-, 2-, and 3-connectivity. The mean size of the largest cluster (MSLC) from the Monte Carlo samples was taken to be a function of the fractional density. Various OARs from clinical plans were included in the study. Results: The intensive Monte Carlo study demonstrates the anticipated inverse relationship between the MSLC and the cluster connectivity, and the cluster size does not change linearly with fractional density regardless of the connectivity type. A transition from initially slow growth to exponential growth of the MSLC was observed as the density increased from low to high. The cluster sizes were found to vary within a large range and are relatively independent of the OARs. Conclusion: The Monte Carlo study revealed that the cluster size could serve as a suitable index of tissue damage (percolation cluster) and that the clinical outcomes of plans with the same DVH might be potentially different.
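The cluster search itself is straightforward to emulate; the sketch below uses an iterative breadth-first search, taking 1-connectivity to mean face-sharing (6 neighbors), on randomly occupied voxels. Grid size, densities, and replicate counts are chosen arbitrarily here, not taken from the abstract:

```python
import numpy as np
from collections import deque

def largest_cluster(occ):
    """Size of the largest 6-connected (face-sharing) cluster in a 3D mask."""
    occ = occ.copy()
    best = 0
    dirs = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    for start in zip(*np.nonzero(occ)):
        if not occ[start]:          # already consumed by an earlier cluster
            continue
        occ[start] = False
        size, q = 1, deque([start])
        while q:
            x, y, z = q.popleft()
            for dx, dy, dz in dirs:
                n = (x + dx, y + dy, z + dz)
                if all(0 <= n[i] < occ.shape[i] for i in range(3)) and occ[n]:
                    occ[n] = False
                    size += 1
                    q.append(n)
        best = max(best, size)
    return best

rng = np.random.default_rng(0)
shape = (20, 20, 20)
for density in (0.1, 0.2, 0.3, 0.4):
    sizes = [largest_cluster(rng.random(shape) < density) for _ in range(20)]
    print(f"density {density:.1f}: MSLC = {np.mean(sizes):.1f}")
```

Running this reproduces the qualitative behavior the abstract reports: the MSLC grows slowly at low density and then explosively near the percolation threshold.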
Zheng, Yuanshui
2015-01-01
The main purposes of this study are to: 1) evaluate the accuracy of the XiO treatment planning system (TPS) for different dose calculation grid sizes based on head phantom measurements in uniform scanning proton therapy (USPT); and 2) compare the dosimetric results for various dose calculation grid sizes based on a real computed tomography (CT) dataset of pediatric brain cancer treatment plans generated by USPT and intensity-modulated proton therapy (IMPT) techniques. For the phantom study, we utilized the anthropomorphic head proton phantom provided by the Imaging and Radiation Oncology Core (IROC). The imaging, treatment planning, and beam delivery were carried out following the guidelines provided by the IROC. The USPT plan was generated in the XiO TPS, and dose calculations were performed for grid sizes ranging from 1 to 3 mm. The phantom, containing thermoluminescent dosimeters (TLDs) and films, was irradiated using a uniform scanning proton beam. The irradiated TLDs were read by the IROC. The calculated doses from XiO for different grid sizes were compared to the measured TLD doses provided by the IROC. Gamma evaluation was done by comparing the calculated planar dose distribution for the 3 mm grid size with the measured planar dose distribution. Additionally, an IMPT plan was generated based on the same CT dataset of the IROC phantom, and IMPT dose calculations were performed for grid sizes ranging from 1 to 3 mm. For comparative purposes, additional gamma analysis was done by comparing the planar dose distributions of the standard grid size (3 mm) with those of the other grid sizes (1, 1.5, 2, and 2.5 mm) for both the USPT and IMPT plans. For the patient study, USPT plans of three pediatric brain cancer cases were selected. IMPT plans were generated for each of the three pediatric cases. All patient treatment plans (USPT and IMPT) were generated in the XiO TPS for a total dose of 54 Gy (relative biological effectiveness [RBE]). The treatment plans (USPT and IMPT) of each case were recalculated for grid sizes of 1, 1.5, 2, and 2.5 mm; these dosimetric results were then compared with those of the 3 mm grid size. Phantom study results: There was no distinct trend in the dependence of dose calculation accuracy on grid size when calculated point doses for different grid sizes were compared to the measured point (TLD) doses. On average, the calculated point dose was higher than the measured dose by 1.49% and 2.63% for the right and left TLDs, respectively. The gamma analysis showed very minimal differences among planar dose distributions of various grid sizes, with the percentage of points meeting the 1%/1 mm gamma index criteria ranging from 97.92% to 99.97%. The gamma evaluation using 2%/2 mm criteria showed both the IMPT and USPT plans to have 100% of points meeting the criteria. Patient study results: In USPT, there was no distinct relationship between the absolute difference in mean planning target volume (PTV) dose and grid size, whereas in IMPT, it was found that decreasing the grid size slightly increased the PTV maximum dose and decreased the PTV mean dose and PTV D50%. For the PTV doses, the average differences were up to 0.35 Gy (RBE) and 1.47 Gy (RBE) in the USPT and IMPT plans, respectively. Dependency on grid size was not very clear for the organs at risk (OARs), with average differences ranging from −0.61 Gy (RBE) to 0.53 Gy (RBE) in the USPT plans and from −0.83 Gy (RBE) to 1.39 Gy (RBE) in the IMPT plans.
In conclusion, the difference in the calculated point dose between the smallest grid size (1 mm) and the largest grid size (3 mm) in the phantom for USPT was typically less than 0.1%. Patient study results showed that decreasing the grid size slightly increased the PTV maximum dose in both the USPT and IMPT plans. However, no distinct trend was obtained between the absolute difference in dosimetric parameters and dose calculation grid size for the OARs. Grid size has a large effect on dose calculation efficiency, and using a grid size of 2 mm or less can increase the dose calculation time significantly. It is recommended to use a grid size of either 2.5 or 3 mm for dose calculations of pediatric brain cancer plans generated by USPT and IMPT techniques in the XiO TPS. PACS numbers: 87.55.D‐, 87.55.ne, 87.55.dk PMID:26699310
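For readers unfamiliar with the gamma evaluation used above, a minimal 1D version of the global gamma index (the Low et al. criterion) is sketched below on synthetic profiles; clinical systems apply the same idea to 2D/3D planar doses, and the profiles here are purely illustrative:

```python
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dta_mm=1.0, dd_frac=0.01):
    """Global 1D gamma index with distance-to-agreement and dose-difference
    criteria, e.g. 1%/1 mm; gamma <= 1 counts as a passing point."""
    dd = dd_frac * dose_ref.max()
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta_mm) ** 2
        dose2 = ((dose_eval - di) / dd) ** 2
        gam[i] = np.sqrt((dist2 + dose2).min())
    return gam

x = np.linspace(-30, 30, 601)                      # position in mm
ref = np.exp(-x**2 / (2 * 8**2))                   # reference profile
ev = np.exp(-(x - 0.3)**2 / (2 * 8**2)) * 1.005    # slightly shifted/scaled
g = gamma_1d(x, ref, ev, dta_mm=1.0, dd_frac=0.01)
print(f"points passing (gamma <= 1): {100 * (g <= 1).mean():.2f}%")
```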
Bridging the gap: a review of dose investigations in paediatric investigation plans.
Hampson, Lisa V; Herold, Ralf; Posch, Martin; Saperia, Julia; Whitehead, Anne
2014-10-01
In the EU, development of new medicines for children should follow a prospectively agreed paediatric investigation plan (PIP). Finding the right dose for children is crucial but challenging due to the variability of pharmacokinetics across age groups and the limited sample sizes available. We examined strategies adopted in PIPs to support paediatric dosing recommendations to identify common assumptions underlying dose investigations and the attempts planned to verify them in children. We extracted data from 73 PIP opinions recently adopted by the Paediatric Committee of the European Medicines Agency. These opinions represented 79 medicinal development programmes and comprised a total of 97 dose investigation studies. We identified the design of these dose investigation studies, recorded the analyses planned and determined the criteria used to define target doses. Most dose investigation studies are clinical trials (83 of 97) that evaluate a single dosing rule. Sample sizes used to investigate dose are highly variable across programmes, with smaller numbers used in younger children (< 2 years). Many studies (40 of 97) do not pre-specify a target dose criterion. Of those that do, most (33 of 57 studies) guide decisions using pharmacokinetic data alone. Common assumptions underlying dose investigation strategies include dose proportionality and similar exposure-response relationships in adults and children. Few development programmes pre-specify steps to verify assumptions in children. There is scope for the use of Bayesian methods as a framework for synthesizing existing information to quantify prior uncertainty about assumptions. This process can inform the design of optimal drug development strategies. © 2014 The Authors. British Journal of Clinical Pharmacology published by John Wiley & Sons Ltd on behalf of The British Pharmacological Society.
Choice Set Size and Decision-Making: The Case of Medicare Part D Prescription Drug Plans
Bundorf, M. Kate; Szrek, Helena
2013-01-01
Background The impact of choice on consumer decision-making is controversial in U.S. health policy. Objective Our objective was to determine how choice set size influences decision-making among Medicare beneficiaries choosing prescription drug plans. Methods We randomly assigned members of an internet-enabled panel age 65 and over to sets of prescription drug plans of varying sizes (2, 5, 10, and 16) and asked them to choose a plan. Respondents answered questions about the plan they chose, the choice set, and the decision process. We used ordered probit models to estimate the effect of choice set size on the study outcomes. Results Both the benefits of choice, measured by whether the chosen plan is close to the ideal plan, and the costs, measured by whether the respondent found decision-making difficult, increased with choice set size. Choice set size was not associated with the probability of enrolling in any plan. Conclusions Medicare beneficiaries face a tension between not wanting to choose from too many options and feeling happier with an outcome when they have more alternatives. Interventions that reduce cognitive costs when choice sets are large may make this program more attractive to beneficiaries. PMID:20228281
NASA Astrophysics Data System (ADS)
Deshpande, Ruchi; Thuptimdang, Wanwara; DeMarco, John; Liu, Brent J.
2014-03-01
We have built a decision support system that provides recommendations for customizing radiation therapy treatment plans, based on patient models generated from a database of retrospective planning data. This database consists of relevant metadata and information derived from the following DICOM objects - CT images, RT Structure Set, RT Dose and RT Plan. The usefulness and accuracy of such patient models partly depends on the sample size of the learning data set. Our current goal is to increase this sample size by expanding our decision support system into a collaborative framework to include contributions from multiple collaborators. Potential collaborators are often reluctant to upload even anonymized patient files to repositories outside their local organizational network in order to avoid any conflicts with HIPAA Privacy and Security Rules. We have circumvented this problem by developing a tool that can parse DICOM files on the client's side and extract de-identified numeric and text data from DICOM RT headers for uploading to a centralized system. As a result, the DICOM files containing PHI remain local to the client side. This is a novel workflow that results in adding only relevant yet valuable data from DICOM files to the centralized decision support knowledge base in such a way that the DICOM files never leave the contributor's local workstation in a cloud-based environment. Such a workflow serves to encourage clinicians to contribute data for research endeavors by ensuring protection of electronic patient data.
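The client-side parsing workflow can be sketched with pydicom using a whitelist, so that only pre-approved, non-PHI fields ever leave the workstation; the keyword list here is illustrative, not the authors' actual schema, and some RT attributes live in nested sequences that a real extractor would also walk:

```python
import pydicom

# Keywords judged safe to share (illustrative whitelist; a real deployment
# would vet this list against HIPAA de-identification requirements).
SAFE_KEYWORDS = [
    "Modality", "RTPlanLabel", "DoseGridScaling", "DoseUnits",
]

def extract_safe_metadata(path):
    """Parse a DICOM RT file on the client side and return only whitelisted,
    de-identified fields for upload; the file itself never leaves the client."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    record = {}
    for kw in SAFE_KEYWORDS:
        if kw in ds:                         # keyword lookup on the dataset
            record[kw] = str(ds.data_element(kw).value)
    return record

# print(extract_safe_metadata("rtplan.dcm"))  # hypothetical local file
```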
Vanamail, P; Subramanian, S; Srividya, A; Ravi, R; Krishnamoorthy, K; Das, P K
2006-08-01
Lot quality assurance sampling (LQAS) with a two-stage sampling plan was applied for rapid monitoring of coverage after every round of mass drug administration (MDA). A Primary Health Centre (PHC) consisting of 29 villages in Thiruvannamalai district, Tamil Nadu was selected as the study area. Two threshold levels of coverage were used: threshold A (maximum: 60%; minimum: 40%) and threshold B (maximum: 80%; minimum: 60%). Based on these thresholds, one sampling plan each for A and B was derived with the necessary sample size and the number of allowable defectives (i.e., those who have not received the drug). Using data generated through simple random sampling of individuals (SRSI), covering 1,750 individuals in the study area, LQAS was validated with the above two sampling plans for its diagnostic and field applicability. Simultaneously, a simple random sampling household survey (SRSH) was conducted for validation and cost-effectiveness analysis. Based on the SRSH survey, the estimated coverage was 93.5% (CI: 91.7-95.3%). LQAS with threshold A revealed that by sampling a maximum of 14 individuals and allowing four defectives, the coverage was ≥60% in >90% of villages at the first stage. Similarly, with threshold B, by sampling a maximum of nine individuals and allowing four defectives, the coverage was ≥80% in >90% of villages at the first stage. These analyses suggest that the sampling plan (14, 4, 52, 25) of threshold A may be adopted in MDA to assess whether a minimum coverage of 60% has been achieved. However, to achieve the goal of elimination, the sampling plan (9, 4, 42, 29) of threshold B can identify villages in which the coverage is <80% so that remedial measures can be taken. Cost-effectiveness analysis showed that both options of LQAS are more cost-effective than SRSH to detect a village with a given level of coverage. The cost per village was US$ 76.18 under SRSH. The cost of LQAS was US$ 65.81 and US$ 55.63 per village for thresholds A and B, respectively. The total financial cost of classifying a village correctly with the given threshold level of LQAS could be reduced by 14% and 26% of the cost of the conventional SRSH method.
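The first-stage LQAS decision rule is a simple binomial calculation; the sketch below traces the operating characteristic of the threshold-A plan (sample n = 14, allowable defectives c = 4), under the usual assumption that the village population is large relative to the sample:

```python
from scipy.stats import binom

def accept_prob(p_covered, n=14, c=4):
    """Probability a village passes the lot check: at most c of n sampled
    individuals are 'defective' (did not receive the drug)."""
    return binom.cdf(c, n, 1.0 - p_covered)

for p in (0.40, 0.50, 0.60, 0.70, 0.80, 0.90):
    print(f"true coverage {p:.0%}: P(accept) = {accept_prob(p):.3f}")
```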
Role of step size and max dwell time in anatomy based inverse optimization for prostate implants
Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha
2013-01-01
In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size was 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3% and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323
The Impact of Desired Family Size Upon Family Planning Practices in Rural East Pakistan
ERIC Educational Resources Information Center
Mosena, Patricia Wimberley
1971-01-01
Results indicated that women whose desired family size is equal to or less than their actual family size have significantly greater frequencies practicing family planning than women whose desired size exceeds their actual size. (Author)
Needs and Challenges of Daily Life for People with Down Syndrome Residing in the City of Rome, Italy
ERIC Educational Resources Information Center
Bertoli, M.; Biasini, G.; Calignano, M. T.; Celani, G.; De Grossi, G.; Digilio, M. C.; Fermariello, C. C.; Loffredo, G.; Luchino, F.; Marchese, A.; Mazotti, S.; Menghi, B.; Razzano, C.; Tiano, C.; Zambon Hobart, A.; Zampino, G.; Zuccala, G.
2011-01-01
Background: Population-based surveys on the quality of life of people with Down syndrome (DS) are difficult to perform because of ethical and legal policies regarding privacy and confidential information, but they are essential for service planning. Little is known about the sample size and variability of quality of life of people with DS living…
Air Force Human Resources Laboratory Annual Report - Fiscal Year 1982.
1983-06-01
…tests are used to assess literacy skills. The use of AFRAT… a clearer understanding of the impact of sample size and curtailment on calibration accuracy… training within Specialized Undergraduate Pilot Training… determine the feasibility of using newly devised tests of psychomotor skills, information… individual skill deficiencies. Empirical job requirements and… underscored by unacceptable levels of literacy among recent military enlistees. Plans are…
Wang, Zhuoyu; Dendukuri, Nandini; Pai, Madhukar; Joseph, Lawrence
2017-11-01
When planning a study to estimate disease prevalence to a pre-specified precision, it is of interest to minimize total testing cost. This is particularly challenging in the absence of a perfect reference test for the disease because different combinations of imperfect tests need to be considered. We illustrate the problem and a solution by designing a study to estimate the prevalence of childhood tuberculosis in a hospital setting. All possible combinations of 3 commonly used tuberculosis tests, including chest X-ray, tuberculin skin test, and a sputum-based test, either culture or Xpert, are considered. For each of the 11 possible test combinations, 3 Bayesian sample size criteria, including average coverage criterion, average length criterion and modified worst outcome criterion, are used to determine the required sample size and total testing cost, taking into consideration prior knowledge about the accuracy of the tests. In some cases, the required sample sizes and total testing costs were both reduced when more tests were used, whereas, in other examples, lower costs are achieved with fewer tests. Total testing cost should be formally considered when designing a prevalence study.
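A back-of-envelope stand-in for the paper's Bayesian criteria, using the Rogan-Gladen correction and a normal approximation, shows how test accuracy drives sample size and total cost; the sensitivities, specificities, and unit costs below are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

def n_required(p_true, se, sp, half_width=0.05, conf=0.95):
    """Sample size so the Rogan-Gladen-corrected prevalence estimate has
    roughly the desired CI half-width (normal approximation)."""
    z = norm.ppf(0.5 + conf / 2)
    p_app = se * p_true + (1 - sp) * (1 - p_true)   # apparent prevalence
    j = se + sp - 1                                 # Youden's index
    return int(np.ceil(z**2 * p_app * (1 - p_app) / (half_width * j) ** 2))

# Hypothetical accuracies and per-subject costs for two test combinations
combos = {"culture alone": (0.60, 0.99, 30.0),
          "Xpert + CXR":   (0.75, 0.95, 45.0)}
for name, (se, sp, cost) in combos.items():
    n = n_required(p_true=0.10, se=se, sp=sp)
    print(f"{name}: n = {n}, total testing cost = ${n * cost:,.0f}")
```

As in the paper, the cheaper combination per subject is not automatically the cheaper study: a less accurate combination can inflate the required sample size enough to cancel its unit-cost advantage.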
A comparative appraisal of two equivalence tests for multiple standardized effects.
Shieh, Gwowen
2016-04-01
Equivalence testing is recommended as a better alternative to the traditional difference-based methods for demonstrating the comparability of two or more treatment effects. Although equivalence tests for two groups are widely discussed, the natural extensions for assessing equivalence among several groups have not been well examined. This article provides a detailed and schematic comparison of the ANOVA F and the studentized range tests for evaluating the comparability of several standardized effects. Power and sample size appraisals of the two grossly distinct approaches are conducted in terms of a constraint on the range of the standardized means when the standard deviation of the standardized means is fixed. Although neither method is uniformly more powerful, the studentized range test has a clear advantage in the sample size required to achieve a given power when the underlying effect configurations are close to the a priori minimum difference for determining equivalence. For actual application of equivalence tests and advance planning of equivalence studies, both SAS and R computer codes are available as supplementary files to implement the calculations of critical values, p-values, power levels, and sample sizes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
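The power of an F-based equivalence test can be computed from the noncentral F distribution; the sketch below assumes the boundary noncentrality comes from the least-favorable two-extremes configuration for a given range ε, which may differ in detail from the paper's formulation:

```python
import numpy as np
from scipy.stats import ncf

def equiv_power_F(deltas, n, eps, alpha=0.05):
    """Power of an ANOVA-F-based equivalence test for k standardized means:
    equivalence is declared when F falls below the alpha quantile of the
    noncentral F at the boundary noncentrality implied by range eps."""
    deltas = np.asarray(deltas, float)
    k = len(deltas)
    dfn, dfd = k - 1, k * (n - 1)
    lam_true = n * np.sum((deltas - deltas.mean()) ** 2)
    lam_bound = n * eps**2 / 2     # two groups at +/- eps/2, rest at the mean
    crit = ncf.ppf(alpha, dfn, dfd, lam_bound)
    return ncf.cdf(crit, dfn, dfd, lam_true)

# Three nearly equal standardized means, equivalence margin eps = 0.5
print(f"power = {equiv_power_F([0.0, 0.05, 0.10], n=50, eps=0.5):.3f}")
```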
A model-based approach to sample size estimation in recent onset type 1 diabetes.
Bundy, Brian N; Krischer, Jeffrey P
2016-11-01
The area under the curve (AUC) of C-peptide following a 2-h mixed meal tolerance test, from 498 individuals enrolled in five prior TrialNet studies of recent onset type 1 diabetes, was modelled from baseline to 12 months after enrolment to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in observed versus expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.
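The roughly 50% sample size reduction follows from the usual variance-deflation argument: covariate adjustment shrinks residual variance by a factor of (1 − R²). A minimal sketch with invented planning values (not the TrialNet estimates):

```python
import numpy as np
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80, r2=0.0):
    """Two-group sample size for a mean difference; ANCOVA adjustment
    shrinks the residual variance by (1 - R^2)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * sd**2 * (1 - r2) * z**2 / delta**2))

# Hypothetical planning values for 12-month C-peptide AUC decline
print(n_per_group(delta=0.2, sd=0.4))           # unadjusted
print(n_per_group(delta=0.2, sd=0.4, r2=0.45))  # adjusting for age + baseline
```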
Statistical considerations in monitoring birds over large areas
Johnson, D.H.
2000-01-01
The proper design of a monitoring effort depends primarily on the objectives desired, constrained by the resources available to conduct the work. Typically, managers have numerous objectives, such as determining abundance of the species, detecting changes in population size, evaluating responses to management activities, and assessing habitat associations. A design that is optimal for one objective will likely not be optimal for others. Careful consideration of the importance of the competing objectives may lead to a design that adequately addresses the priority concerns, although it may not be optimal for any individual objective. Poor design or inadequate sample sizes may result in such weak conclusions that the effort is wasted. Statistical expertise can be used at several stages, such as estimating power of certain hypothesis tests, but is perhaps most useful in fundamental considerations of describing objectives and designing sampling plans.
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.
Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe
2015-08-01
The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
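The simulation-based power estimation can be imitated directly; the sketch below computes the PERMANOVA pseudo-F (Anderson 2001) and estimates power on toy Euclidean data, rather than simulating distance matrices from population parameters as the authors' micropower R package does:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def pseudo_f(D, groups):
    """PERMANOVA pseudo-F statistic from a pairwise distance matrix."""
    n = len(groups)
    labels = np.unique(groups)
    ss_total = (D[np.triu_indices(n, 1)] ** 2).sum() / n
    ss_within = 0.0
    for g in labels:
        idx = np.where(groups == g)[0]
        sub = D[np.ix_(idx, idx)]
        ss_within += (sub[np.triu_indices(len(idx), 1)] ** 2).sum() / len(idx)
    a = len(labels)
    return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

def permanova_power(n_per_group=15, shift=0.7, n_sims=200, n_perm=199, alpha=0.05):
    rng = np.random.default_rng(3)
    groups = np.repeat([0, 1], n_per_group)
    hits = 0
    for _ in range(n_sims):
        pts = rng.normal(0, 1, (2 * n_per_group, 10))
        pts[groups == 1, 0] += shift            # group effect along one axis
        D = squareform(pdist(pts))
        f_obs = pseudo_f(D, groups)
        perm_f = [pseudo_f(D, rng.permutation(groups)) for _ in range(n_perm)]
        p = (1 + sum(f >= f_obs for f in perm_f)) / (1 + n_perm)
        hits += p <= alpha
    return hits / n_sims

print(f"estimated PERMANOVA power: {permanova_power():.2f}")
```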
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popple, R; Wu, X; Kraus, J
2016-06-15
Purpose: Patient specific quality assurance of stereotactic radiosurgery (SRS) plans is challenging because of small target sizes and high dose gradients. We compared three detectors for dosimetry of VMAT SRS plans. Methods: The dose at the center of seventeen targets was measured using a synthetic diamond detector (2.2 mm diameter, 1 µm thickness), a 0.007 cm³ ionization chamber, and radiochromic film. Measurements were made in a PMMA phantom in the clinical geometry – all gantry and table angles were delivered as planned. The diamond and chamber positions were offset by 1 cm from the film plane, so the isocenter was shifted accordingly to place the center of the target at the detector of interest. To ensure accurate detector placement, the phantom was positioned using kV images. To account for the shift-induced difference in geometry and differing prescription doses between plans, the measurements were normalized to the expected dose calculated by the treatment planning system. Results: The target sizes ranged from 2.8 mm to 34.8 mm (median 14.8 mm). The mean measurement-to-plan ratios were 1.054, 1.076, and 1.023 for RCF, diamond, and chamber, respectively. The mean difference between the chamber and film was −3.2% and between diamond and film was 2.2%. For targets larger than 15 mm, the mean difference relative to film was −0.8% and 0.1% for chamber and diamond, respectively, whereas for targets smaller than 15 mm, the difference was −5.3% and 4.2% for chamber and diamond, respectively. The difference was significant (p=0.005) using the two-sample Kolmogorov-Smirnov test. Conclusion: The detectors agree for target sizes larger than 15 mm. Relative to film, for smaller targets the diamond detector over-responds, whereas the ionization chamber under-responds. Further work is needed to characterize detector response in modulated SRS fields.
Martínez-Ferrer, M T; Campos-Rivela, J M; Verdú, M J
2015-02-01
Seasonal trends and the parasitoid complex of the Chinese wax scale (Ceroplastes sinensis) were studied from July 2010 to February 2013. Six commercial citrus groves located in northeastern Spain were sampled fortnightly. The Chinese wax scale completed a single annual generation. Oviposition started in May and continued until mid-July. Egg hatching began in mid-June, and in the first quarter of August the maximum percentage of hatched eggs was reached. In the same groves, the parasitoid species of C. sinensis were determined together with their seasonal trends, relative abundance, and occurrence on C. sinensis. Four hymenopterans were found parasitizing C. sinensis, mainly on third instars and females: Coccophagus ceroplastae (Aphelinidae), Metaphycus helvolus (Encyrtidae), Scutellista caerulea (Pteromalidae) and Aprostocetus ceroplastae (Eulophidae). The most abundant species was A. ceroplastae, accounting for 54% of the parasitoids that emerged. Coccophagus ceroplastae and M. helvolus represented 19%, whereas S. caerulea comprised 8% of the total. This study is the first published record of C. ceroplastae in Spain and the first record of M. helvolus on C. sinensis in Spain. Concerning the economic thresholds normally used, sampling plans developed for the management of C. sinensis in citrus groves should target population densities of around 12-20% of invaded twigs, equivalent to 0.2-0.5 females per twig. The sample size necessary to achieve the desired integrated pest management precision is 90-160 twigs per grove for the enumerative plan and about 160-245 twigs per grove for the binomial plan.
Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon
2016-01-01
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%–155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%–71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power. PMID:28479943
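The attenuation mechanism described above is easy to demonstrate by simulation; the sketch below truncates a bivariate normal population on the pretest and reports the implied sample size inflation factor, with the correlation and cutoff chosen arbitrarily rather than taken from the study:

```python
import numpy as np

rng = np.random.default_rng(11)
rho = 0.7                      # unrestricted pretest-posttest correlation
n_pop = 200_000

pre = rng.normal(0, 1, n_pop)
post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(0, 1, n_pop)

# Direct truncation: select only examinees below the 25th percentile on pretest
cut = np.quantile(pre, 0.25)
sel = pre < cut
r_restricted = np.corrcoef(pre[sel], post[sel])[0, 1]

# Required n for a pretest-covariate design scales with residual variance 1 - r^2
inflation = (1 - r_restricted**2) / (1 - rho**2)
print(f"r drops from {rho:.2f} to {r_restricted:.2f}; "
      f"required n grows by factor {inflation:.2f}")
```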
Nuts and Bolts - Techniques for Genesis Sample Curation
NASA Technical Reports Server (NTRS)
Burkett, Patti J.; Rodriquez, M. C.; Allton, J. H.
2011-01-01
The Genesis curation staff at NASA Johnson Space Center provides samples and data for analysis to the scientific community, following allocation approval by the Genesis Oversight Committee, a sub-committee of CAPTEM (Curation Analysis Planning Team for Extraterrestrial Materials). We are often asked by investigators within the scientific community how we choose samples to best fit the requirements of the request. Here we will demonstrate our techniques for characterizing samples and satisfying allocation requests. Even with a systematic approach, every allocation is unique. We are also providing updated status of the cataloging and characterization of solar wind collectors as of January 2011. The collection consists of 3721 inventoried samples consisting of a single fragment, or multiple fragments containerized or pressed between post-it notes, jars or vials of various sizes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzke, Brett D.; Wilson, John E.; Hathaway, J.
2008-02-12
Statistically defensible methods are presented for developing geophysical detector sampling plans and analyzing data for munitions response sites where unexploded ordnance (UXO) may exist. Detection methods for identifying areas of elevated anomaly density from background density are shown. Additionally, methods are described which aid in the choice of transect pattern and spacing to assure, with a specified degree of confidence, that a target area (TA) of specific size, shape, and anomaly density will be identified using the detection methods. Methods for evaluating the sensitivity of designs to variation in certain parameters are also discussed. The methods presented have been incorporated into the Visual Sample Plan (VSP) software (free at http://dqo.pnl.gov/vsp) and demonstrated at multiple sites in the United States. Application examples from actual transect designs and surveys from the previous two years are demonstrated.
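One ingredient of such transect designs, the probability that parallel transects traverse a target of a given width, has a simple closed form (approximately width/spacing) that the following Monte Carlo sketch verifies; detection given traversal is a separate factor that tools like VSP also account for:

```python
import numpy as np

def traversal_prob(target_width, spacing, n_sims=100_000, seed=5):
    """Monte Carlo check that parallel transects with a given spacing
    traverse a circular target with probability ~ width/spacing."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(0, spacing, n_sims)  # target center vs nearest lines
    # a transect at 0 or at `spacing` crosses the target if the center lies
    # within half the target width of either line
    hits = (centers < target_width / 2) | (centers > spacing - target_width / 2)
    return hits.mean()

for spacing in (50, 100, 200):
    p = traversal_prob(target_width=40, spacing=spacing)
    print(f"spacing {spacing} m: P(traverse) = {p:.2f} "
          f"(analytic {min(1, 40 / spacing):.2f})")
```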
Segura-Correa, J C; Domínguez-Díaz, D; Avalos-Ramírez, R; Argaez-Sosa, J
2010-09-01
Knowledge of the intraherd correlation coefficient (ICC) and design effect (D) for infectious diseases could be of interest for sample size calculation and for providing the correct standard errors of prevalence estimates in cluster or two-stage sampling surveys. Information on 813 animals from 48 non-vaccinated cow-calf herds from North-eastern Mexico was used. The ICCs for bovine viral diarrhoea (BVD), infectious bovine rhinotracheitis (IBR), leptospirosis and neosporosis were calculated using a Bayesian approach adjusting for the sensitivity and specificity of the diagnostic tests. The ICC and D values for BVD, IBR, leptospirosis and neosporosis were 0.31 and 5.91, 0.18 and 3.88, 0.22 and 4.53, and 0.11 and 2.68, respectively. The ICC values were different from 0 and D was greater than 1; therefore, larger sample sizes are required to obtain the same precision in prevalence estimates as under a simple random sampling design. Reporting ICC and D values is of great help in planning and designing two-stage sampling studies. 2010 Elsevier B.V. All rights reserved.
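The reported design effects are consistent with the standard formula D = 1 + (m − 1)·ICC at the study's average herd size (813/48 ≈ 16.9); the sketch below reproduces them (small residual differences presumably reflect the authors' Bayesian adjustment) and inflates an illustrative SRS sample size accordingly:

```python
import math

herds, animals = 48, 813
m = animals / herds                     # average cluster (herd) size ~ 16.9
n_srs = 384                             # illustrative SRS n (p=0.5, +/-5%, 95% CI)

for disease, icc in [("BVD", 0.31), ("IBR", 0.18),
                     ("leptospirosis", 0.22), ("neosporosis", 0.11)]:
    d = 1 + (m - 1) * icc               # design effect
    print(f"{disease}: D = {d:.2f}, two-stage n = {math.ceil(n_srs * d)}")
```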
NASA Astrophysics Data System (ADS)
Beddow, B.; Roberts, C.; Rankin, J.; Bloch, A.; Peizer, J.
1981-01-01
The National Accident Sampling System (NASS) is described. The study area discussed is one of the original ten sites selected for NASS implementation. In addition to collecting data from the field, the original ten sites address questions of feasibility of the plan, projected results of the data collection effort, and specific operational topics, e.g., team size, sampling requirements, training approaches, quality control procedures, and field techniques. Activities and results of the first three years of the project, for both major tasks (establishment and operation) are addressed. Topics include: study area documentation; team description, function and activities; problems and solutions; and recommendations.
A Statistical Analysis Plan to Support the Joint Forward Area Air Defense Test.
1984-08-02
by establishing a specific significance level prior to performing the statistical test (traditionally α levels are set at .01 or .05). What is often... undesirable increase in β. For constant α levels, the power (1 − β) of a statistical test can be increased by increasing the sample size of the test. [Ref... [Flowchart residue: perform a k-sample comparison test (ANOVA) on MOP "A" to determine whether differences exist among the levels of factor "A".]
Sulaberidze, Lela; Mirzazadeh, Ali; Chikovani, Ivdity; Shengelia, Natia; Tsereteli, Nino; Gotsadze, George
2016-01-01
An accurate estimation of the population size of men who have sex with men (MSM) is critical to the success of HIV program planning and to monitoring of the response to the epidemic as a whole, but is quite often missing. In this study, our aim was to estimate the population size of MSM in Tbilisi, Georgia and compare it with other estimates in the region. In the absence of a gold standard for estimating the population size of MSM, this study reports a range of methods, including network scale-up, mobile/web apps multiplier, service and unique object multiplier, network-based capture-recapture, Handcock RDS-based and Wisdom of Crowds methods. To apply all these methods, two surveys were conducted: first, a household survey among 1,015 adults from the general population, and second, a respondent-driven sample of 210 MSM. We also conducted a literature review of MSM size estimation in Eastern European and Central Asian countries. The median population size of MSM generated from all the methods above was estimated to be 5,100 (95% Confidence Interval (CI): 3,243~9,088), corresponding to 1.42% (95% CI: 0.9%~2.53%) of the adult male population in Tbilisi. These estimates fall within the ranges reported in other Eastern European and Central Asian countries and can provide valuable information for country-level HIV prevention program planning and evaluation. Furthermore, we believe that our results will narrow the gap in data availability on estimates of the population size of MSM in the region.
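The service/unique-object multiplier used above reduces to N = M/p; a sketch with invented numbers, since the study's actual multiplier inputs are not given in the abstract:

```python
# Service multiplier: N = M / p, where M is the count of MSM who used a
# service in a period (from provider records) and p is the proportion of
# the RDS sample reporting such use. Numbers below are hypothetical.
M = 350          # unique MSM clients at an NGO service (programme records)
p = 0.07         # share of the RDS sample (n = 210) reporting use of it
N_hat = M / p
print(f"estimated MSM population size: {N_hat:,.0f}")
```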
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu Huijun; Gordon, J. James; Siebers, Jeffrey V.
2011-02-15
Purpose: A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric D_v exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Methods: Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals ω (e.g., ω = 1°, 2°, 5°, 10°, 20°). Isotropic samples were uniformly distributed on the unit sphere, resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs, and accordingly characterized by the effective angular increment ω_eff. In each direction, the DM was calculated by moving the structure in radial steps of size δ (= 0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy ΔQ was quantified as a function of the sampling parameters ω or ω_eff and δ. Results: The accuracy of coverage estimates depends on the angular and radial DMD sampling parameters ω or ω_eff and δ, as well as the employed sampling technique. Target |ΔQ| < 1% and OAR |ΔQ| < 3% can be achieved with sampling parameters ω or ω_eff = 20°, δ = 1 mm. Better accuracy (target |ΔQ| < 0.5% and OAR |ΔQ| ≲ 1%) can be achieved with ω or ω_eff = 10°, δ = 0.5 mm. As the number of sampling points decreases, the isotropic sampling method maintains better accuracy than fixed angular sampling. Conclusions: Coverage estimates for post-planning evaluation are essential since coverage values of targets and OARs often differ from the values implied by the static margin-based plans. Finer sampling of the DMD enables more accurate assessment of the effect of geometric uncertainties on coverage estimates prior to treatment. DMD sampling with ω or ω_eff = 10° and δ = 0.5 mm should be adequate for planning purposes.
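The isotropic sampling and radial stepping are easy to prototype; the sketch below samples directions by normalizing Gaussian vectors and steps in δ increments through a toy spherically symmetric dose model, not a clinical dose grid:

```python
import numpy as np

def isotropic_dirs(n, seed=2):
    """Uniformly distributed directions on the unit sphere."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def dosimetric_margins(dose_at, point, iso_level, dirs, delta=0.5, r_max=50.0):
    """Step radially (step size delta, mm) from a structure point along each
    direction until the specified isodose level is crossed."""
    dms = []
    for d in dirs:
        r = 0.0
        while r < r_max and dose_at(point + r * d) >= iso_level:
            r += delta
        dms.append(r)
    return np.array(dms)

# Toy spherically symmetric dose: 50% isodose sits 40 mm from the isocenter
dose_at = lambda p: max(0.0, 100.0 * (1 - np.linalg.norm(p) / 80.0))
dms = dosimetric_margins(dose_at, np.zeros(3), iso_level=50.0,
                         dirs=isotropic_dirs(200), delta=0.5)
print(f"mean DM = {dms.mean():.1f} mm, min DM = {dms.min():.1f} mm")
```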
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, C; Schultheiss, T
Purpose: In this study, we aim to evaluate the effect of dose grid size on the accuracy of calculated dose for small lesions in intracranial stereotactic radiosurgery (SRS), and to verify dose calculation accuracy with radiochromic film dosimetry. Methods: 15 intracranial lesions from previous SRS patients were retrospectively selected for this study. The planning target volume (PTV) ranged from 0.17 to 2.3 cm³. A commercial treatment planning system was used to generate SRS plans using the volumetric modulated arc therapy (VMAT) technique with two arc fields. Two convolution-superposition-based dose calculation algorithms (Anisotropic Analytical Algorithm and Acuros XB algorithm) were used to calculate volume dose distribution with dose grid size ranging from 1 mm to 3 mm in 0.5 mm steps. First, while the plan monitor units (MU) were kept constant, PTV dose variations were analyzed. Second, with 95% of the PTV covered by the prescription dose, variations of the plan MUs as a function of dose grid size were analyzed. Radiochromic films were used to compare the delivered dose and profile with the calculated dose distribution for different dose grid sizes. Results: The dose to the PTV, in terms of the mean, maximum, and minimum dose, showed a steady decrease with increasing dose grid size using both algorithms. With 95% of the PTV covered by the prescription dose, the total MU increased with increasing dose grid size in most of the plans. Radiochromic film measurements showed better agreement with dose distributions calculated with 1-mm dose grid size. Conclusion: Dose grid size has a significant impact on calculated dose distribution in intracranial SRS treatment planning with small target volumes. Using the default dose grid size could lead to under-estimation of the delivered dose. A small dose grid size should be used to ensure calculation accuracy and agreement with QA measurements.
Prevalence and predictors of advance directives in Australia.
White, B; Tilse, C; Wilson, J; Rosenman, L; Strub, T; Feeney, R; Silvester, W
2014-10-01
Advance care planning is regarded as integral to better patient outcomes, yet little is known about the prevalence of advance directives (AD) in Australia. To determine the prevalence of AD in the Australian population. A national telephone survey about estate and advance planning. Sample was stratified by age (18-45 and >45 years) and quota sampling occurred based on population size in each state and territory. Fourteen per cent of the Australian population has an AD. There is state variation with people from South Australia and Queensland more likely to have an AD than people from other states. Will making and particularly completion of a financial enduring power of attorney are associated with higher rates of AD completion. Standard demographic variables were of limited use in predicting whether a person would have an AD. Despite efforts to improve uptake of advance care planning (including AD), barriers remain. One likely trigger for completing an AD and advance care planning is undertaking a wider future planning process (e.g. making a will or financial enduring power of attorney). This presents opportunities to increase advance care planning, but steps are needed to ensure that planning, which occurs outside the health system, is sufficiently informed and supported by health information so that it is useful in the clinical setting. Variations by state could also suggest that redesign of regulatory frameworks (such as a user-friendly and well-publicised form backed by statute) may help improve uptake of AD. © 2014 The Authors; Internal Medicine Journal © 2014 Royal Australasian College of Physicians.
What determines real-world meal size? Evidence for pre-meal planning.
Fay, Stephanie H; Ferriday, Danielle; Hinton, Elanor C; Shakeshaft, Nicholas G; Rogers, Peter J; Brunstrom, Jeffrey M
2011-04-01
The customary approach to the study of meal size suggests that 'events' occurring during a meal lead to its termination. Recent research, however, suggests that a number of decisions are made before eating commences that may affect meal size. The present study sought to address three key research questions around meal size: the extent to which plate-cleaning occurs; prevalence of pre-meal planning and its influence on meal size; and the effect of within-meal experiences, notably the development of satiation. To address these, a large-cohort internet-based questionnaire was developed. Results showed that plate-cleaning occurred at 91% of meals, and was planned from the outset in 92% of these cases. A significant relationship between plate-cleaning and meal planning was observed. Pre-meal plans were resistant to modification over the course of the meal: only 18% of participants reported consumption that deviated from expected. By contrast, 28% reported continuing eating beyond satiation, and 57% stated that they could have eaten more at the end of the meal. Logistic regression confirmed pre-meal planning as the most important predictor of consumption. Together, our findings demonstrate the importance of meal planning as a key determinant of meal size and energy intake. Copyright © 2011 Elsevier Ltd. All rights reserved.
Kendall, Carl; Kerr, Ligia R F S; Gondim, Rogerio C; Werneck, Guilherme L; Macena, Raimunda Hermelinda Maia; Pontes, Marta Kerr; Johnston, Lisa G; Sabin, Keith; McFarland, Willi
2008-07-01
Obtaining samples of populations at risk for HIV challenges surveillance, prevention planning, and evaluation. Methods used include snowball sampling, time location sampling (TLS), and respondent-driven sampling (RDS). Few studies have made side-by-side comparisons to assess their relative advantages. We compared snowball, TLS, and RDS surveys of men who have sex with men (MSM) in Fortaleza, Brazil, with a focus on how the socio-economic status (SES) and risk behaviors of the samples compared to each other, to known AIDS cases, and to the general population. RDS produced a sample with wider inclusion of lower SES than snowball sampling or TLS, a finding of health significance given that the majority of AIDS cases reported among MSM in the state were low SES. RDS also achieved the sample size faster and at lower cost. For reasons of inclusion and cost-efficiency, RDS is the sampling methodology of choice for HIV surveillance of MSM in Fortaleza.
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA
Kelly, Brendan J.; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D.; Collman, Ronald G.; Bushman, Frederic D.; Li, Hongzhe
2015-01-01
Motivation: The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence–absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. Results: We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω²). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. Availability and implementation: http://github.com/brendankelly/micropower. Contact: brendank@mail.med.upenn.edu or hongzhe@upenn.edu PMID:25819674
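The simulation recipe in this abstract is easy to prototype. The sketch below mimics the core loop: simulate two groups of abundance-like profiles differing by a small shift, build a Euclidean distance matrix, compute the PERMANOVA pseudo-F, and estimate power over repeated permutation tests. The shift size, feature count, and distance choice are illustrative assumptions, not the micropower package's defaults.

```python
import numpy as np

def pseudo_f(dist, labels):
    """PERMANOVA pseudo-F from a pairwise distance matrix (Anderson 2001)."""
    n = len(labels)
    ss_total = (dist ** 2).sum() / (2 * n)  # sum over i<j of d^2, divided by n
    ss_within = 0.0
    for g in np.unique(labels):
        idx = np.where(labels == g)[0]
        sub = dist[np.ix_(idx, idx)]
        ss_within += (sub ** 2).sum() / (2 * len(idx))
    a = len(np.unique(labels))
    return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

def permanova_power(n_per_group=10, shift=0.5, n_features=50,
                    n_sims=200, n_perms=199, alpha=0.05, seed=1):
    """Monte Carlo power: fraction of simulated studies whose permutation
    p-value falls at or below alpha. All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    labels = np.array([0] * n_per_group + [1] * n_per_group)
    hits = 0
    for _ in range(n_sims):
        x0 = rng.normal(0, 1, (n_per_group, n_features))
        x1 = rng.normal(shift / np.sqrt(n_features), 1, (n_per_group, n_features))
        x = np.vstack([x0, x1])
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        f_obs = pseudo_f(d, labels)
        f_null = [pseudo_f(d, rng.permutation(labels)) for _ in range(n_perms)]
        p = (1 + sum(f >= f_obs for f in f_null)) / (n_perms + 1)
        hits += p <= alpha
    return hits / n_sims

print(permanova_power())
```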
Sampling design for the 1980 commercial and multifamily residential building survey
NASA Astrophysics Data System (ADS)
Bowen, W. M.; Olsen, A. R.; Nieves, A. L.
1981-06-01
The extent to which new building design practices comply with the proposed 1980 energy budget levels for commercial and multifamily residential building designs (DEB-80) can be assessed by: (1) identifying a small number of building types which account for the majority of commercial buildings constructed in the U.S.A.; (2) conducting a separate survey for each building type; and (3) including only buildings designed during 1980. For each building, the design energy consumption (DEC-80) will be determined by the DOE2.1 computer program, and the quantity X = (DEC-80 − DEB-80) will be computed. These X quantities can then be used to compute sample statistics. Inferences about nationwide compliance with DEB-80 may then be made for each building type. Details of the population, sampling frame, stratification, sample size, and implementation of the sampling plan are provided.
Beno, Sarah M; Stasiewicz, Matthew J; Andrus, Alexis D; Ralyea, Robert D; Kent, David J; Martin, Nicole H; Wiedmann, Martin; Boor, Kathryn J
2016-12-01
Pathogen environmental monitoring programs (EMPs) are essential for food processing facilities of all sizes that produce ready-to-eat food products exposed to the processing environment. We developed, implemented, and evaluated EMPs targeting Listeria spp. and Salmonella in nine small cheese processing facilities, including seven farmstead facilities. Individual EMPs with monthly sample collection protocols were designed specifically for each facility. Salmonella was detected in only one facility, with likely introduction from the adjacent farm indicated by pulsed-field gel electrophoresis data. Listeria spp. were isolated from all nine facilities during routine sampling. The overall Listeria spp. (other than Listeria monocytogenes) and L. monocytogenes prevalences in the 4,430 environmental samples collected were 6.03 and 1.35%, respectively. Molecular characterization and subtyping data suggested persistence of a given Listeria spp. strain in seven facilities and persistence of L. monocytogenes in four facilities. To assess routine sampling plans, validation sampling for Listeria spp. was performed in seven facilities after at least 6 months of routine sampling. This validation sampling was performed by independent individuals and included collection of 50 to 150 samples per facility, based on statistical sample size calculations. Two of the facilities had a significantly higher frequency of detection of Listeria spp. during the validation sampling than during routine sampling, whereas two other facilities had significantly lower frequencies of detection. This study provides a model for a science- and statistics-based approach to developing and validating pathogen EMPs.
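The "statistical sample size calculations" mentioned for validation sampling are commonly of the following form; this is the standard detection-probability calculation, offered as a plausible sketch rather than the authors' exact method.

```python
import math

def samples_needed(prevalence, confidence=0.95):
    """Number of environmental samples needed so that, with the given
    confidence, at least one positive is detected when a fraction
    `prevalence` of sampling sites is truly contaminated. Assumes
    independent sites and a perfectly sensitive assay."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - prevalence))

# e.g. detecting a 6% Listeria spp. prevalence with 95% confidence
print(samples_needed(0.06))    # ~49 samples
# the 1.35% L. monocytogenes rate would need far more
print(samples_needed(0.0135))  # ~221 samples
```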
Design Constructibility Reviews.
1987-01-01
specifications for base and sub-base courses, and wearing course. Item 21 - Has provision been made in the specifications for positive control of the temperature...of the bituminous material? Item 22 - Test results on samples of asphalt, aggregate, sand and mix should be obtained from the plant prior to placing...in the drawings. Item 3 - Make sure that stud types, sizes and spacings are spelled out in the plans and specifications. Item 4 - All welders that will
Silvestre, Ellida de Aguiar; Schwarcz, Kaiser Dias; Grando, Carolina; de Campos, Jaqueline Bueno; Sujii, Patricia Sanae; Tambarussi, Evandro Vagner; Macrini, Camila Menezes Trindade; Pinheiro, José Baldin; Brancalion, Pedro Henrique Santin; Zucchi, Maria Imaculada
2018-03-16
The reproductive system of a tree species has substantial impact on genetic diversity and structure within and among natural populations. Such information should be considered when planning tree planting for forest restoration. Here, we describe the mating system and genetic diversity of an overexploited Neotropical tree, Myroxylon peruiferum L.f. (Fabaceae) sampled from a forest remnant (10 seed trees and 200 seeds) and assess whether the effective population size of nursery-grown seedlings (148 seedlings) is sufficient to prevent inbreeding depression in reintroduced populations. Genetic analyses were performed based on 8 microsatellite loci. M. peruiferum presented a mixed mating system with evidence of biparental inbreeding (t̂_m − t̂_s = 0.118). We found low levels of genetic diversity for M. peruiferum (allelic richness: 1.40 to 4.82; expected heterozygosity: 0.29 to 0.52). Based on Ne(v) within progeny, we suggest a sample size of 47 seed trees to achieve an effective population size of 100. The effective population sizes for the nursery-grown seedlings were much smaller (Ne = 27.54–34.86) than that recommended for short-term (Ne ≥ 100) population conservation. Therefore, to obtain a reasonable genetic representation of native tree species and prevent problems associated with inbreeding depression, seedling production for restoration purposes may require a much larger sampling effort than is currently used, a problem that is further complicated by species with a mixed mating system. This study emphasizes the need to integrate species reproductive biology into seedling production programs and connect conservation genetics with ecological restoration.
Pezzoli, Lorenzo; Andrews, Nick; Ronveaux, Olivier
2010-05-01
Vaccination programmes targeting disease elimination aim to achieve very high coverage levels (e.g. 95%). We calculated the precision of different clustered lot quality assurance sampling (LQAS) designs in computer-simulated surveys to provide local health officers in the field with preset LQAS plans to simply and rapidly assess programmes with high coverage targets. We calculated sample size (N), decision value (d) and misclassification errors (alpha and beta) of several LQAS plans by running 10 000 simulations. We kept the upper coverage threshold (UT) at 90% or 95% and decreased the lower threshold (LT) progressively by 5%. We measured the proportion of simulations with ≤ d individuals unvaccinated if the coverage was set at the UT (pUT) to calculate beta (1 − pUT), and the proportion of simulations with > d unvaccinated individuals if the coverage was LT% (pLT) to calculate alpha (1 − pLT). We divided N into clusters (between 5 and 10) and recalculated the errors hypothesising that the coverage would vary in the clusters according to a binomial distribution with preset standard deviations of 0.05 and 0.1 from the mean lot coverage. We selected the plans fulfilling these criteria: alpha ≤ 5% and beta ≤ 20% in the unclustered design; alpha ≤ 10% and beta ≤ 25% when the lots were divided into five clusters. When the interval between UT and LT was larger than 10% (e.g. 15%), we were able to select precise LQAS plans dividing the lot into five clusters with N = 50 (5 x 10) and d = 4 to evaluate programmes with a 95% coverage target and d = 7 to evaluate programmes with a 90% target. These plans will considerably increase the feasibility and the rapidity of conducting LQAS in the field.
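The error rates described here can be reproduced with a few lines of simulation. The sketch below, a rough stand-in for the authors' code rather than their exact implementation, draws cluster coverages around the lot mean, counts unvaccinated individuals, and applies the accept/reject rule to estimate alpha and beta for the N = 50, five-cluster, d = 4 plan.

```python
import numpy as np

def lqas_errors(n_clusters=5, per_cluster=10, d=4, ut=0.95, lt=0.80,
                sd=0.05, n_sims=10_000, seed=7):
    """Simulate a clustered LQAS plan with N = n_clusters * per_cluster.
    Returns (alpha, beta): alpha = P(accept | true coverage = lt),
    beta = P(reject | true coverage = ut). Cluster coverages vary around
    the lot mean with standard deviation `sd`, mirroring the paper's
    cluster-variation hypothesis."""
    rng = np.random.default_rng(seed)

    def p_accept(mean_cov):
        covs = np.clip(rng.normal(mean_cov, sd, (n_sims, n_clusters)), 0, 1)
        unvax = rng.binomial(per_cluster, 1 - covs).sum(axis=1)
        return (unvax <= d).mean()

    return p_accept(lt), 1 - p_accept(ut)

alpha, beta = lqas_errors()
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
```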
Dziak, John J.; Nahum-Shani, Inbal; Collins, Linda M.
2012-01-01
Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions, by helping investigators to screen several candidate intervention components simultaneously and decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or employees within organizations). In this article we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements such as the number of clusters, the number of lower-level units, and the intraclass correlation affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes, because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. PMID:22309956
Dziak, John J; Nahum-Shani, Inbal; Collins, Linda M
2012-06-01
Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions by helping investigators to screen several candidate intervention components simultaneously and to decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or when employees are nested within organizations). In this article, we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel, multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements-such as the number of clusters, the number of lower-level units, and the intraclass correlation-affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. (c) 2012 APA, all rights reserved
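A compact Monte Carlo power calculation of the kind discussed in these two records might look as follows. It assigns two effect-coded factors at the cluster level and analyzes cluster means, which is a valid simplification for balanced cluster-level factors; all parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy import stats

def power_cluster_factorial(n_clusters=32, cluster_size=20, icc=0.05,
                            eff_a=0.2, eff_b=0.2, n_sims=1000, seed=3):
    """Monte Carlo power for the main effect of factor A in a balanced
    2x2 factorial randomized at the cluster level. Outcomes are on a
    standardized scale; total variance = 1, split by the ICC."""
    rng = np.random.default_rng(seed)
    a = np.tile([-0.5, -0.5, 0.5, 0.5], n_clusters // 4)  # factor A coding
    b = np.tile([-0.5, 0.5, -0.5, 0.5], n_clusters // 4)  # factor B coding
    tau2, sig2 = icc, 1 - icc
    X = np.column_stack([np.ones(n_clusters), a, b])
    dof = n_clusters - 3
    tcrit = stats.t.ppf(0.975, dof)
    hits = 0
    for _ in range(n_sims):
        u = rng.normal(0, np.sqrt(tau2), n_clusters)            # cluster effects
        e = rng.normal(0, np.sqrt(sig2), (n_clusters, cluster_size))
        y = eff_a * a[:, None] + eff_b * b[:, None] + u[:, None] + e
        ybar = y.mean(axis=1)                                   # cluster means
        beta, res, *_ = np.linalg.lstsq(X, ybar, rcond=None)
        se = np.sqrt(res[0] / dof * np.linalg.inv(X.T @ X)[1, 1])
        hits += abs(beta[1] / se) > tcrit
    return hits / n_sims

print(power_cluster_factorial())
```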
Code of Federal Regulations, 2013 CFR
2013-10-01
... research, planning, development, design, construction, alteration, or repair of real property; and (3..., evaluations, consultations, comprehensive planning, program management, conceptual designs, plans and... include the value and size of the modification and the comparative value and size of the final product...
Code of Federal Regulations, 2014 CFR
2014-10-01
... research, planning, development, design, construction, alteration, or repair of real property; and (3..., evaluations, consultations, comprehensive planning, program management, conceptual designs, plans and... minor include the value and size of the modification and the comparative value and size of the final...
Truck size and weight enforcement technologies : implementation plan
DOT National Transportation Integrated Search
2009-06-01
The purpose of this Implementation Plan is to recommend strategies to encourage the deployment of roadside technologies to improve truck size and weight enforcement in the United States. The plan includes strategies that State practitioners can use t...
Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.
Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping
2015-06-07
Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in oncology trials, to reduce the number of patients placed on ineffective experimental therapies. Recently, Koyama and Chen (2008) discussed how to conduct proper inference for such studies because they found that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies when the actual second-stage sample sizes differ from planned ones. We consider an alternative inference method based on the likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihood. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method can be difficult to apply, the resulting estimate based on our method appears to have certain advantages in terms of inference properties in many numerical simulations. It generally led to smaller biases and narrower confidence intervals while maintaining similar coverages. We also illustrated the two methods in a real data setting. Inference procedures used with Simon's designs almost always ignore the actual sampling plan. Reported p-values, point estimates and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. Proper statistical inference procedures should be used.
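A minimal sketch of the path-ordering idea: enumerate every permissible sample path under the design, weight it by its null probability, and sum the probabilities of paths at least as extreme as the observed one. For simplicity the ordering statistic below is the terminal response-rate estimate rather than the authors' conditional likelihood, and the example design parameters are illustrative, not from the paper.

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def simon_pvalue(x1, x2, n1, r1, n, p0):
    """Exact p-value for a Simon two-stage design under H0: p = p0.
    Paths are ordered by the terminal response-rate estimate (a simple
    stand-in for the conditional-likelihood ordering described above).
    x2 is None if the trial stopped for futility after stage 1."""
    n2 = n - n1
    paths = []  # (ordering statistic, H0 probability) for every path
    for j in range(0, r1 + 1):            # stopped at stage 1 (futility)
        paths.append((j / n1, binom_pmf(j, n1, p0)))
    for j in range(r1 + 1, n1 + 1):       # continued to stage 2
        for k in range(0, n2 + 1):
            paths.append(((j + k) / n,
                          binom_pmf(j, n1, p0) * binom_pmf(k, n2, p0)))
    obs = x1 / n1 if x2 is None else (x1 + x2) / n
    return sum(pr for stat, pr in paths if stat >= obs - 1e-12)

# illustrative design parameters: n1=10, r1=1, n=29
print(simon_pvalue(x1=4, x2=3, n1=10, r1=1, n=29, p0=0.10))
```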
Results from a national survey on chronic care management by health plans.
Mattke, Soeren; Higgins, Aparna; Brook, Robert
2015-05-01
The growing burden of chronic disease necessitates innovative approaches to help patients and to ensure the sustainability of our healthcare system. Health plans have introduced chronic care management models, but systematic data on the type and prevalence of different approaches are lacking. Our goal was to conduct a systematic examination of chronic care management programs offered by health plans in the commercial market (i.e., in products sold to employers and individuals). We undertook a national survey of a representative sample of health plans (70 plans, 36% response rate) and 6 case studies on health plans' programs to improve chronic care in the commercial market. The data underwent descriptive and bivariate analyses. All plans, regardless of size, location, and ownership, offer chronic care management programs, which identify eligible members from claims data and match them to interventions based on overall risk and specific care gaps. Plans then report information on care gaps to providers and offer self-management support to their members. While internal evaluations suggest that the interventions improve care and reduce cost, plans report difficulties in engaging members and providers. To overcome those obstacles, plans are integrating their programs into provider work flow, collaborating with providers on care redesign and leveraging patient support technologies. Our study shows that chronic care management programs have become a standard component of the overall approach used by health plans to manage the health of their members.
A model for effective planning of SME support services.
Rakićević, Zoran; Omerbegović-Bijelović, Jasmina; Lečić-Cvetković, Danica
2016-02-01
This paper presents a model for effective planning of support services for small and medium-sized enterprises (SMEs). The idea is to scrutinize and measure the suitability of support services in order to give recommendations for the improvement of the support planning process. We examined the applied support services and matched them with the problems and needs of SMEs, based on a survey conducted in 2013 on a sample of 336 SMEs in Serbia. We defined and analysed five research questions that refer to support services, their consistency with the SMEs' problems and needs, and the relation between the given support and SMEs' success. The survey results showed a statistically significant connection between them. Based on this result, we proposed an eight-phase model as a method for improving support service planning for SMEs. This model helps SMEs to plan their support requirements better, and helps government and administration bodies at all levels, as well as organizations that provide support services, to understand SMEs' problems and support needs. Copyright © 2015 Elsevier Ltd. All rights reserved.
7 CFR 43.104 - Master table of single and double sampling plans.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Master table of single and double sampling plans. 43... STANDARD CONTAINER REGULATIONS STANDARDS FOR SAMPLING PLANS Sampling Plans § 43.104 Master table of single and double sampling plans. (a) In the master table, a sampling plan is selected by first determining...
7 CFR 43.104 - Master table of single and double sampling plans.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Master table of single and double sampling plans. 43... STANDARD CONTAINER REGULATIONS STANDARDS FOR SAMPLING PLANS Sampling Plans § 43.104 Master table of single and double sampling plans. (a) In the master table, a sampling plan is selected by first determining...
7 CFR 43.104 - Master table of single and double sampling plans.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Master table of single and double sampling plans. 43... STANDARD CONTAINER REGULATIONS STANDARDS FOR SAMPLING PLANS Sampling Plans § 43.104 Master table of single and double sampling plans. (a) In the master table, a sampling plan is selected by first determining...
7 CFR 43.104 - Master table of single and double sampling plans.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Master table of single and double sampling plans. 43... STANDARD CONTAINER REGULATIONS STANDARDS FOR SAMPLING PLANS Sampling Plans § 43.104 Master table of single and double sampling plans. (a) In the master table, a sampling plan is selected by first determining...
Regional HLA Differences in Poland and Their Effect on Stem Cell Donor Registry Planning
Schmidt, Alexander H.; Solloch, Ute V.; Pingel, Julia; Sauter, Jürgen; Böhme, Irina; Cereb, Nezih; Dubicka, Kinga; Schumacher, Stephan; Wachowiak, Jacek; Ehninger, Gerhard
2013-01-01
Regional HLA frequency differences are of potential relevance for the optimization of stem cell donor recruitment. We analyzed a very large sample (n = 123,749) of registered Polish stem cell donors. Donor figures by 1-digit postal code regions ranged from n = 5,243 (region 9) to n = 19,661 (region 8). Simulations based on region-specific haplotype frequencies showed that donor recruitment in regions 0, 2, 3 and 4 (mainly located in the south-eastern part of Poland) resulted in an above-average increase of matching probabilities for Polish patients. Regions 1, 7, 8, 9 (mainly located in the northern part of Poland) showed an opposite behavior. However, HLA frequency differences between regions were generally small. A strong indication for regionally focused donor recruitment efforts can, therefore, not be derived from our analyses. Results of haplotype frequency estimations showed sample size effects even for sizes between n≈5,000 and n≈20,000. This observation deserves further attention as most published haplotype frequency estimations are based on much smaller samples. PMID:24069237
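Matching-probability simulations like those described here typically rest on a standard Hardy-Weinberg registry model. A sketch under that assumption, using a hypothetical haplotype pool rather than the Polish frequency estimates:

```python
import numpy as np

def matching_probability(hap_freqs, registry_size):
    """Population matching probability under Hardy-Weinberg: a patient's
    genotype is an unordered pair of haplotypes, and a donor matches if
    the genotypes are identical. MP(N) = sum_g f_g * (1 - (1 - f_g)^N).
    A standard registry-planning model, not the paper's exact method."""
    h = np.asarray(hap_freqs, dtype=float)
    h = h / h.sum()
    g = np.outer(h, h)
    # genotype frequencies: h_i^2 on the diagonal, 2*h_i*h_j off-diagonal
    f = np.concatenate([np.diag(g), 2 * g[np.triu_indices(len(h), k=1)]])
    return float(np.sum(f * (1 - (1 - f) ** registry_size)))

# toy haplotype pool; real estimates need samples of n ~ 10^4+ as noted above
freqs = np.random.default_rng(0).dirichlet(np.ones(200) * 0.1)
for n in (5_000, 20_000, 100_000):
    print(n, round(matching_probability(freqs, n), 3))
```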
Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.
2013-01-01
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
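The pre-sampling-plus-simulation approach is straightforward to reproduce for clumped counts. The sketch below, with hypothetical parameters rather than the gall midge data, shows how the relative standard error of the sample mean shrinks with sample size for a negative binomial population with clumping parameter k.

```python
import numpy as np

def precision_by_sample_size(mean=2.0, k=0.5, sizes=(10, 25, 40, 100),
                             n_sims=2000, seed=11):
    """For negative binomial counts with mean `mean` and clumping
    parameter k, estimate the relative standard error of the sample mean
    at each candidate sample size by simulation."""
    rng = np.random.default_rng(seed)
    p = k / (k + mean)  # numpy's (n, p) parameterization gives mean n(1-p)/p
    for n in sizes:
        means = rng.negative_binomial(k, p, (n_sims, n)).mean(axis=1)
        print(f"n = {n:4d}  relative SE of mean = {means.std() / mean:.3f}")

precision_by_sample_size()
```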
Prevalence and characteristics of smokers at 30 Pacific Northwest colleges and universities.
Thompson, Beti; Coronado, Gloria; Chen, Lu; Thompson, L Anne; Halperin, Abigail; Jaffe, Robert; McAfee, Tim; Zbikowski, Susan M
2007-03-01
College is an important transition period during which young adults explore tobacco use. Few large-scale studies have been conducted among college students regarding tobacco use. We initiated a study examining tobacco use in 30 colleges and universities in the Pacific Northwest. We conducted a baseline survey among students. Sample size varied by the school size; for the 14 largest schools, we drew a random sample of all students, oversampling freshmen (n approximately 750) so that we could recruit and follow a cohort to assess smoking onset during the college years. Of the remaining students, we sampled equivalent numbers of sophomores, juniors, and seniors (n = 200 each). For the 16 schools with fewer than 1,350 students, we surveyed all students. We found overall smoking rates of 17.2%. Males (18.6%) were more likely to smoke than females (16.6%; p = .03), and public college students were more likely to smoke (20.5%) than those who attended private independent schools (18.9%; p = .61), whose rates were higher than those of private religious schools (11.6%; p = .001). Overall, college students are light smokers who do not smoke every day of the month. Further, they tend not to be highly dependent on tobacco, do not consider themselves regular smokers, and plan to quit before they graduate (56.8%). School type should be considered when estimating smoking rates among 4-year college students. Data indicate that college smokers wish and plan to quit before graduation, suggesting that efforts to assist smokers in quitting during the college years may be fruitful.
Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.
2011-01-01
Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with the presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part, because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling, but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations to approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and the probability of sampling a unit occupied by the rare species. Efficiency measures the precision of the population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population are the most important determinants of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species were approximately 1.5 times higher for TSSAV compared to SRS, and efficiency was as high as 2 (i.e., variance from TSSAV was half that of SRS). We have found that design performance, especially for adaptive designs, is often case-specific. Efficiency of adaptive designs is especially sensitive to spatial distribution. We recommend simulations tailored to the application of interest for evaluating designs in preparation for sampling rare and clustered populations.
Raines, Amanda M; Capron, Daniel W; Stentz, Lauren A; Walton, Jessica L; Allan, Nicholas P; McManus, Eliza S; Uddo, Madeline; True, Gala; Franklin, C Laurel
2017-11-01
Although the relationship between posttraumatic stress disorder (PTSD) and suicide has been firmly established, research on underlying mechanisms has been disproportionately low. The cognitive concerns subscale of anxiety sensitivity (AS), which reflects fears of cognitive dyscontrol, has been linked to both PTSD and suicide and thus may serve as an explanatory mechanism between these constructs. The sample consisted of 60 male veterans presenting to an outpatient Veteran Affairs (VA) clinic for psychological services. Upon intake, veterans completed a diagnostic interview and brief battery of self-report questionnaires to assist with differential diagnosis and treatment planning. Results revealed a significant association between PTSD symptom severity and higher suicidality (i.e., ideation, plans, and impulses), even after accounting for relevant demographic and psychological constructs. Moreover, AS cognitive concerns mediated this association. Limitations include the small sample size and cross-sectional nature of the current study. These findings add considerably to a growing body of literature examining underlying mechanisms that may help to explain the robust associations between PTSD and suicide. Considering the malleable nature of AS cognitive concerns, research is needed to determine the extent to which reductions in this cognitive risk factor are associated with reductions in suicide among at risk samples, such as those included in the present investigation. Published by Elsevier B.V.
MCNP-based computational model for the Leksell gamma knife.
Trnka, Jiri; Novotny, Josef; Kluson, Jaroslav
2007-01-01
We have focused on the use of the MCNP code for calculation of Gamma Knife radiation field parameters with a homogeneous polystyrene phantom. We have investigated several parameters of the Leksell Gamma Knife radiation field and compared the results with other studies based on the EGS4 and PENELOPE codes as well as the Leksell Gamma Knife treatment planning system Leksell GammaPlan (LGP). The current model describes all 201 radiation beams together and simulates all the sources at the same time. Within each beam, it considers the technical construction of the source, the source holder, the collimator system, the spherical phantom, and surrounding material. We have calculated output factors for various sizes of scoring volumes, relative dose distributions along basic planes including linear dose profiles, integral doses in various volumes, and differential dose volume histograms. All the parameters have been calculated for each collimator size and for the isocentric configuration of the phantom. We have found the calculated output factors to be in agreement with other authors' works except in the case of the 4 mm collimator size, where averaging over the scoring volume and statistical uncertainties strongly influence the calculated results. In general, all the results are dependent on the choice of the scoring volume. The calculated linear dose profiles and relative dose distributions also match independent studies and the Leksell GammaPlan, but care must be taken about the fluctuations within the plateau, which can influence the normalization, and accuracy in determining the isocenter position, which is important for comparing different dose profiles. The calculated differential dose volume histograms and integral doses have been compared with data provided by the Leksell GammaPlan. The dose volume histograms are in good agreement, as are integral doses calculated in small calculation matrix volumes. However, deviations in integral doses up to 50% can be observed for large volumes such as the total skull volume. The differences in the treatment of scattered radiation between the MC method and the LGP may be important in this case. We have also studied the influence of differential direction sampling of primary photons and have found that, due to the anisotropic sampling, doses around the isocenter deviate from each other by up to 6%. With caution about the details of the calculation settings, it is possible to employ the MCNP Monte Carlo code for independent verification of the Leksell Gamma Knife radiation field properties.
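The scoring-volume sensitivity reported for the 4 mm collimator can be illustrated with a toy calculation: average an assumed Gaussian dose peak over spherical scoring volumes of increasing radius and watch the volume-averaged value fall. The profile shape and width below are assumptions for illustration, not Gamma Knife beam data.

```python
import numpy as np

def sphere_averaged_dose(fwhm_mm=6.0, radii_mm=(0.25, 0.5, 1.0, 2.0),
                         n_pts=200_000, seed=5):
    """Average a Gaussian dose peak (normalized to 1 at the isocenter)
    over spherical scoring volumes of different radii, showing how the
    choice of scoring volume depresses small-field output factors."""
    rng = np.random.default_rng(seed)
    sigma = fwhm_mm / 2.3548  # FWHM to standard deviation
    for r in radii_mm:
        # uniform random points inside a sphere of radius r
        v = rng.normal(size=(n_pts, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        pts = v * (r * rng.random(n_pts) ** (1 / 3))[:, None]
        dose = np.exp(-(np.linalg.norm(pts, axis=1) ** 2) / (2 * sigma ** 2))
        print(f"scoring radius {r:4.2f} mm: mean dose = {dose.mean():.4f}")

sphere_averaged_dose()
```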
NASA Technical Reports Server (NTRS)
Zong, Jin-Ho; Li, Benqiang; Szekely, Julian
1992-01-01
A mathematical formulation is given and computed results are presented describing the behavior of electromagnetically-levitated metal droplets under the conditions of microgravity. In the formulation the electromagnetic force field is calculated using a modification of the volume integral method and these results are then combined with the FIDAP code to calculate the steady state melt velocities. The specific computational results are presented for the conditions corresponding to the planned IML-2 Space Shuttle experiment, using the TEMPUS device, which has separate 'heating' and 'positioning' coils. While the computed results are necessarily specific to the input conditions, some general conclusions may be drawn from this work. These include the fact that for the planned TEMPUS experiments the positioning coils will produce only a weak melt circulation, while the heating coils are likely to produce a mildly turbulent recirculating flow pattern within the samples. The computed results also allow us to assess the effect of sample size, material properties and the applied current on these phenomena.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Ding, X; Hu, Y
Purpose: To investigate how spot size and spacing affect plan quality, especially plan robustness and the impact of the interplay effect, of robustly-optimized intensity-modulated proton therapy (IMPT) plans for lung cancer. Methods: Two robustly-optimized IMPT plans were created for 10 lung cancer patients: (1) one for a proton beam with in-air energy-dependent large spot size at isocenter (σ: 5–15 mm) and spacing (1.53σ); (2) the other for a proton beam with small spot size (σ: 2–6 mm) and spacing (5 mm). Both plans were generated on the average CTs with internal-gross-tumor-volume density overridden to irradiate the internal target volume (ITV). The root-mean-square-dose volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under RVH curves were used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate the interplay effect with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Dose-volume-histogram indices including ITV coverage, homogeneity, and organs-at-risk (OAR) sparing were compared using the Student t-test. Results: Compared to large spots, small spots resulted in significantly better OAR sparing with comparable ITV coverage and homogeneity in the nominal plan. Plan robustness was comparable for ITV and most OARs. With the interplay effect considered, significantly better OAR sparing with comparable ITV coverage and homogeneity was observed using smaller spots. Conclusion: Robust optimization with smaller spots significantly improves OAR sparing with comparable plan robustness and a similar impact of the interplay effect compared to larger spots. Small spot size requires the use of a larger number of spots, which gives the optimizer more freedom to render a plan more robust. The ratio between spot size and spacing was found to be more relevant in determining plan robustness and the impact of the interplay effect than spot size alone. This research was supported by the National Cancer Institute Career Developmental Award K25CA168984, by the Fraternal Order of Eagles Cancer Research Fund Career Development Award, by The Lawrence W. and Marilyn W. Matteson Fund for Cancer Research, by Mayo Arizona State University Seed Grant, and by The Kemper Marley Foundation.
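The RVH metric used here for plan robustness is simple to compute once scenario doses are available. A minimal sketch, assuming the RVH is built from per-voxel RMS deviations of scenario doses from the nominal dose:

```python
import numpy as np

def rvh(dose_scenarios, nominal, n_bins=100):
    """Root-mean-square-dose volume histogram: for each structure voxel,
    take the RMS deviation of the scenario doses from the nominal dose,
    then report the fraction of structure volume whose RMS deviation
    exceeds each threshold. A smaller area under the curve indicates a
    more robust plan."""
    rms = np.sqrt(((dose_scenarios - nominal) ** 2).mean(axis=0))
    edges = np.linspace(0.0, rms.max(), n_bins)
    vf = np.array([(rms >= e).mean() for e in edges])  # volume fraction
    area = float(((vf[:-1] + vf[1:]) / 2 * np.diff(edges)).sum())
    return edges, vf, area

# toy example: 9 uncertainty scenarios over 10,000 structure voxels
rng = np.random.default_rng(2)
nominal = np.full(10_000, 60.0)                        # Gy
scenarios = nominal + rng.normal(0, 1.5, (9, 10_000))  # setup/range errors
print("area under RVH curve:", round(rvh(scenarios, nominal)[2], 3))
```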
Effects of voxelization on dose volume histogram accuracy
NASA Astrophysics Data System (ADS)
Sunderland, Kyle; Pinter, Csaba; Lasso, Andras; Fichtinger, Gabor
2016-03-01
PURPOSE: In radiotherapy treatment planning systems, structures of interest such as targets and organs at risk are stored as 2D contours on evenly spaced planes. In order to be used in various algorithms, contours must be converted into binary labelmap volumes using voxelization. The voxelization process results in lost information, which has little effect on the volume of large structures, but has significant impact on small structures, which contain few voxels. Volume differences for segmented structures affect metrics such as dose volume histograms (DVH), which are used for treatment planning. Our goal is to evaluate the impact of voxelization on segmented structures, as well as how factors like voxel size affect metrics such as DVH. METHODS: We create a series of implicit functions, which represent simulated structures. These structures are sampled at varying resolutions, and compared to labelmaps with high sub-millimeter resolutions. We generate DVH and evaluate voxelization error for the same structures at different resolutions by calculating the agreement acceptance percentage between the DVH. RESULTS: We implemented tools for analysis as modules in the SlicerRT toolkit based on the 3D Slicer platform. We found that there were large DVH variations from the baseline for small structures or for structures located in regions with a high dose gradient, potentially leading to the creation of suboptimal treatment plans. CONCLUSION: This work demonstrates that labelmap and dose volume voxel size is an important factor in DVH accuracy, which must be accounted for in order to ensure the development of accurate treatment plans.
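The voxelization effect on DVHs is easy to demonstrate with a synthetic structure. The sketch below voxelizes a small sphere at several voxel sizes and computes a cumulative DVH under a simple radial dose gradient; the geometry and dose model are illustrative, not SlicerRT internals.

```python
import numpy as np

def dvh_for_voxel_size(voxel_mm, sphere_radius_mm=5.0, grid_mm=40.0):
    """Voxelize a sphere at the given voxel size, assign each voxel a
    dose from a linear radial gradient, and return a cumulative DVH.
    Coarse voxelization distorts both the voxelized volume and the DVH
    for small structures."""
    ax = np.arange(-grid_mm / 2, grid_mm / 2, voxel_mm) + voxel_mm / 2
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2)
    inside = r <= sphere_radius_mm                       # binary labelmap
    dose = np.clip(100 * (1 - r / (2 * sphere_radius_mm)), 0, None)
    d = np.sort(dose[inside])[::-1]                      # descending doses
    return d, inside.sum() * voxel_mm**3

for vox in (0.5, 1.0, 2.0, 4.0):
    d, vol = dvh_for_voxel_size(vox)
    d95 = d[int(0.95 * d.size)]  # dose received by 95% of the volume
    print(f"voxel {vox} mm: volume = {vol:.0f} mm^3 (true ~524), D95 = {d95:.1f}")
```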
Emergency response planning in hospitals, United States: 2003-2004.
Niska, Richard W; Burt, Catharine W
2007-08-20
This study presents baseline data to determine which hospital characteristics are associated with preparedness for terrorism and natural disaster in the areas of emergency response planning and availability of equipment and specialized care units. Information from the Bioterrorism and Mass Casualty Preparedness Supplements to the 2003 and 2004 National Hospital Ambulatory Medical Care Surveys was used to provide national estimates of variations in hospital emergency response plans and resources by residency and medical school affiliation, hospital size, ownership, metropolitan statistical area status, and Joint Commission accreditation. Of 874 sampled hospitals with emergency or outpatient departments, 739 responded for an 84.6 percent response rate. Estimates are presented with 95 percent confidence intervals. About 92 percent of hospitals had revised their emergency response plans since September 11, 2001, but only about 63 percent had addressed natural disasters and biological, chemical, radiological, and explosive terrorism in those plans. Only about 9 percent of hospitals had provided for all 10 of the response plan components studied. Hospitals had a mean of about 14 personal protective suits, 21 critical care beds, 12 mechanical ventilators, 7 negative pressure isolation rooms, and 2 decontamination showers each. Hospital bed capacity was the factor most consistently associated with emergency response planning and availability of resources.
Dahlberg, Suzanne E; Shapiro, Geoffrey I; Clark, Jeffrey W; Johnson, Bruce E
2014-07-01
Phase I trials have traditionally been designed to assess toxicity and establish phase II doses with dose-finding studies and expansion cohorts, but they frequently exceed the traditional sample size to further assess endpoints in specific patient subsets. The scientific objectives of phase I expansion cohorts and their evolving role in the current era of targeted therapies have yet to be systematically examined. Adult therapeutic phase I trials opened within Dana-Farber/Harvard Cancer Center (DF/HCC) from 1988 to 2012 were identified for sample size details. Statistical designs and study objectives of those submitted in 2011 were reviewed for expansion cohort details. Five hundred twenty-two adult therapeutic phase I trials were identified during the 25 years. The average sample size of a phase I study has increased from 33.8 patients to 73.1 patients over that time. The proportion of trials with planned enrollment of 50 or fewer patients dropped from 93.0% during the time period 1988 to 1992 to 46.0% between 2008 and 2012; at the same time, the proportion of trials enrolling 51 to 100 patients and more than 100 patients increased from 5.3% and 1.8%, respectively, to 40.5% and 13.5% (χ² test, two-sided P < .001). Sixteen of the 60 trials (26.7%) in 2011 enrolled patients to three or more sub-cohorts in the expansion phase. Sixty percent of studies provided no statistical justification of the sample size, although 91.7% of trials stated response as an objective. Our data suggest that phase I studies have dramatically changed in size and scientific scope within the last decade. Additional studies addressing the implications of this trend on research processes, ethical concerns, and resource burden are needed. © The Author 2014. Published by Oxford University Press. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fountain, Matthew S.; Fiskum, Sandra K.; Baldwin, David L.
This data package contains the K Basin sludge characterization results obtained by Pacific Northwest National Laboratory during processing and analysis of four sludge core samples collected from Engineered Container SCS-CON-210 in 2010, as requested by CH2M Hill Plateau Remediation Company. Sample processing requirements, analytes of interest, detection limits, and quality control sample requirements are defined in KBC-33786, Rev. 2. The core processing scope included reconstitution of a sludge core sample distributed among four to six 4-L polypropylene bottles into a single container. The reconstituted core sample was then mixed and subsampled to support a variety of characterization activities. Additional core sludge subsamples were combined to prepare a container composite. The container composite was fractionated by wet sieving through a 2,000-micron mesh and a 500-micron mesh sieve. Each sieve fraction was sampled to support a suite of analyses. The core composite analysis scope included density determination, radioisotope analysis, and metals analysis, including the Waste Isolation Pilot Plant Hazardous Waste Facility Permit metals (with the exception of mercury). The container composite analysis included most of the core composite analysis scope plus particle size distribution, particle density, rheology, and crystalline phase identification. A summary of the received samples, core sample reconstitution and subsampling activities, container composite preparation and subsampling activities, physical properties, and analytical results is presented. Supporting data and documentation are provided in the appendices. There were no cases of sample or data loss, and all of the available samples and data are reported as required by the Quality Assurance Project Plan/Sampling and Analysis Plan.
Comparison of chain sampling plans with single and double sampling plans
NASA Technical Reports Server (NTRS)
Stephens, K. S.; Dodge, H. F.
1976-01-01
The efficiency of chain sampling is examined through matching of operating characteristic (OC) curves of chain sampling plans (ChSP) with single and double sampling plans. In particular, the operating characteristics of some ChSP-0,3 and 1,3 as well as ChSP-0,4 and 1,4 plans are presented, where the number pairs represent the first and the second cumulative acceptance numbers. The fact that the ChSP procedure uses cumulative results from two or more samples and that the parameters can be varied to produce a wide variety of operating characteristics raises the question of whether it may be possible for such plans to provide a given protection with less inspection than single or double sampling plans. The operating ratio values reported illustrate the possibilities of matching single and double sampling plans with ChSP. It is shown that chain sampling plans provide improved efficiency over single and double sampling plans having substantially the same operating characteristics.
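For readers who want to reproduce an OC-curve comparison, the snippet below evaluates Dodge's classic ChSP-1 chain plan against single sampling plans under the binomial model. Note this is the simplest chain plan, not the two-stage ChSP-0,3/1,3 family studied in the paper; the plan parameters are illustrative.

```python
from math import comb

def oc_single(p, n, c):
    """Acceptance probability of a single sampling plan (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def oc_chsp1(p, n, i):
    """Dodge's ChSP-1: accept on 0 defectives, or on 1 defective provided
    each of the preceding i samples had 0 defectives (steady-state OC):
    Pa = P(0) + P(1) * P(0)^i."""
    p0 = (1 - p) ** n
    p1 = n * p * (1 - p) ** (n - 1)
    return p0 + p1 * p0 ** i

for p in (0.005, 0.01, 0.02, 0.05):
    print(f"p={p:.3f}  single(n=20,c=0)={oc_single(p, 20, 0):.3f}  "
          f"ChSP-1(n=20,i=3)={oc_chsp1(p, 20, 3):.3f}  "
          f"single(n=20,c=1)={oc_single(p, 20, 1):.3f}")
```

The chain plan's OC curve sits between the c = 0 and c = 1 single plans at small fractions defective, which is exactly the matching behavior the abstract exploits.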
Reboussin, Beth A; Preisser, John S; Song, Eun-Young; Wolfson, Mark
2012-07-01
Under-age drinking is an enormous public health issue in the USA. Evidence that community level structures may impact on under-age drinking has led to a proliferation of efforts to change the environment surrounding the use of alcohol. Although the focus of these efforts is to reduce drinking by individual youths, environmental interventions are typically implemented at the community level with entire communities randomized to the same intervention condition. A distinct feature of these trials is the tendency of the behaviours of individuals residing in the same community to be more alike than that of others residing in different communities, which is herein called 'clustering'. Statistical analyses and sample size calculations must account for this clustering to avoid type I errors and to ensure an appropriately powered trial. Clustering itself may also be of scientific interest. We consider the alternating logistic regressions procedure within the population-averaged modelling framework to estimate the effect of a law enforcement intervention on the prevalence of under-age drinking behaviours while modelling the clustering at multiple levels, e.g. within communities and within neighbourhoods nested within communities, by using pairwise odds ratios. We then derive sample size formulae for estimating intervention effects when planning a post-test-only or repeated cross-sectional community-randomized trial using the alternating logistic regressions procedure.
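Sample-size planning for such community-randomized trials is often first approximated with a design effect before turning to the ALR-specific machinery. A textbook-style sketch for comparison (the paper's formulas, based on pairwise odds ratios, are more refined):

```python
from math import ceil
from scipy.stats import norm

def clusters_per_arm(p1, p2, m, icc, alpha=0.05, power=0.8):
    """Communities needed per arm for a two-arm, post-test-only
    cluster-randomized trial comparing prevalences p1 vs p2, with m
    youths sampled per community and within-community ICC. Uses the
    standard design effect 1 + (m - 1) * ICC."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n_ind = (za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    deff = 1 + (m - 1) * icc
    return ceil(n_ind * deff / m)

# e.g. detect a drop in past-month drinking from 30% to 24%,
# 100 youths surveyed per community, ICC = 0.02 (all values illustrative)
print(clusters_per_arm(0.30, 0.24, m=100, icc=0.02))  # ~26 communities per arm
```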
NectarCAM, a camera for the medium sized telescopes of the Cherenkov telescope array
NASA Astrophysics Data System (ADS)
Glicenstein, J.-F.; Shayduk, M.
2017-01-01
NectarCAM is a camera proposed for the medium-sized telescopes of the Cherenkov Telescope Array (CTA), which covers the core energy range of 100 GeV to 30 TeV. It has a modular design and is based on the NECTAr chip, at the heart of which is a GHz-sampling switched capacitor array and a 12-bit analog-to-digital converter. The camera will be equipped with 265 7-photomultiplier modules, covering a field of view of 8 degrees. Each module includes photomultiplier bases, high voltage supply, pre-amplifier, trigger, readout and Ethernet transceiver. The recorded events last between a few nanoseconds and tens of nanoseconds. The expected performance of the camera is discussed. Prototypes of NectarCAM components have been built to validate the design. Preliminary results of a 19-module mini-camera are presented, as well as future plans for building and testing a full-size camera.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Y; Giebeler, A; Mascia, A
Purpose: To quantitatively evaluate the dosimetric consequence of spot size variations and validate beam-matching criteria for commissioning a pencil beam model for multiple treatment rooms. Methods: A planning study was first conducted by simulating spot size variations to systematically evaluate the dosimetric impact of spot size variations in selected cases, which was used to establish the in-air spot size tolerance for beam-matching specifications. A beam model in the treatment planning system was created using in-air spot profiles acquired in one treatment room. These spot profiles were also acquired from another treatment room for assessing the actual spot size variations between the two treatment rooms. We created twenty-five test plans with targets of different sizes at different depths, and performed dose measurements along the entrance, proximal and distal target regions. The absolute doses at those locations were measured using ionization chambers at both treatment rooms, and were compared against the doses calculated by the beam model. Fifteen additional patient plans were also measured and included in our validation. Results: The beam model is relatively insensitive to spot size variations. With an average of less than 15% measured in-air spot size variation between the two treatment rooms, the average dose difference was −0.15% with a standard deviation of 0.40% for 55 measurement points within the target region; but the differences increased to 1.4%±1.1% in the entrance regions, which are more affected by in-air spot size variations. Overall, our single-room-based beam model in the treatment planning system agreed with measurements in both rooms to within 0.5% in the target region. For the fifteen patient cases, the agreement was within 1%. Conclusion: We have demonstrated that dosimetrically equivalent machines can be established when in-air spot size variations are within 15% between the two treatment rooms.
NASA Astrophysics Data System (ADS)
Collier, Jordan; Filipovic, Miroslav; Norris, Ray; Chow, Kate; Huynh, Minh; Banfield, Julie; Tothill, Nick; Sirothia, Sandeep Kumar; Shabala, Stanislav
2014-04-01
This proposal is a continuation of an extensive project (the core of Collier's PhD) to explore the earliest stages of AGN formation, using Gigahertz-Peaked Spectrum (GPS) and Compact Steep Spectrum (CSS) sources. Both are widely believed to represent the earliest stages of radio-loud AGN evolution, with GPS sources preceding CSS sources. In this project, we plan to (a) test this hypothesis, (b) place GPS and CSS sources into an evolutionary sequence with a number of other young AGN candidates, and (c) search for evidence of the evolving accretion mode. We will do this using high-resolution radio observations, with a number of other multiwavelength age indicators, of a carefully selected complete faint sample of 80 GPS/CSS sources. Analysis of the C2730 ELAIS-S1 data shows that we have so far met our goals, resolving the jets of 10/49 sources, and measuring accurate spectral indices from 0.843-10 GHz. This particular proposal is to almost triple the sample size by observing an additional 80 GPS/CSS sources in the Chandra Deep Field South (arguably the best-studied field) and allow a turnover frequency - linear size relation to be derived at >10-sigma. Sources found to be unresolved in our final sample will subsequently be observed with VLBI. Comparing those sources resolved with ATCA to the more compact sources resolved with VLBI will give a distribution of source sizes, helping to answer the question of whether all GPS/CSS sources grow to larger sizes.
Knapp, F; Viechtbauer, W; Leonhart, R; Nitschke, K; Kaller, C P
2017-08-01
Despite a large body of research on planning performance in adult schizophrenia patients, results of individual studies are equivocal, suggesting either no, moderate or severe planning deficits. This meta-analysis therefore aimed to quantify planning deficits in schizophrenia and to examine potential sources of the heterogeneity seen in the literature. The meta-analysis comprised outcomes of planning accuracy of 1377 schizophrenia patients and 1477 healthy controls from 31 different studies which assessed planning performance using tower tasks such as the Tower of London, the Tower of Hanoi and the Stockings of Cambridge. A meta-regression analysis was applied to assess the influence of potential moderator variables (i.e. sociodemographic and clinical variables as well as task difficulty). The findings indeed demonstrated a planning deficit in schizophrenia patients (95% confidence interval of the mean effect size: 0.56-0.78) that was moderated by task difficulty in terms of the minimum number of moves required for a solution. The results did not reveal any significant relationship between the extent of planning deficits and sociodemographic or clinical variables. The current results provide the first meta-analytic evidence for the commonly assumed impairments of planning performance in schizophrenia. Deficits are more likely to become manifest in problem items with higher demands on planning ahead, which may at least partly explain the heterogeneity of previous findings. As only a small fraction of studies reported coherent information on sample characteristics, future meta-analyses would benefit from more systematic reports on those variables.
Statistical methods for identifying and bounding a UXO target area or minefield
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinstry, Craig A.; Pulsipher, Brent A.; Gilbert, Richard O.
2003-09-18
The sampling unit for minefield or UXO area characterization is typically represented by a geographical block or transect swath that lends itself to characterization by geophysical instrumentation such as mobile sensor arrays. New spatially based statistical survey methods and tools, more appropriate for these unique sampling units, have been developed and implemented at PNNL (Visual Sample Plan software, ver. 2.0) with support from the US Department of Defense. Though originally developed to support UXO detection and removal efforts, these tools may also be used in current form or adapted to support demining efforts and aid in the development of new sensors and detection technologies by explicitly incorporating both sampling and detection error in performance assessments. These tools may be used to (1) determine transect designs for detecting and bounding target areas of critical size, shape, and density of detectable items of interest with a specified confidence probability, (2) evaluate the probability that target areas of a specified size, shape and density have not been missed by a systematic or meandering transect survey, and (3) support post-removal verification by calculating the number of transects required to achieve a specified confidence probability that no UXO or mines have been missed.
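The transect-design calculation in item (1) reduces, in its simplest form, to the chance that parallel transects pass over a target of given size. A stripped-down sketch with a circular target and a random lateral offset, which Visual Sample Plan generalizes to other target shapes, detection error, and meandering paths:

```python
import numpy as np

def p_traverse(target_radius, spacing, n_sims=100_000, seed=13):
    """Probability that at least one of a set of parallel transects
    (one every `spacing` units, random lateral offset) passes over a
    circular target. Analytically min(1, 2r/s); verified by simulation."""
    rng = np.random.default_rng(seed)
    offsets = rng.uniform(0, spacing, n_sims)   # target center between lines
    dist = np.minimum(offsets, spacing - offsets)  # distance to nearest line
    return (dist <= target_radius).mean(), min(1.0, 2 * target_radius / spacing)

sim, analytic = p_traverse(target_radius=10.0, spacing=50.0)
print(f"simulated {sim:.3f} vs analytic {analytic:.3f}")
```

Inverting the analytic form gives the transect spacing needed to hit a target of critical size with a specified confidence, the basic trade-off these tools automate.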
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beltran, C; Kamal, H
Purpose: To provide a multicriteria optimization algorithm for intensity-modulated radiation therapy using pencil proton beam scanning. Methods: Intensity-modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single-objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (graphics processing unit) cluster. The multicriteria optimization algorithm running time benefits from up-sampling of the CT voxel size of the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity-modulated proton therapy based on DVH control points. The results will show optimization results of a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity-modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.
40 CFR 141.802 - Coliform sampling plan.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Coliform sampling plan. 141.802... sampling plan. (a) Each air carrier under this subpart must develop a coliform sampling plan covering each... required actions, including repeat and follow-up sampling, corrective action, and notification of...
40 CFR 141.802 - Coliform sampling plan.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Coliform sampling plan. 141.802... sampling plan. (a) Each air carrier under this subpart must develop a coliform sampling plan covering each... required actions, including repeat and follow-up sampling, corrective action, and notification of...
Chen, Yumin; Fritz, Ronald D; Kock, Lindsay; Garg, Dinesh; Davis, R Mark; Kasturi, Prabhakar
2018-02-01
A step-wise, 'test-all-positive-gluten' analytical methodology has been developed and verified to assess kernel-based gluten contamination (i.e., wheat, barley and rye kernels) during gluten-free (GF) oat production. It targets GF claim compliance at the serving-size level (a pouch, or approximately 40-50 g). Oat groats are collected from GF oat production following a robust attribute-based sampling plan, then split into 75-g subsamples and ground. The R-Biopharm R5 sandwich ELISA R7001 is used to analyze the first 15-g portion of each ground subsample. A >20-ppm result disqualifies the production lot, while a >5 to <20-ppm result triggers complete analysis of the remaining 60 g of the ground sample, analyzed in 15-g portions. If all five 15-g test results are <20 ppm, and their average is <10.67 ppm (since a 20-ppm contaminant in 40 g of oats would dilute to 10.67 ppm in 75 g), the lot is passed. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
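The lot-disposition rule described above is a three-branch decision; a minimal sketch of that logic follows (hypothetical function and argument names, not the authors' code; the handling of results at exactly 5 or 20 ppm is an assumption here):

```python
def lot_disposition(first_ppm, remaining_ppm=None):
    """Step-wise disposition of a GF oat lot from R5 ELISA results (ppm).

    first_ppm: result for the first 15-g portion of a 75-g ground subsample.
    remaining_ppm: results for the other four 15-g portions, needed only
    when the first result falls in the >5 to <20 ppm trigger band.
    """
    if first_ppm >= 20:
        return "reject"                      # >20 ppm disqualifies the lot
    if first_ppm <= 5:
        return "pass"                        # below the trigger band
    results = [first_ppm] + list(remaining_ppm)   # analyze the remaining 60 g
    if all(r < 20 for r in results) and sum(results) / len(results) < 10.67:
        return "pass"    # 20 ppm in 40 g of oats dilutes to 10.67 ppm in 75 g
    return "reject"

print(lot_disposition(8.0, [4.0, 6.0, 3.0, 5.0]))  # "pass": average 5.2 ppm
```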
Authorization of Animal Experiments Is Based on Confidence Rather than Evidence of Scientific Rigor.
Vogt, Lucile; Reichlin, Thomas S; Nathues, Christina; Würbel, Hanno
2016-12-01
Accumulating evidence indicates high risk of bias in preclinical animal research, questioning the scientific validity and reproducibility of published research findings. Systematic reviews found low rates of reporting of measures against risks of bias in the published literature (e.g., randomization, blinding, sample size calculation) and a correlation between low reporting rates and inflated treatment effects. That most animal research undergoes peer review or ethical review would offer the possibility to detect risks of bias at an earlier stage, before the research has been conducted. For example, in Switzerland, animal experiments are licensed based on a detailed description of the study protocol and a harm-benefit analysis. We therefore screened applications for animal experiments submitted to Swiss authorities (n = 1,277) for the rates at which the use of seven basic measures against bias (allocation concealment, blinding, randomization, sample size calculation, inclusion/exclusion criteria, primary outcome variable, and statistical analysis plan) were described and compared them with the reporting rates of the same measures in a representative sub-sample of publications (n = 50) resulting from studies described in these applications. Measures against bias were described at very low rates, ranging on average from 2.4% for statistical analysis plan to 19% for primary outcome variable in applications for animal experiments, and from 0.0% for sample size calculation to 34% for statistical analysis plan in publications from these experiments. Calculating an internal validity score (IVS) based on the proportion of the seven measures against bias, we found a weak positive correlation between the IVS of applications and that of publications (Spearman's rho = 0.34, p = 0.014), indicating that the rates of description of these measures in applications partly predict their rates of reporting in publications. These results indicate that the authorities licensing animal experiments are lacking important information about experimental conduct that determines the scientific validity of the findings, which may be critical for the weight attributed to the benefit of the research in the harm-benefit analysis. Similar to manuscripts getting accepted for publication despite poor reporting of measures against bias, applications for animal experiments may often be approved based on implicit confidence rather than explicit evidence of scientific rigor. Our findings shed serious doubt on the current authorization procedure for animal experiments, as well as the peer-review process for scientific publications, which in the long run may undermine the credibility of research. Developing existing authorization procedures that are already in place in many countries towards a preregistration system for animal research is one promising way to reform the system. This would not only benefit the scientific validity of findings from animal experiments but also help to avoid unnecessary harm to animals for inconclusive research.
Planning Risk-Based SQC Schedules for Bracketed Operation of Continuous Production Analyzers.
Westgard, James O; Bayat, Hassan; Westgard, Sten A
2018-02-01
To minimize patient risk, "bracketed" statistical quality control (SQC) is recommended in the new CLSI guidelines for SQC (C24-Ed4). Bracketed SQC requires that a QC event both precedes and follows (brackets) a group of patient samples. In optimizing a QC schedule, the frequency of QC, or run size, becomes an important planning consideration both to maintain quality and to facilitate responsive reporting of results from continuous operation of high-production analytic systems. Different plans for optimizing a bracketed SQC schedule were investigated on the basis of Parvin's model for patient risk and CLSI C24-Ed4's recommendations for establishing QC schedules. A Sigma-metric run size nomogram was used to evaluate different QC schedules for processes of different sigma performance. For high sigma performance, an effective SQC approach is to employ a multistage QC procedure utilizing a "startup" design at the beginning of production and a "monitor" design periodically throughout production. Example QC schedules are illustrated for applications with measurement procedures having 6-σ, 5-σ, and 4-σ performance. Continuous production analyzers that demonstrate high sigma performance can be effectively controlled with multistage SQC designs that employ a startup QC event followed by periodic monitoring or bracketing QC events. Such designs can be optimized to minimize the risk of harm to patients. © 2017 American Association for Clinical Chemistry.
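The sigma metric that drives the run size nomogram is the standard Westgard calculation, Sigma = (%TEa − |%bias|) / %CV; a minimal sketch follows (the nomogram lookup itself, which maps sigma and the chosen control rule to a maximum run size, comes from the published chart and is not reproduced here):

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Westgard sigma metric: Sigma = (%TEa - |%bias|) / %CV, where TEa is
    the allowable total error and CV the observed imprecision."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Example (hypothetical assay): 6% allowable total error, 0.5% bias, 1% CV
# gives 5.5-sigma performance, a candidate for a startup + monitor design.
print(sigma_metric(6.0, 0.5, 1.0))  # 5.5
```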
Thieke, Christian; Nill, Simeon; Oelfke, Uwe; Bortfeld, Thomas
2002-05-01
In inverse planning for intensity-modulated radiotherapy, the dose calculation is a crucial element limiting both the maximum achievable plan quality and the speed of the optimization process. One way to integrate accurate dose calculation algorithms into inverse planning is to precalculate the dose contribution of each beam element to each voxel for unit fluence. These precalculated values are stored in a large dose calculation matrix. The dose calculation during the iterative optimization process then consists merely of matrix look-up and multiplication with the actual fluence values. However, because the dose calculation matrix can become very large, this ansatz requires a great deal of computer memory and is still very time consuming, making it impractical for clinical routine without further modifications. In this work we present a new method to significantly reduce the number of entries in the dose calculation matrix. The method utilizes the fact that a photon pencil beam has a rapid radial dose falloff and very small dose values for the most part. In this low-dose part of the pencil beam, the dose contribution to a voxel is only integrated into the dose calculation matrix with a certain probability. Normalization with the reciprocal of this probability preserves the total energy, even though many matrix elements are omitted. Three probability distributions were tested to find the most accurate one for a given memory size. The sampling method is compared with the use of a fully filled matrix and with the well-known method of simply cutting off the pencil beam at a certain lateral distance. A clinical example of a head and neck case is presented. It turns out that a sampled dose calculation matrix with only 1/3 of the entries of the fully filled matrix does not sacrifice the quality of the resulting plans, whereas the cutoff method results in a suboptimal treatment plan.
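The energy-preserving trick in this abstract is an unbiased sparsification: keep a low-dose matrix entry with probability p and, if kept, scale it by 1/p, so its expected value is unchanged. A minimal NumPy sketch under assumed names (the paper's three candidate probability distributions are not reproduced; the keep-probability function here is an arbitrary illustration):

```python
import numpy as np

def sparsify_row(dose_row, keep_prob, seed=0):
    """Randomly thin the low-dose tail of a pencil-beam kernel row.

    Each entry d is kept with probability p = keep_prob(d) and rescaled to
    d / p when kept, so E[stored value] = d and total energy is preserved
    in expectation while most low-dose entries become exact zeros.
    """
    rng = np.random.default_rng(seed)
    p = np.clip(keep_prob(dose_row), 1e-9, 1.0)
    keep = rng.random(dose_row.shape) < p
    return np.where(keep, dose_row / p, 0.0)

# Illustration: keep everything above 10% of the row maximum, and keep
# lower doses with probability proportional to their dose.
row = np.abs(np.random.default_rng(1).normal(size=100_000)) ** 4
cut = 0.1 * row.max()
thin = sparsify_row(row, lambda d: np.where(d > cut, 1.0, d / cut))
print(row.sum(), thin.sum(), (thin == 0).mean())  # sums agree closely
```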
Goodman, Michael; Dana Flanders, W
2007-04-01
We compare methodological approaches for evaluating gene-environment interaction using a planned study of pediatric leukemia as a practical example. We considered three design options: a full case-control study (Option I), a case-only study (Option II), and a partial case-control study (Option III), in which information on controls is limited to environmental exposure only. For each design option we determined its ability to measure the main effects of environmental factor E and genetic factor G, and the interaction between E and G. Using the leukemia study example, we calculated the sample sizes required to detect an odds ratio (OR) of 2.0 for E alone, an OR of 10 for G alone, and a G x E interaction of 3. Option I allows measuring both main effects and interaction, but requires a total sample size of 1,500 cases and 1,500 controls. Option II allows measuring only interaction, but requires just 121 cases. Option III allows calculating the main effect of E, and the interaction, but not the main effect of G, and requires a total of 156 cases and 133 controls. In this case, the partial case-control study (Option III) appears to be more efficient with respect to its ability to answer the research questions for the amount of resources required. The design options considered in this example are not limited to observational epidemiology and may be applicable in studies of pharmacogenomics, survivorship, and other areas of pediatric ALL research.
McClure, Leslie A; Szychowski, Jeff M; Benavente, Oscar; Hart, Robert G; Coffey, Christopher S
2016-10-01
The use of adaptive designs has been increasing in randomized clinical trials. Sample size re-estimation is a type of adaptation in which nuisance parameters are estimated at an interim point in the trial and the sample size re-computed based on these estimates. The Secondary Prevention of Small Subcortical Strokes study was a randomized clinical trial assessing the impact of single- versus dual-antiplatelet therapy and control of systolic blood pressure to a higher (130-149 mmHg) versus lower (<130 mmHg) target on recurrent stroke risk in a two-by-two factorial design. A sample size re-estimation was performed during the study, resulting in an increase from the planned sample size of 2500 to 3020, and we sought to determine the impact of the sample size re-estimation on the study results. We assessed the results of the primary efficacy and safety analyses with the full 3020 patients and compared them to the results that would have been observed had randomization ended with 2500 patients. The primary efficacy outcome considered was recurrent stroke, and the primary safety outcomes were major bleeds and death. We computed incidence rates for the efficacy and safety outcomes and used Cox proportional hazards models to examine the hazard ratios for each of the two treatment interventions (i.e., the antiplatelet and blood pressure interventions). In the antiplatelet intervention, the hazard ratio was not materially modified by increasing the sample size, nor did the conclusions regarding the efficacy of mono- versus dual-therapy change: there was no difference in the effect of dual- versus monotherapy on the risk of recurrent stroke (n = 3020 HR (95% confidence interval): 0.92 (0.72, 1.2), p = 0.48; n = 2500 HR (95% confidence interval): 1.0 (0.78, 1.3), p = 0.85). With respect to the blood pressure intervention, increasing the sample size resulted in less certainty in the results, as the hazard ratio for the higher versus lower systolic blood pressure target approached, but did not achieve, statistical significance with the larger sample (n = 3020 HR (95% confidence interval): 0.81 (0.63, 1.0), p = 0.089; n = 2500 HR (95% confidence interval): 0.89 (0.68, 1.17), p = 0.40). The results from the safety analyses were similar with 3020 and 2500 patients for both study interventions. Other trial-related factors, such as contracts, finances, and study management, were impacted as well. Adaptive designs can have benefits in randomized clinical trials, but do not always result in significant findings. The impact of adaptive designs should be measured in terms of both trial results and practical issues related to trial management. More post hoc analyses of study adaptations will lead to better understanding of the balance between the benefits and the costs. © The Author(s) 2016.
A Total Quality-Control Plan with Right-Sized Statistical Quality-Control.
Westgard, James O
2017-03-01
A new Clinical Laboratory Improvement Amendments option for risk-based quality-control (QC) plans became effective in January, 2016. Called an Individualized QC Plan, this option requires the laboratory to perform a risk assessment, develop a QC plan, and implement a QC program to monitor ongoing performance of the QC plan. Difficulties in performing a risk assessment may limit validity of an Individualized QC Plan. A better alternative is to develop a Total QC Plan including a right-sized statistical QC procedure to detect medically important errors. Westgard Sigma Rules provides a simple way to select the right control rules and the right number of control measurements. Copyright © 2016 Elsevier Inc. All rights reserved.
Pediatric access to dermatologists: Medicaid versus private insurance.
Chaudhry, Sofia B; Armbrecht, Eric S; Shin, Yoon; Matula, Sarah; Caffrey, Charles; Varade, Reena; Jones, Lisa; Siegfried, Elaine
2013-05-01
There is disparity in access to outpatient care for Medicaid beneficiaries. This inequity disproportionately impacts children. Access for children with skin disease may be especially limited. We sought to compare access to dermatologists for new pediatric patients insured by Medicaid versus a private plan. We surveyed 13 metropolitan markets by conducting secret-shopper scripted telephone calls to dermatology providers listed by Medicaid health plans. Paired calls, differing by insurance type, were made to each office on the same day, portraying a parent requesting a new appointment for a child with eczema. We called the offices of 723 Medicaid-listed providers. Final analysis included 471 dermatologists practicing general dermatology. Of these, an average of 44% refused a new Medicaid-insured pediatric patient. The average wait time for an appointment did not significantly vary between insurance types. Assuming that dermatologists not listed as Medicaid providers do not see Medicaid-insured children, our data indicate that pediatric Medicaid acceptance rates ranged from 6% to 64% by market, with an overall market size-weighted average acceptance rate of 19%. Relative reimbursement levels for Medicaid-insured patients did not correlate with acceptance rates. Although the most current health plan directories were used to create calling lists, these are dynamic. The sample sizes of confirmed appointments were in part limited by a lack of referral letters and/or health plan identification numbers. Only confirmed appointments were used to calculate average wait times. Access to dermatologists is limited for Medicaid-insured children with eczema. Copyright © 2012 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.
Buck Louis, Germaine M; Schisterman, Enrique F; Sweeney, Anne M; Wilcosky, Timothy C; Gore-Langton, Robert E; Lynch, Courtney D; Boyd Barr, Dana; Schrader, Steven M; Kim, Sungduk; Chen, Zhen; Sundaram, Rajeshwari
2011-09-01
The relationship between the environment and human fecundity and fertility remains virtually unstudied from a couple-based perspective in which longitudinal exposure data and biospecimens are captured across sensitive windows. In response, we completed the LIFE Study with methodology that intended to empirically evaluate a priori purported methodological challenges: implementation of population-based sampling frameworks suitable for recruiting couples planning pregnancy; obtaining environmental data across sensitive windows of reproduction and development; home-based biospecimen collection; and development of a data management system for hierarchical exposome data. We used two sampling frameworks (i.e., fish/wildlife licence registry and a direct marketing database) for 16 targeted counties with presumed environmental exposures to persistent organochlorine chemicals to recruit 501 couples planning pregnancies for prospective longitudinal follow-up while trying to conceive and throughout pregnancy. Enrolment rates varied from <1% of the targeted population (n = 424,423) to 42% of eligible couples who were successfully screened; 84% of the targeted population could not be reached, while 36% refused screening. Among enrolled couples, ∼ 85% completed daily journals while trying; 82% of pregnant women completed daily early pregnancy journals, and 80% completed monthly pregnancy journals. All couples provided baseline blood/urine samples; 94% of men provided one or more semen samples and 98% of women provided one or more saliva samples. Women successfully used urinary fertility monitors for identifying ovulation and home pregnancy test kits. Couples can be recruited for preconception cohorts and will comply with intensive data collection across sensitive windows. However, appropriately sized sampling frameworks are critical, given the small percentage of couples contacted found eligible and reportedly planning pregnancy at any point in time. © Published 2011. This article is a US Government work and is in the public domain in the USA.
OSIRIS-REx Flight Dynamics and Navigation Design
NASA Astrophysics Data System (ADS)
Williams, B.; Antreasian, P.; Carranza, E.; Jackman, C.; Leonard, J.; Nelson, D.; Page, B.; Stanbridge, D.; Wibben, D.; Williams, K.; Moreau, M.; Berry, K.; Getzandanner, K.; Liounis, A.; Mashiku, A.; Highsmith, D.; Sutter, B.; Lauretta, D. S.
2018-06-01
OSIRIS-REx is the first NASA mission to return a sample of an asteroid to Earth. Navigation and flight dynamics for the mission to acquire and return a sample of asteroid 101955 Bennu establish many firsts for space exploration. These include relatively small orbital maneuvers that are precise to ~1 mm/s, close-up operations in a captured orbit about an asteroid that is small in size and mass, and planning and orbit phasing to revisit the same spot on Bennu in similar lighting conditions. After preliminary surveys and close approach flyovers of Bennu, the sample site will be scientifically characterized and selected. A robotic shock-absorbing arm with an attached sample collection head, mounted on the main spacecraft bus, acquires the sample, requiring navigation to Bennu's surface. A touch-and-go sample acquisition maneuver will result in the retrieval of at least 60 grams of regolith, and up to several kilograms. The flight activity concludes with a return cruise to Earth and delivery of the sample return capsule (SRC) for landing and sample recovery at the Utah Test and Training Range (UTTR).
7 CFR 42.104 - Sampling plans and defects.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Sampling plans and defects. 42.104 Section 42.104... REGULATIONS STANDARDS FOR CONDITION OF FOOD CONTAINERS Procedures for Stationary Lot Sampling and Inspection § 42.104 Sampling plans and defects. (a) Sampling plans. Sections 42.109 through 42.111 show the number...
7 CFR 42.104 - Sampling plans and defects.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Sampling plans and defects. 42.104 Section 42.104... REGULATIONS STANDARDS FOR CONDITION OF FOOD CONTAINERS Procedures for Stationary Lot Sampling and Inspection § 42.104 Sampling plans and defects. (a) Sampling plans. Sections 42.109 through 42.111 show the number...
Cheok, Jessica; Pressey, Robert L; Weeks, Rebecca; Andréfouët, Serge; Moloney, James
2016-01-01
Spatial data characteristics have the potential to influence various aspects of prioritising biodiversity areas for systematic conservation planning. There has been some exploration of the combined effects of size of planning units and level of classification of physical environments on the pattern and extent of priority areas. However, these data characteristics have yet to be explicitly investigated in terms of their interaction with different socioeconomic cost data during the spatial prioritisation process. We quantify the individual and interacting effects of three factors-planning-unit size, thematic resolution of reef classes, and spatial variability of socioeconomic costs-on spatial priorities for marine conservation, in typical marine planning exercises that use reef classification maps as a proxy for biodiversity. We assess these factors by creating 20 unique prioritisation scenarios involving combinations of different levels of each factor. Because output data from these scenarios are analogous to ecological data, we applied ecological statistics to determine spatial similarities between reserve designs. All three factors influenced prioritisations to different extents, with cost variability having the largest influence, followed by planning-unit size and thematic resolution of reef classes. The effect of thematic resolution on spatial design depended on the variability of cost data used. In terms of incidental representation of conservation objectives derived from finer-resolution data, scenarios prioritised with uniform cost outperformed those prioritised with variable cost. Following our analyses, we make recommendations to help maximise the spatial and cost efficiency and potential effectiveness of future marine conservation plans in similar planning scenarios. We recommend that planners: employ the smallest planning-unit size practical; invest in data at the highest possible resolution; and, when planning across regional extents with the intention of incidentally representing fine-resolution features, prioritise the whole region with uniform costs rather than using coarse-resolution data on variable costs.
Choy-Brown, Mimi; Hamovitch, Emily K; Cuervo, Carolina; Stanhope, Victoria
2016-12-01
This study aimed to understand multiple stakeholder perspectives on implementing a recovery-oriented approach to service planning in supportive housing programs serving people with lived experience of mental illness. Multiple stakeholders (N = 57) were recruited to participate in focus groups (N = 8), including 4 with tenants, 2 with service coordinators, 1 with supervisors, and 1 with leadership. Supportive housing programs were purposively sampled from a recovery-oriented organization serving 1,500 people annually. Stakeholders' experiences with service planning and with implementing a recovery-oriented approach to service planning were explored. The authors conducted inductive thematic analyses combined with a conceptual matrix, which yielded themes across and within the stakeholder focus groups. Three themes emerged: (a) an institutional reminder: service planning experiences elicited negative emotions and reminded people of experiences in institutional settings; (b) one-size-fits-all service planning: stakeholders perceived the quality assurance tools used within the planning process as rigid toward interests beyond their own; and (c) rules and regulations: reconciling funder requirements (e.g., completion dates) while also tailoring services to tenants' particular situations challenged providers. Even in a recovery-oriented organization, findings suggest that service planning in supportive housing has limitations in responding to each tenant's iterative recovery process. Further, in this context, where people can make their home, stakeholders questioned whether the very presence of ongoing service planning activities is problematic. However, tenant-service coordinator relationships predicated on mutual respect and esteem overcame some service planning limitations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Sampling Strategy and Curation Plan of "Hayabusa" Asteroid Sample Return Mission
NASA Technical Reports Server (NTRS)
Yano, H.; Fujiwara, A.; Abe, M.; Hasegawa, S.; Kushiro, I.; Zolensky, M. E.
2004-01-01
On 9 May 2003 JST, the Japanese spacecraft MUSES-C was successfully launched from Uchinoura. The spacecraft was inserted directly into an interplanetary trajectory and renamed Hayabusa ("Falcon"), the world's first sample return spacecraft to visit a near Earth asteroid (NEA). The NEA (25143) Itokawa (formerly known as 1998 SF36) is its mission target. Its orbital and physical characteristics are well observed; the size is (490 +/- 100) x (250 +/- 55) x (180 +/- 50) m with an approximately 12-hour rotation period. It has a red-sloped S(IV)-type spectrum with strong 1- and 2-micron absorption bands, analogous to ordinary LL chondrites with a space-weathering effect. Assuming a plausible bulk density, the surface gravity of Itokawa is on the order of 10 micro-g, with an escape velocity of approximately 20 cm/s.
Ritchie, S A; Addison, D S; van Essen, F
1992-03-01
The distribution of Aedes taeniorhynchus eggshells in Florida mangrove basin forests was determined and used to design a sampling plan. Eggshells were found in 10/11 sites (91%), with a mean +/- SE density of 1.45 +/- 0.75/cc; density did not change significantly year to year. Highest densities were located on the sloping banks of hummocks, ponds and potholes. Eggshells were less clumped in distribution than eggs and larvae and thus required a smaller sample size for a given precision level. While eggshells were flushed from compact soil that was subject to runoff during heavy rain, mangrove peat, the dominant soil of eggshell-bearing sites, was less dense and had little runoff or eggshell flushing. We suggest that eggshell surveys could be used to identify Ae. taeniorhynchus oviposition sites and oviposition patterns.
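The claim that the less clumped eggshell stage needs a smaller sample follows from the usual fixed-precision sample size rule, n ≈ s² / (D·m̄)², where m̄ and s² are the sample mean and variance and D is the target ratio of standard error to mean. A minimal sketch (the 25% precision level and the variances are assumed illustrations, not values from the paper):

```python
import math

def samples_for_precision(mean, variance, rel_se=0.25):
    """Cores needed so that SE/mean <= rel_se, since SE = sqrt(variance/n).

    Clumpier stages (variance large relative to the mean) need more cores
    for the same precision, which is why eggshells beat eggs or larvae.
    """
    return math.ceil(variance / (rel_se * mean) ** 2)

# Hypothetical numbers: same mean density, eggshells less overdispersed.
print(samples_for_precision(mean=1.45, variance=4.0))   # eggshell-like: 31
print(samples_for_precision(mean=1.45, variance=20.0))  # egg/larva-like: 153
```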
Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy
NASA Astrophysics Data System (ADS)
Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.
2018-01-01
This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank in units of cGy/Monitor Unit and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to TPS calculation by gamma analysis using the same criteria. Dose profiles from IDC calculation in a homogeneous water phantom agree within 2.3% of the global max dose or 1 mm distance to agreement to measurements for all except the smallest field size. Comparing the film measurement to calculated dose, 99.9% of all voxels pass gamma analysis, comparing dose calculated by the IDC framework to TPS calculated dose for the clinical prostate plan shows 99.0% passing rate. IDC calculated dose is found to be up to 5.6% lower than dose calculated by the TPS in this case near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
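The 2%/2 mm gamma analysis used for both the film and patient-CT comparisons can be illustrated with a brute-force 1-D implementation (a sketch only; clinical gamma tools work on 2-D/3-D grids with interpolation and dose thresholds):

```python
import numpy as np

def gamma_1d(x_mm, d_ref, d_eval, dose_tol=0.02, dist_tol_mm=2.0):
    """Global 1-D gamma index on a shared grid x_mm (positions in mm).

    For each evaluated point, gamma is the minimum over reference points of
    sqrt((dx/dist_tol)^2 + (dd/(dose_tol*max(d_ref)))^2); a point passes
    when gamma <= 1.
    """
    norm = dose_tol * d_ref.max()
    gam = np.empty_like(d_eval)
    for i in range(len(d_eval)):
        dx = (x_mm - x_mm[i]) / dist_tol_mm
        dd = (d_ref - d_eval[i]) / norm
        gam[i] = np.sqrt(dx**2 + dd**2).min()
    return gam

x = np.linspace(0, 100, 501)                      # 0.2 mm grid
ref = np.exp(-((x - 50) / 15) ** 2)               # toy reference profile
ev = np.exp(-((x - 50.5) / 15) ** 2) * 1.01       # shifted, scaled copy
print((gamma_1d(x, ref, ev) <= 1).mean())         # gamma passing rate
```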
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 4 2010-01-01 2010-01-01 false Sampling. 275.11 Section 275.11 Agriculture... § 275.11 Sampling. (a) Sampling plan. Each State agency shall develop a quality control sampling plan which demonstrates the integrity of its sampling procedures. (1) Content. The sampling plan shall...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 4 2011-01-01 2011-01-01 false Sampling. 275.11 Section 275.11 Agriculture... § 275.11 Sampling. (a) Sampling plan. Each State agency shall develop a quality control sampling plan which demonstrates the integrity of its sampling procedures. (1) Content. The sampling plan shall...
Peterson, Erin L; Carlson, Susan A; Schmid, Thomas L; Brown, David R; Galuska, Deborah A
2018-01-01
The purpose of this study was to examine the association between the presence of supportive community planning documents in US municipalities and design standards and requirements supportive of active living. Cross-sectional study using data from the 2014 National Survey of Community-Based Policy and Environmental Supports for Healthy Eating and Active Living. Nationally representative sample of US municipalities; respondents were 2,005 local officials. We assessed (1) the presence of design standards and feature requirements and (2) the association between planning documents and design standards and feature requirements supportive of active living in policies for development. Using logistic regression, significant trends were identified in the presence of design standards and feature requirements by plan and by the number of supportive objectives present. Prevalence of design standards ranged from 19% (developer-dedicated right-of-way for bicycle infrastructure development) to 50% (traffic-calming features in areas with high pedestrian and bicycle volume). Features required in policies for development ranged from 14% (short/medium pedestrian-scale block sizes) to 44% (minimum sidewalk widths of 5 feet) of municipalities. As the number of objectives in municipal plans increased, there was a significant and positive trend (P < .05) in the prevalence of each design standard and requirement. Municipal planning documents containing objectives supportive of physical activity are associated with design standards and feature requirements supportive of activity-friendly communities.
NASA Astrophysics Data System (ADS)
Furuta, T.; Maeyama, T.; Ishikawa, K. L.; Fukunishi, N.; Fukasaku, K.; Takagi, S.; Noda, S.; Himeno, R.; Hayashi, S.
2015-08-01
In this research, we used a 135 MeV/nucleon carbon-ion beam to irradiate a biological sample composed of fresh chicken meat and bones, which was placed in front of a PAGAT gel dosimeter, and compared the measured and simulated transverse-relaxation-rate (R2) distributions in the gel dosimeter. We experimentally measured the three-dimensional R2 distribution, which records the dose induced by particles penetrating the sample, by using magnetic resonance imaging. The obtained R2 distribution reflected the heterogeneity of the biological sample. We also conducted Monte Carlo simulations using the PHITS code by reconstructing the elemental composition of the biological sample from its computed tomography images while taking into account the dependence of the gel response on the linear energy transfer. The simulation reproduced the experimental distal edge structure of the R2 distribution with an accuracy under about 2 mm, which is approximately the same as the voxel size currently used in treatment planning.
Anthrax Sampling and Decontamination: Technology Trade-Offs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, Phillip N.; Hamachi, Kristina; McWilliams, Jennifer
2008-09-12
The goal of this project was to answer the following questions concerning response to a future anthrax release (or suspected release) in a building: 1. Based on past experience, what rules of thumb can be determined concerning: (a) the amount of sampling that may be needed to determine the extent of contamination within a given building; (b) what portions of a building should be sampled; (c) the cost per square foot to decontaminate a given type of building using a given method; (d) the time required to prepare for, and perform, decontamination; (e) the effectiveness of a given decontamination method in a given type of building? 2. Based on past experience, what resources will be spent on evaluating the extent of contamination, performing decontamination, and assessing the effectiveness of the decontamination in a building of a given type and size? 3. What are the trade-offs between cost, time, and effectiveness for the various sampling plans, sampling methods, and decontamination methods that have been used in the past?
NASA Astrophysics Data System (ADS)
Moores, John E.; Francis, Raymond; Mader, Marianne; Osinski, G. R.; Barfoot, T.; Barry, N.; Basic, G.; Battler, M.; Beauchamp, M.; Blain, S.; Bondy, M.; Capitan, R.-D.; Chanou, A.; Clayton, J.; Cloutis, E.; Daly, M.; Dickinson, C.; Dong, H.; Flemming, R.; Furgale, P.; Gammel, J.; Gharfoor, N.; Hussein, M.; Grieve, R.; Henrys, H.; Jaziobedski, P.; Lambert, A.; Leung, K.; Marion, C.; McCullough, E.; McManus, C.; Neish, C. D.; Ng, H. K.; Ozaruk, A.; Pickersgill, A.; Preston, L. J.; Redman, D.; Sapers, H.; Shankar, B.; Singleton, A.; Souders, K.; Stenning, B.; Stooke, P.; Sylvester, P.; Tornabene, L.
2012-12-01
A Mission Control Architecture is presented for a Robotic Lunar Sample Return Mission which builds upon the experience of the landed missions of the NASA Mars Exploration Program. This architecture consists of four separate processes working in parallel at Mission Control, achieving buy-in for plans sequentially instead of simultaneously from all members of the team. These four processes were: Science Processing, Science Interpretation, Planning, and Mission Evaluation. Science Processing was responsible for creating products from data downlinked from the field and was organized by instrument. Science Interpretation was responsible for determining whether or not science goals were being met and what measurements needed to be taken to satisfy them. These processes were assisted by the Planning process, responsible for scheduling and sequencing observations, and by the Evaluation process, which fostered inter-process communication, reporting, and documentation. This organization is advantageous for its flexibility, as shown by the ability of the structure to produce plans for the rover every two hours, by the rapidity with which Mission Control team members may be trained, and by the relatively small size of each individual team. This architecture was tested in an analogue mission to the Sudbury impact structure from June 6-17, 2011. A rover was used which was capable of developing a network of locations that could be revisited using a teach-and-repeat method. This allowed the science team to process several different outcrops in parallel, downselecting at each stage to ensure that the samples selected for caching were the most representative of the site. Over the course of 10 days, 18 rock samples were collected from 5 different outcrops, 182 individual field activities - such as roving or acquiring an image mosaic or other data product - were completed within 43 command cycles, and the rover travelled over 2200 m. Downlink capacity during communications passes was filled to 74%. Sample triage was simulated to allow down-selection to 1 kg of material for return to Earth.
NASA Technical Reports Server (NTRS)
Lal, R. B.
1992-01-01
Preliminary evaluation of the data was made during the hologram processing procedure. A few representative holograms were selected and reconstructed in the HGS; photographs of sample particle images were made to illustrate the resolution of all three particle sizes. Based on these evaluations, slight modifications were requested in the hologram processing procedure to optimize the hologram exposure in the vicinity of the crystal. Preliminary examination of the data showed that all three sizes of particles could be seen and tracked throughout the chamber. Because of the vast amount of data available in the holograms, it was recommended that a detailed data reduction plan be produced, with prioritization of the different types of data that can be extracted from the holograms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palma, B; Bazalova, M; Qu, B
Purpose: We evaluated the effect of very high-energy electron (VHEE) beam parameters on the planning of a lung cancer case by means of Monte Carlo simulations. Methods: We simulated VHEE radiotherapy plans using the EGSnrc/BEAMnrc-DOSXYZnrc code. We selected a lung cancer case that was treated with 6 MV photon VMAT to be planned with VHEE. We studied the effect of beam energy (80 MeV, 100 MeV, and 120 MeV), number of equidistant beams (16 or 32), and beamlet size (3 mm, 5 mm, or 7 mm) on PTV coverage, sparing of organs at risk (OARs), and dose conformity. Inverse-planning optimization was performed in a research version of RayStation (RaySearch Laboratories AB) using identical objective functions and constraints for all VHEE plans. Results: Similar PTV coverage and dose conformity were achieved by all the VHEE plans. The 100 MeV and 120 MeV VHEE plans were equivalent to each other and were superior to the 80 MeV plan in terms of OAR sparing. The effect of using 16 or 32 equidistant beams was a mean difference in average dose of 2.4% (0%-7.7%) between the two plans. The use of a 3 mm beamlet size systematically reduced the dose to all the OARs. Based on these results we selected the 100 MeV, 16-beam, 3-mm-beamlet plan to compare against VMAT. The selected VHEE plan was more conformal than VMAT and improved OAR sparing (the heart and trachea received 125% and 177% lower dose, respectively), especially in the low-dose region. Conclusion: We determined the VHEE beam parameters that maximized OAR dose sparing and dose conformity relative to the actually delivered VMAT plan of a lung cancer case. The selected parameters could be used for the planning of other treatment sites with similar size, shape, and location. For larger targets, a larger beamlet size might be used without significantly increasing the dose. B Palma: None. M Bazalova: None. B Hardemark: Employee, RaySearch Americas. E Hynning: Employee, RaySearch Americas. B Qu: None. B Loo Jr.: Research support, RaySearch, Varian. P Maxim: Research support, RaySearch, Varian.
Urban land-use study plan for the National Water-Quality Assessment Program
Squillace, P.J.; Price, C.V.
1996-01-01
This study plan is for Urban Land-Use Studies initiated as part of the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program. There are two Urban Land-Use Study objectives: (1) Define the water quality in recharge areas of shallow aquifers underlying areas of new residential and commercial land use in large metropolitan areas, and (2) determine which natural and human factors most strongly affect the occurrence of contaminants in these shallow aquifers. To meet objective 1, each NAWQA Study Unit will install and collect water samples from at least 30 randomly located monitoring wells in a metropolitan area. To meet objective 2, aquifer characteristics and land-use information will be documented. This includes particle-size analysis of each major lithologic unit both in the unsaturated zone and in the aquifer near the water table. The percentage of organic carbon also will be determined for each lithologic unit. Geographic information system coverages will be created that document existing land use around the wells. These data will aid NAWQA personnel in relating natural and human factors to the occurrence of contaminants. Water samples for age dating also will be collected from all monitoring wells, but the samples will be stored until the occurrence of contaminants has been determined. Age-date analysis will be done only on those samples that have no detectable concentrations of anthropogenic contaminants.
Planning for the Impacts of Highway Relief Routes on Small- and Medium-Size Communities
DOT National Transportation Integrated Search
2001-03-01
This report explores possible strategies for minimizing the negative impacts and maximizing the positive impacts of highway relief routes on small- and medium-size communities in Texas. Planning strategies are identified through a : literature search...
Barnard, Neal D; Levin, Susan M; Gloede, Lise; Flores, Rosendo
2018-06-01
In research settings, plant-based (vegan) eating plans improve diabetes management, typically reducing weight, glycemia, and low-density lipoprotein (LDL) cholesterol concentrations to a greater extent than has been shown with portion-controlled eating plans. The study aimed to test whether similar benefits could be found using weekly nutrition classes in a typical endocrinology practice, hypothesizing that a vegan eating plan would improve glycemic control, weight, lipid concentrations, blood pressure, and renal function and would do so more effectively than a portion-controlled eating plan. In a 20-week trial, participants were randomly assigned to a low-fat vegan or portion-controlled eating plan. Individuals with type 2 diabetes treated in a single endocrinology practice in Washington, DC, participated (45 starters, 40 completers). Participants attended weekly after-hours classes in the office waiting room. The vegan plan excluded animal products and added oils and favored low-glycemic index foods. The portion-controlled plan included energy intake limits for weight loss (typically a deficit of 500 calories/day) and provided guidance on portion sizes. Body weight, hemoglobin A1c (HbA1c), plasma lipids, urinary albumin, and blood pressure were measured. For normally distributed data, t tests were used; for skewed outcomes, rank-based approaches were implemented (Wilcoxon signed-rank test for within-group changes, Wilcoxon two-sample test for between-group comparisons, and exact Hodges-Lehmann estimation to estimate effect sizes). Although participants were in generally good metabolic control at baseline, body weight, HbA1c, and LDL cholesterol improved significantly within each group, with no significant differences between the two eating plans (weight: -6.3 kg vegan, -4.4 kg portion-controlled, between-group P=0.10; HbA1c, -0.40 percentage point in both groups, P=0.68; LDL cholesterol -11.9 mg/dL vegan, -12.7 mg/dL portion-controlled, P=0.89). Mean urinary albumin was normal at baseline and did not meaningfully change. Blood pressure changes were not significant. Weekly classes, integrated into a clinical practice and using either a low-fat vegan or portion-controlled eating plan, led to clinical improvements in individuals with type 2 diabetes. Copyright © 2018 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
Jan, Show-Li; Shieh, Gwowen
2016-08-31
The 2 × 2 factorial design is widely used for assessing the existence of interaction and the extent of generalizability of two factors where each factor has only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with selected linear contrasts. To correct for potential heterogeneity of the variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize the total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using optimization techniques and a screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. In contrast, the suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the usefulness and implementation of the recommended technique in planning 2 × 2 factorial designs.
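The quoted allocation rule, n1/n2 = (σ1/σ2) / sqrt(c1/c2), generalizes to k groups as n_i ∝ σ_i / sqrt(c_i). A minimal sketch of that existing rule under a fixed total budget (this is the baseline the authors improve on, not their optimization-plus-screening procedure):

```python
import math

def allocate_by_rule(budget, sds, costs):
    """Allocate group sizes with n_i proportional to sd_i / sqrt(cost_i),
    scaled so that sum(n_i * cost_i) equals the total budget."""
    w = [s / math.sqrt(c) for s, c in zip(sds, costs)]
    scale = budget / sum(wi * ci for wi, ci in zip(w, costs))
    return [max(1, math.floor(scale * wi)) for wi in w]

# Example: the four cells of a 2 x 2 design with unequal spreads and a
# doubled per-subject cost in two cells (all numbers hypothetical).
print(allocate_by_rule(budget=400, sds=[4, 8, 6, 6], costs=[1, 1, 2, 2]))
```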
Pérez-Rodríguez, J; Martínez-Blay, V; Soto, A; Selfa, J; Monzó, C; Urbaneja, A; Tena, A
2017-12-05
Delottococcus aberiae De Lotto (Hemiptera: Pseudococcidae) is the latest exotic mealybug species introduced in citrus in the Mediterranean basin. It causes severe distortion and size reduction on developing fruits. Due to its first interaction with citrus, D. aberiae economic thresholds are still unknown for this crop and the current Integrated Pest Management programs have been disrupted. The objectives of this study were to determine the aggregation patterns of D. aberiae in citrus, develop an efficient sampling plan to assess its population density, and calculate its Economic and Economic Environmental Injury Levels (EIL and EEIL, respectively). Twelve and 19 orchards were sampled in 2014 and 2015, respectively. At each orchard, population densities were monitored fortnightly in leaves, twigs, and fruit, and fruit damage was determined at harvest. Our results showed a clumped aggregation of D. aberiae in all organs with no significant differences between generations on fruit. Fruit damage at harvest was strongly correlated with fruit occupation in spring. Based on these results and using chlorpyrifos as the insecticide of reference, the EIL and EEIL were calculated as 7.1 and 12.1% of occupied fruit in spring, respectively. With all this, we recommend sampling 275 fruits using a binomial sampling method or alternatively, 140 fruits with an enumerative method bimonthly between petal fall and July. © The Author(s) 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Lu, Siqi; Wang, Xiaorong; Wu, Junyong
2018-01-01
The paper presents a data-driven method, based on the K-means clustering algorithm, for generating planning scenarios for the siting and sizing of distributed photovoltaic (PV) units in a distribution network. Taking the power losses of the network, the installation and maintenance costs of distributed PV, the profit of distributed PV, and the voltage offset as objectives, and the locations and sizes of distributed PV units as decision variables, a Pareto-optimal front is obtained with a self-adaptive genetic algorithm (GA), and solutions are ranked by the technique for order preference by similarity to an ideal solution (TOPSIS). Finally, the planning schemes at the top of the ranking list are selected according to different planning emphases after detailed analysis. The proposed method is applied to a 10-kV distribution network in Gansu Province, China, and the results are discussed.
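TOPSIS, the ranking step named above, normalizes the objective matrix, weights it, and scores each Pareto solution by its relative closeness to an ideal point. A minimal NumPy sketch (weights and scores here are hypothetical; in the paper the objective values would come from the GA's Pareto front):

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) with TOPSIS.

    benefit[j] is True when larger values of criterion j are better.
    Returns relative closeness to the ideal solution; higher is better.
    """
    z = scores / np.linalg.norm(scores, axis=0)   # vector-normalize columns
    v = z * weights                               # apply criterion weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - worst, axis=1)
    return d_neg / (d_pos + d_neg)

# Three candidate PV plans scored on (profit, power losses, voltage offset):
plans = np.array([[5.0, 2.0, 0.3], [4.0, 1.5, 0.2], [6.0, 3.0, 0.5]])
rank = topsis(plans, weights=np.array([0.4, 0.3, 0.3]),
              benefit=np.array([True, False, False]))
print(rank.argsort()[::-1])  # indices of plans, best first
```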
Feil, A; Thoden van Velzen, E U; Jansen, M; Vitz, P; Go, N; Pretz, T
2016-02-01
The recovery of beverage cartons (BC) in three lightweight packaging waste processing plants (LP) was analyzed with different input materials and input masses in the range of 21-50 Mg. The data were generated by gravimetric determination of the sorting products, sampling, and sorting analysis. Since the particle size of beverage cartons is larger than 120 mm, a modified sampling plan was implemented, with multiple samplings (3-11 individual samplings) and total sample sizes of about 1200 L (ca. 60 kg) for the BC products and about 2400 L (ca. 120 kg) for the material-heterogeneous mixed plastics (MP) and sorting residue products. The results imply that the beverage carton yield in the process, i.e., including all product-containing material streams, can be specified only with considerable fluctuation ranges. Consequently, the total assessment across all product streams is qualitative rather than quantitative. Irregular operating conditions as well as unfavorable sampling conditions and capacity overloads are likely causes of the high confidence intervals. From the results of the current study, recommendations can be derived for better sampling in LP-processing plants. Despite the suboptimal statistical results, the findings indicate very clearly that the plants have definite optimisation potential with regard to the yield of beverage cartons as well as the required product purity. Due to the test character of the sorting trials, the plant parameterization was not ideal for this sorting task, and consequently the results should be interpreted with care. Copyright © 2015 Elsevier Ltd. All rights reserved.
Real-time inverse planning for Gamma Knife radiosurgery.
Wu, Q Jackie; Chankong, Vira; Jitprapaikulsarn, Suradet; Wessels, Barry W; Einstein, Douglas B; Mathayomchan, Boonyanit; Kinsella, Timothy J
2003-11-01
The challenges of real-time Gamma Knife inverse planning are the large number of variables involved and the search space being unknown a priori. With limited collimator sizes, shots have to be heavily overlapped to form a smooth prescription isodose line that conforms to the irregular target shape. Such overlaps greatly influence the total number of shots per plan, making pre-determination of the total number of shots impractical. However, this total number of shots usually defines the search space, a prerequisite for most optimization methods. Since each shot covers only part of the target, a collection of shots at different locations and with various collimator sizes makes up the global dose distribution that conforms to the target. Hence, planning or placing these shots is a combinatorial optimization process that is computationally expensive by nature. We have previously developed a theory of shot placement and optimization based on skeletonization. The real-time inverse planning process reported in this paper is an expansion and the clinical implementation of this theory. The complete planning process consists of two steps. The first step is to determine an optimal number of shots, including locations and sizes, and to assign an initial collimator size to each shot. The second step is to fine-tune the weights using a linear-programming technique. The objective function is to minimize the total dose to the target boundary (i.e., maximize dose conformity). Results for an ellipsoid test target and ten clinical cases are presented. The clinical cases are also compared with the physician's manual plans. The target coverage is more than 99% for the manual plans and 97% for all the inverse plans. The RTOG PITV conformity indices for the manual plans are between 1.16 and 3.46, compared to 1.36 to 2.4 for the inverse plans. All the inverse plans are generated in less than 2 min, making real-time inverse planning a reality.
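The weight fine-tuning step lends itself to a compact linear program: minimize total boundary dose subject to every target point receiving at least the prescription. A sketch with scipy.optimize.linprog (the exact constraints of the published method are not specified in the abstract, so the coverage constraint here is an assumption):

```python
import numpy as np
from scipy.optimize import linprog

def tune_shot_weights(boundary_dose, target_dose, prescription):
    """Fine-tune shot weights by linear programming (illustrative sketch).

    boundary_dose[i, j]: dose at boundary point i from shot j at unit weight.
    target_dose[k, j]:   dose at target point k from shot j at unit weight.
    Minimizes total boundary dose subject to target_dose @ w >= prescription
    and w >= 0.
    """
    n_shots = boundary_dose.shape[1]
    c = boundary_dose.sum(axis=0)          # boundary dose per unit weight
    res = linprog(c,
                  A_ub=-target_dose,       # -A w <= -p  <=>  A w >= p
                  b_ub=-np.full(target_dose.shape[0], prescription),
                  bounds=[(0.0, None)] * n_shots)
    return res.x if res.success else None
```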
Concept Study For A Near-term Mars Surface Sample Return Mission
NASA Astrophysics Data System (ADS)
Smith, M. F.; Thatcher, J.; Sallaberger, C.; Reedman, T.; Pillinger, C. T.; Sims, M. R.
The return of samples from the surface of Mars is a challenging problem. Present mission planning is for complex missions to return large, focused samples sometime in the next decade. There is, however, much scientific merit in returning a small sample of Martian regolith before the end of this decade at a fraction of the cost of the more ambitious missions. This paper sets out the key elements of this concept, which builds on the work of the Beagle 2 project and space robotics work in Canada. The paper will expand the science case for returning a regolith sample that is only in the range of 50-250g but would nevertheless include plenty of interesting material, as the regolith comprises soil grains from a wide variety of locations, i.e. nearby rocks, sedimentary formations and materials moved by fluids, winds and impacts. It is possible that a fine core sample could also be extracted and returned. The mission concept is to send a lander sized at around 130kg on the 2007 or 2009 opportunity, immediately collect the sample from the surface, launch it to Mars orbit, have it collected by the lander's parent craft, and make an immediate Earth return. Return to Earth orbit is envisaged rather than direct Earth re-entry. The lander concept is essentially a twice-size Beagle 2 carrying the sample collection and return-capsule loading equipment plus the ascent vehicle. The return capsule is envisaged as no more than 1kg. An overall description of the mission along with methods for sample acquisition, orbital rendezvous and capsule return will be outlined and the overall systems budgets presented. To demonstrate the near-term feasibility of the mission, the use of existing Canadian and European technologies will be highlighted.
Naltrexone and Cognitive Behavioral Therapy for the Treatment of Alcohol Dependence
Baros, AM; Latham, PK; Anton, RF
2008-01-01
Background: Sex differences in regard to pharmacotherapy for alcoholism are a topic of concern following publications suggesting that naltrexone, one of the longest-approved treatments for alcoholism, is not as effective in women as in men. This study was conducted by combining two randomized placebo-controlled clinical trials utilizing similar methodologies and personnel, in which the data were amalgamated to evaluate sex effects in a reasonably sized sample. Methods: 211 alcoholics (57 female; 154 male) were randomized to the naltrexone/CBT or placebo/CBT arm of the two clinical trials analyzed. Baseline variables were examined for differences between sex and treatment groups via analysis of variance (ANOVA) for continuous variables or the chi-square test for categorical variables. All initial outcome analysis was conducted under an intent-to-treat analysis plan. Effect sizes for naltrexone over placebo were determined by Cohen's d. Results: The effect size of naltrexone over placebo for the following outcome variables was similar in men and women (% days abstinent (PDA) d=0.36, % heavy drinking days (PHDD) d=0.36 and total standard drinks (TSD) d=0.36). Only in men were the differences significant, secondary to the larger sample size (PDA p=0.03; PHDD p=0.03; TSD p=0.04). There were a few variables (GGT change from baseline to week 12: men d=0.36, p=0.05; women d=0.20, p=0.45; and drinks per drinking day: men d=0.36, p=0.05; women d=0.28, p=0.34) where the naltrexone effect size for men was greater than for women. In women, naltrexone tended to increase continuous abstinent days before a first drink (women d=0.46, p=0.09; men d=0.00, p=0.44). Conclusions: The effect size of naltrexone over placebo appeared similar in women and men in our hands, suggesting that the reported sex differences in naltrexone response might have to do with sample size and/or endpoint drinking variables rather than any inherent pharmacological or biological differences in response. PMID:18336635
SU-E-J-81: Adaptive Radiotherapy for IMRT Head & Neck Patient in AKUH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yousuf, A; Qureshi, B; Qadir, A
2015-06-15
Purpose: In this study we propose adaptive radiotherapy for IMRT patients, which would bring an additional dimension to the management of patients with H&N cancer at Aga Khan University Hospital. Methods: Five head and neck (H&N) patient plans were selected for which repeat CT scans had been acquired during the course of treatment. They were simulated with the IMRT technique to study the consequences of the anatomical changes that may occur during treatment, since such changes can be more dramatic than with conventional treatment. All organs at risk were contoured according to RTOG guidelines, and doses were checked per NCCN guidelines. Results: The reduction in size of the planning target volume (PTV) was more than 20% in all cases, which led to a 3-5% overdose to normal tissues and organs at risk. Conclusion: Through this study we would like to emphasize the importance of adaptive radiotherapy practice for all IMRT (H&N) patients, although prospective studies with larger sample sizes are required to address the safety and clinical effect of such approaches on patient outcome; protocols also need to be developed before implementation of this technique in practice.
Willingness to participate in accountable care organizations: health care managers' perspective.
Wan, Thomas T H; Demachkie Masri, Maysoun; Ortiz, Judith; Lin, Blossom Y J
2014-01-01
This study examines how health care managers responded to the accountable care organization (ACO). The effects of perceived benefits and barriers on the commitment to develop a strategic plan for ACOs and on willingness to participate in ACOs are analyzed, using organizational social capital, health information technology use, health systems integration and size of the health networks, geographic factors, and knowledge about ACOs as predictors. Propensity score matching and analysis are used to adjust for state and regional variations. When the number of perceived benefits is greater than the number of perceived barriers, health care managers are more likely to reveal a stronger commitment to develop a strategic plan for ACO adoption. Health care managers who perceived their organizations as lacking leadership support or commitment, financial incentives, and legal and regulatory support for ACO adoption were less willing to participate in ACOs in the future. Future research should gather more diverse views from a larger sample of health professionals regarding ACO participation. The perspective of health care managers should be seriously considered in the adoption of an innovative health care delivery system, and transparency in policy formulation should take their multiple views into account.
Evolving directions in NASA's planetary rover requirements and technology
NASA Technical Reports Server (NTRS)
Weisbin, C. R.; Montemerlo, Mel; Whittaker, W.
1993-01-01
The evolution of NASA's planning for planetary rovers (that is, robotic vehicles that may be deployed on planetary bodies for exploration, science analysis, and construction) is reviewed, along with some of the technology developed to achieve the desired capabilities. The program comprises a variety of vehicle sizes and types in order to accommodate a range of potential user needs: vehicles whose weight spans a few kilograms to several thousand kilograms; whose locomotion is implemented using wheels, tracks, or legs; and whose payloads vary from microinstruments to large-scale assemblies for construction. Robotic vehicles and their associated control systems, developed in the late 1980s as part of a proposed Mars Rover Sample Return (MRSR) mission, are described. Goals suggested at the time for such an MRSR mission included navigating for one to two years across hundreds of kilometers of Martian surface; traversing a diversity of rugged, unknown terrain; collecting and analyzing a variety of samples; and bringing back selected samples to the lander for return to Earth. Current plans (considerably more modest), which have evolved both from technological 'lessons learned' in the previous period and from the modified aspirations of NASA missions, are presented. Some of the demonstrated capabilities of the developed machines and the technologies that made these capabilities possible are described.
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches for handling large samples in tests of fit have been developed. One strategy for handling the sample size problem is to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when adjustments are applied to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, fit is exaggerated and misfit underestimated when the adjusted sample size function is used. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
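The adjustment strategy discussed here can be illustrated with a toy statistic. The sketch below assumes the common form T = (N − 1)·F_min for a fit chi-square, so that adjusting to a nominal sample size amounts to rescaling by (N_adj − 1)/(N − 1); a simple Wald chi-square stands in for a measurement-model fit statistic.

```python
# Minimal illustration (assumed form of the adjustment): a chi-square computed
# as T = (N - 1) * F_min scales linearly in N, so an "adjusted" value for a
# smaller nominal N is T_adj = (N_adj - 1) / (N - 1) * T.  Compare this with
# recomputing the statistic on an actual random subsample.
import numpy as np

rng = np.random.default_rng(1)
N = 21_000
x = rng.normal(0.1, 1.0, N)          # data with a small true misfit from mu = 0

def chisq_mean_zero(sample):
    """Wald chi-square for H0: mu = 0 (stands in for a model-fit statistic)."""
    n = len(sample)
    t = sample.mean() / (sample.std(ddof=1) / np.sqrt(n))
    return t ** 2

T_full = chisq_mean_zero(x)
for n_adj in (10_000, 5_000, 1_000, 200):
    T_adjusted = (n_adj - 1) / (N - 1) * T_full
    T_subsample = chisq_mean_zero(rng.choice(x, n_adj, replace=False))
    print(f"n={n_adj:6d}  adjusted={T_adjusted:8.2f}  random subsample={T_subsample:8.2f}")
```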
Estimating population trends with a linear model
Bart, Jonathan; Collins, Brian D.; Morrison, R.I.G.
2003-01-01
We describe a simple and robust method for estimating trends in population size. The method may be used with Breeding Bird Survey data, aerial surveys, point counts, or any other program of repeated surveys at permanent locations. Surveys need not be made at each location during each survey period. The method differs from most existing methods in being design based, rather than model based. The only assumptions are that the nominal sampling plan is followed and that sample size is large enough for use of the t-distribution. Simulations based on two bird data sets from natural populations showed that the point estimate produced by the linear model was essentially unbiased even when counts varied substantially and 25% of the complete data set was missing. The estimating-equation approach, often used to analyze Breeding Bird Survey data, performed similarly on one data set but had substantial bias on the second data set, in which counts were highly variable. The advantages of the linear model are its simplicity, flexibility, and that it is self-weighting. A user-friendly computer program to carry out the calculations is available from the senior author.
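As a flavour of the approach, the sketch below fits a straight line to yearly mean counts from repeated surveys with missing visits and uses the t distribution for the slope's confidence interval. It is a simplified illustration, not the authors' exact design-based estimator or their self-weighting scheme.

```python
# A minimal design-based flavour of trend estimation (a sketch, not the
# authors' exact estimator): regress counts on year, allowing missing
# location-by-year visits, and use the t distribution for the slope's CI.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
years = np.arange(1990, 2001)
n_sites = 30
true_trend = -1.5                     # birds per site per year
counts = 50 + true_trend * (years - years[0]) + rng.normal(0, 8, (n_sites, len(years)))
counts[rng.random(counts.shape) < 0.25] = np.nan   # ~25% of visits missing

yearly_mean = np.nanmean(counts, axis=0)           # mean count across surveyed sites
res = stats.linregress(years, yearly_mean)
half_width = stats.t.ppf(0.975, len(years) - 2) * res.stderr
print(f"trend = {res.slope:.2f} +/- {half_width:.2f} birds/site/year")
```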
Bayes factor design analysis: Planning for compelling evidence.
Schönbrodt, Felix D; Wagenmakers, Eric-Jan
2018-02-01
A sizeable literature exists on the use of frequentist power analysis in the null-hypothesis significance testing (NHST) paradigm to facilitate the design of informative experiments. In contrast, there is almost no literature that discusses the design of experiments when Bayes factors (BFs) are used as a measure of evidence. Here we explore Bayes Factor Design Analysis (BFDA) as a useful tool to design studies for maximum efficiency and informativeness. We elaborate on three possible BF designs, (a) a fixed-n design, (b) an open-ended Sequential Bayes Factor (SBF) design, where researchers can test after each participant and can stop data collection whenever there is strong evidence for either the null or the alternative hypothesis, and (c) a modified SBF design that defines a maximal sample size where data collection is stopped regardless of the current state of evidence. We demonstrate how the properties of each design (i.e., expected strength of evidence, expected sample size, expected probability of misleading evidence, expected probability of weak evidence) can be evaluated using Monte Carlo simulations and equip researchers with the necessary information to compute their own Bayesian design analyses.
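The sequential designs can be explored with a small Monte Carlo simulation. The sketch below implements an open-ended SBF design with a symmetric stopping threshold of 10; the Bayes factor uses the rough BIC approximation for a one-sample test, an assumption made for brevity rather than the authors' choice of Bayes factor.

```python
# Monte Carlo sketch of an open-ended Sequential Bayes Factor design.  The BF
# here uses the rough BIC approximation for a one-sample test of mu = 0 (an
# assumption for brevity; the paper's analyses use proper Bayes factors).
import numpy as np

rng = np.random.default_rng(3)

def bf10_bic(x):
    """BF10 ~= exp((BIC_null - BIC_alt) / 2) for H0: mu = 0 vs H1: mu free."""
    n = len(x)
    rss0 = np.sum(x ** 2)               # null model: mean fixed at 0
    rss1 = np.sum((x - x.mean()) ** 2)  # alternative: mean estimated
    bic0 = n * np.log(rss0 / n) + 1 * np.log(n)
    bic1 = n * np.log(rss1 / n) + 2 * np.log(n)
    return np.exp((bic0 - bic1) / 2)

def run_sbf(effect, n_min=10, threshold=10.0):
    x = list(rng.normal(effect, 1.0, n_min))
    while True:
        bf = bf10_bic(np.array(x))
        if bf >= threshold or bf <= 1 / threshold:
            return len(x), bf >= threshold
        x.append(rng.normal(effect, 1.0))

sims = [run_sbf(effect=0.4) for _ in range(200)]
ns, hits = zip(*sims)
print(f"expected N at stopping: {np.mean(ns):.0f}; P(evidence for H1): {np.mean(hits):.2f}")
```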
Tretiach, M; Pittao, E; Crisafulli, P; Adamo, P
2011-01-15
The hypothesis that exposure environment and land use influence element accumulation and particulate size composition in transplants of Hypnum cupressiforme was tested using moss-bags containing oven-devitalized material. The samples were exposed for three months in ten green sites and ten roadsides in two areas with different land use (A, residential; B, residential/industrial) in the Trieste conurbation (NE Italy). Observations by SEM and EDX-ray microanalysis revealed that particle density was lower in samples exposed in A than in B, with a prevalence of particles containing Al, Ca, Fe and Si, in good accordance with the element contents measured by acid digestion and ICP-MS. Moss-bags were generally less contaminated in green sites than at roadsides, apparently due to the different enrichment in coarse particles. In both environments, however, the majority of entrapped particles (up to 98.2%) belong to the inhalable, small size classes (≤PM(10)). The need for careful selection of the exposure sites during the phase of biomonitoring planning is discussed. Copyright © 2010 Elsevier B.V. All rights reserved.
Statistical issues in quality control of proteomic analyses: good experimental design and planning.
Cairns, David A
2011-03-01
Quality control is becoming increasingly important in proteomic investigations as experiments become more multivariate and quantitative. Quality control applies to all stages of an investigation and statistics can play a key role. In this review, the role of statistical ideas in the design and planning of an investigation is described. This involves the design of unbiased experiments using key concepts from statistical experimental design, the understanding of the biological and analytical variation in a system using variance components analysis and the determination of a required sample size to perform a statistically powerful investigation. These concepts are described through simple examples and an example data set from a 2-D DIGE pilot experiment. Each of these concepts can prove useful in producing better and more reproducible data. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
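The sample-size determination mentioned above is usually the standard two-group calculation, with the variance term built from the biological and analytical components. A minimal sketch, with made-up variance components:

```python
# Sketch of the standard sample-size calculation for comparing two group means,
# the kind of planning step the review describes.  sigma combines biological
# and analytical variance components (values below are made up for illustration).
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a two-sample comparison of means."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

sigma_biological, sigma_analytical = 0.25, 0.10    # assumed SDs on log scale
sigma = (sigma_biological ** 2 + sigma_analytical ** 2) ** 0.5
print(f"n per group ~= {n_per_group(delta=0.2, sigma=sigma):.0f}")
```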
Physicians' perceptions of an eczema action plan for atopic dermatitis.
Ntuen, Edidiong; Taylor, Sarah L; Kinney, Megan; O'Neill, Jenna L; Krowchuk, Daniel P; Feldman, Steven R
2010-01-01
Poor adherence to topical medications in atopic dermatitis may lead to exposure to more costly and potentially toxic systemic agents. Written action plans (WAPs) improve adherence and treatment outcomes in asthma patients and may be useful for children with atopic dermatitis. The aim was to assess physicians' perceptions of a WAP for atopic dermatitis and their openness to using it. An Eczema Action Plan (EAP) was modeled from those used in pediatric asthma. A brief survey to assess the perceived practicality and usefulness of the EAP was sent to 48 pediatricians in our local area and to 17 pediatric dermatologists nationally. Survey items included layout, graphics, readability, accuracy, and utility. Qualitative analyses were performed due to small sample sizes. Seventeen pediatricians from five community practices and eight pediatric dermatologists responded (response rates of 35% and 41%, respectively). The layout was rated as excellent by 59% of pediatricians and 43% of pediatric dermatologists, the graphics as good (60% and 70%), the readability as good to excellent (100% and 86%), the accuracy as excellent or good (83% and 86%), and the usefulness as good to excellent (100% of both groups). Most (71%) of the pediatric dermatologists reported already having their own patient education materials for atopic dermatitis, but none of the pediatricians did. All pediatricians and 60% of pediatric dermatologists reported they were likely to use the EAP in their clinical practices. Limitations included the small sample size, although it still allowed a qualitative assessment of generalists and sub-specialists. We did not assess how the EAP would be perceived by patients or their families. The practice settings of the community and academic physicians are not identical, which may weaken comparisons. Pediatricians are open to using an EAP for atopic dermatitis. If an EAP were effective at improving adherence and outcomes in atopic dermatitis, widespread implementation should be feasible.
sGD: software for estimating spatially explicit indices of genetic diversity.
Shirk, A J; Cushman, S A
2011-09-01
Anthropogenic landscape changes have greatly reduced the population size, range and migration rates of many terrestrial species. The small local effective population size of remnant populations favours loss of genetic diversity leading to reduced fitness and adaptive potential, and thus ultimately greater extinction risk. Accurately quantifying genetic diversity is therefore crucial to assessing the viability of small populations. Diversity indices are typically calculated from the multilocus genotypes of all individuals sampled within discretely defined habitat patches or larger regional extents. Importantly, discrete population approaches do not capture the clinal nature of populations genetically isolated by distance or landscape resistance. Here, we introduce spatial Genetic Diversity (sGD), a new spatially explicit tool to estimate genetic diversity based on grouping individuals into potentially overlapping genetic neighbourhoods that match the population structure, whether discrete or clinal. We compared the estimates and patterns of genetic diversity using patch or regional sampling and sGD on both simulated and empirical populations. When the population did not meet the assumptions of an island model, we found that patch and regional sampling generally overestimated local heterozygosity, inbreeding and allelic diversity. Moreover, sGD revealed fine-scale spatial heterogeneity in genetic diversity that was not evident with patch or regional sampling. These advantages should provide a more robust means to evaluate the potential for genetic factors to influence the viability of clinal populations and guide appropriate conservation plans. © 2011 Blackwell Publishing Ltd.
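The core computation behind the neighbourhood idea can be sketched as follows. This is an illustration of grouping individuals into distance-based neighbourhoods and computing expected heterozygosity (He = 1 − Σ p_i²) within each, not the published sGD tool's own implementation; all data below are simulated.

```python
# Sketch of the neighbourhood idea: compute expected heterozygosity
# He = 1 - sum(p_i^2) from the genotypes of all individuals lying within a
# radius of each focal individual (simulated coordinates and genotypes).
import numpy as np

rng = np.random.default_rng(4)
n_ind, n_loci = 150, 10
xy = rng.uniform(0, 100, (n_ind, 2))                   # individual coordinates
genotypes = rng.integers(0, 4, (n_ind, n_loci, 2))     # 4 alleles, 2 copies/locus

def expected_het(geno_subset):
    """Mean He across loci for one neighbourhood of individuals."""
    he = []
    for locus in range(geno_subset.shape[1]):
        alleles = geno_subset[:, locus, :].ravel()
        freqs = np.bincount(alleles) / alleles.size
        he.append(1.0 - np.sum(freqs ** 2))
    return np.mean(he)

radius = 20.0
for i in range(3):                                     # first few individuals
    dist = np.linalg.norm(xy - xy[i], axis=1)
    neighbourhood = genotypes[dist <= radius]
    print(f"individual {i}: n = {(dist <= radius).sum():3d}, "
          f"He = {expected_het(neighbourhood):.3f}")
```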
Detection and characterization of atypical capripoxviruses among small ruminants in India.
Santhamani, Ramasamy; Venkatesan, Gnanavel; Minhas, Sanjeevna Kumari; Shivachandra, Sathish Bhadravati; Muthuchelvan, Dhanavelu; Pandey, Awadh Bihari; Ramakrishnan, Muthannan Andavar
2015-08-01
Recent developments in molecular biology shed light on cross-species transmission of SPPV and GTPV. The present study was planned to characterize the capripoxviruses circulating under field conditions among sheep and goats, using RPO30 gene-based viral lineage (SPPV/GTPV) differentiating PCR and sequencing of the RPO30 and GPCR genes from clinical samples. Out of 58 scabs (35 sheep and 23 goats) screened, 27 sheep and 18 goat scabs were found positive for capripox virus infections. With the exception of one sheep scab and one goat scab, all the positive samples yielded an amplicon size according to host origin, i.e., SPPV in sheep and GTPV in goats. In the two exceptional cases, the goat scab and sheep scab yielded amplicon sizes corresponding to SPPV and GTPV, respectively. Further, sequencing and phylogenetic analyses of the complete ORFs of the RPO30 and GPCR genes from six sheep and three goat scabs revealed that, with the exception of the above two samples, all had host-specific signatures and clustered according to their host origin. In the cross-species-infecting samples, the sheep scab possessed GTPV-like signatures and the goat scab possessed SPPV-like signatures. Our study identifies the circulation of cross-infecting SPPV and GTPV in the field and warrants the development of a single-strain vaccine that can protect animals from both sheeppox and goatpox diseases.
Long-Term Ecological Monitoring Field Sampling Plan for 2007
DOE Office of Scientific and Technical Information (OSTI.GOV)
T. Haney
2007-07-31
This field sampling plan describes the field investigations planned for the Long-Term Ecological Monitoring Project at the Idaho National Laboratory Site in 2007. This plan and the Quality Assurance Project Plan for Waste Area Groups 1, 2, 3, 4, 5, 6, 7, 10, and Removal Actions constitute the sampling and analysis plan supporting long-term ecological monitoring sampling in 2007. The data collected under this plan will become part of the long-term ecological monitoring data set that is being collected annually. The data will be used to determine the requirements for subsequent long-term ecological monitoring. This plan guides the 2007 investigations, including sampling, quality assurance, quality control, analytical procedures, and data management. As such, this plan will help to ensure that the resulting monitoring data will be scientifically valid, defensible, and of known and acceptable quality.
Qureshi, Asma M
2010-11-09
To determine whether training of providers participating in franchise clinic networks is associated with increased family planning service use among low-income urban families in Pakistan. The study uses 2001 survey data consisting of interviews with 1113 clinical and non-clinical providers working in public and private hospitals/clinics. Data analysis excludes non-clinical providers, reducing the sample size to 822. Variables for the analysis are divided into client volume and training in family planning. Regression models are used to compute the association between training and service use in franchise versus private non-franchise clinics. In franchise clinic networks, staff are 6.5 times more likely to receive family planning training (P = 0.00) relative to private non-franchises. Service use was significantly associated with training (P = 0.00), franchise affiliation (P = 0.01), providers' years of family planning experience (P = 0.02) and the number of trained staff working at government owned clinics (P = 0.00). In this setting, nurses are significantly less likely to receive training compared to doctors (P = 0.00). These findings suggest that franchises recruit and train various cadres of health workers, and training may be associated with increased service use through improvement in quality of services.
Capello, Katia; Bortolotti, Laura; Lanari, Manuela; Baioni, Elisa; Mutinelli, Franco; Vascellari, Marta
2015-01-01
The knowledge of the size and demographic structure of animal populations is a necessary prerequisite for any population-based epidemiological study, especially to ascertain and interpret prevalence data, to implement surveillance plans for controlling zoonotic diseases and, moreover, to provide accurate estimates of tumour incidence data obtained by population-based registries. The main purpose of this study was to provide an accurate estimate of the size and structure of the canine population in Veneto region (north-eastern Italy), using the Lincoln-Petersen version of the capture-recapture methodology. The Regional Canine Demographic Registry (BAC) and a sample survey of households of Veneto Region were the capture and recapture sources, respectively. The secondary purpose was to estimate the size and structure of the feline population in the same region, using the same survey applied to the dog population. A sample of 2465 randomly selected households was drawn and submitted to a questionnaire using the CATI technique, in order to obtain information about the ownership of dogs and cats. If the dog was declared to be identified, the owner's information was used to recapture the dog in the BAC. The study was conducted in Veneto Region during 2011, when the dog population recorded in the BAC was 605,537. Overall, 616 households declared to possess at least one dog (25%), with a total of 805 dogs and an average per household of 1.3. The capture-recapture analysis showed that 574 dogs (71.3%, 95% CI: 68.04-74.40%) had been recaptured in both sources, providing a dog population estimate of 849,229 (95% CI: 814,747-889,394), 40% higher than that registered in the BAC. Concerning cats, 455 of 2465 (18%, 95% CI: 17-20%) households declared to possess at least one cat at the time of the telephone interview, with a total of 816 cats. The mean number of cats per household was equal to 1.8, providing an estimate of the cat population in Veneto region equal to 663,433 (95% CI: 626,585-737,159). The estimates of the size and structure of the owned canine and feline populations in Veneto region provide useful data for epidemiological studies and monitoring plans in this area. Copyright © 2014 Elsevier B.V. All rights reserved.
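The Lincoln-Petersen arithmetic can be reproduced directly from the figures reported above (registry dogs as the capture source, survey dogs as the recapture source). The Chapman-corrected form is included as a commonly used small-sample variant, though the abstract does not state which correction, if any, was applied.

```python
# Lincoln-Petersen sketch using the study's figures: n1 dogs in the registry
# (capture), n2 owned dogs found by the survey (recapture), m found in both.
n1 = 605_537      # dogs recorded in the BAC registry
n2 = 805          # dogs enumerated by the household survey
m = 574           # survey dogs recaptured in the registry

lincoln_petersen = n1 * n2 / m
chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1    # common bias-corrected variant
print(f"Lincoln-Petersen: {lincoln_petersen:,.0f}")   # ~849,229, as reported
print(f"Chapman:          {chapman:,.0f}")
```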
Noyes, Julie A; Thomovsky, Stephanie A; Chen, Annie V; Owen, Tina J; Fransson, Boel A; Carbonneau, Kira J; Matthew, Susan M
2017-10-01
To determine the influence of preoperative computed tomography (CT) versus magnetic resonance (MR) on hemilaminectomies planned to treat thoracolumbar (TL) intervertebral disc (IVD) extrusions in chondrodystrophic dogs. Prospective clinical study. Forty chondrodystrophic dogs with TL IVD extrusion and preoperative CT and MR studies. MR and CT images were randomized and reviewed by 4 observers masked to the dog's identity and corresponding imaging studies. Observers planned the location along the spine, side, and extent (number of articular facets to be removed) based on individual reviews of CT and MR studies. Intra-observer agreement was determined between overall surgical plan, location, side, and size of the hemilaminectomy planned on CT versus MR of the same dog. Similar surgical plans were developed based on MR versus CT in 43.5%-66.6% of dogs, depending on the observer. Intra-observer agreement in location, side, and size of the planned hemilaminectomy based on CT versus MR ranged between 48.7%-66.6%, 87%-92%, and 51.2%-71.7% of dogs, respectively. Observers tended to plan larger laminectomy defects based on MR versus CT of the same dog. Findings from this study indicated considerable differences in hemilaminectomies planned on preoperative MR versus CT imaging. Surgical location and size varied the most; the side of planned hemilaminectomies was most consistent between imaging modalities. © 2017 The American College of Veterinary Surgeons.
Evaluating information content of SNPs for sample-tagging in re-sequencing projects.
Hu, Hao; Liu, Xiang; Jin, Wenfei; Hilger Ropers, H; Wienker, Thomas F
2015-05-15
Sample-tagging is designed for identification of accidental sample mix-up, which is a major issue in re-sequencing studies. In this work, we develop a model to measure the information content of SNPs, so that we can optimize a panel of SNPs that approaches the maximal information for discrimination. The analysis shows that as few as 60 optimized SNPs can differentiate the individuals in a population as large as the present world, and only 30 optimized SNPs are in practice sufficient for labeling up to 100 thousand individuals. In simulated populations of 100 thousand individuals, the average Hamming distances generated by the optimized set of 30 SNPs are larger than 18, and the duality frequency is lower than 1 in 10 thousand. This strategy of sample discrimination proves robust for large sample sizes and across different datasets. The optimized sets of SNPs are designed for whole exome sequencing, and a program is provided for SNP selection, allowing for customized SNP numbers and genes of interest. A sample-tagging plan based on this framework will improve re-sequencing projects in terms of reliability and cost-effectiveness.
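Two ingredients of such a framework can be sketched: ranking SNPs by an information measure and checking the pairwise Hamming distances of the resulting tags. The criterion below (genotype entropy under Hardy-Weinberg equilibrium) and all frequencies are illustrative assumptions, not the paper's model.

```python
# Sketch: rank SNPs by information content (here, per-SNP genotype entropy
# under HWE, an assumed criterion) and check pairwise Hamming distances
# between the resulting sample tags.  All frequencies are simulated.
import numpy as np

rng = np.random.default_rng(5)
n_snps, n_samples = 500, 1000
maf = rng.uniform(0.01, 0.5, n_snps)               # minor allele frequencies

def genotype_entropy(p):
    """Shannon entropy of genotype frequencies (p^2, 2pq, q^2) under HWE."""
    g = np.array([p ** 2, 2 * p * (1 - p), (1 - p) ** 2])
    return -np.sum(g * np.log2(g), axis=0)

panel = np.argsort(genotype_entropy(maf))[-30:]    # 30 most informative SNPs
p = maf[panel]
genotypes = (rng.random((n_samples, 30)) < p).astype(int) + \
            (rng.random((n_samples, 30)) < p).astype(int)   # 0/1/2 allele counts

# Pairwise Hamming distance for a few sample pairs
for i, j in [(0, 1), (0, 2), (1, 2)]:
    print(f"samples {i},{j}: Hamming distance = {(genotypes[i] != genotypes[j]).sum()}")
```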
Pezzoli, L; Tchio, R; Dzossa, A D; Ndjomo, S; Takeu, A; Anya, B; Ticha, J; Ronveaux, O; Lewis, R F
2012-01-01
We used the clustered lot quality assurance sampling (clustered-LQAS) technique to identify districts with low immunization coverage and guide mop-up actions during the last 4 days of a combined oral polio vaccine (OPV) and yellow fever (YF) vaccination campaign conducted in Cameroon in May 2009. We monitored 17 pre-selected districts at risk for low coverage. We designed LQAS plans to reject districts with YF vaccination coverage <90% and with OPV coverage <95%. In each lot the sample size was 50 (five clusters of 10) with decision values of 3 for assessing OPV and 7 for YF coverage. We 'rejected' 10 districts for low YF coverage and 14 for low OPV coverage. Hence we recommended a 2-day extension of the campaign. Clustered-LQAS proved to be useful in guiding the campaign vaccination strategy before the completion of the operations.
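The decision rules can be examined with a simple operating-characteristic calculation. The sketch below treats the 50 children as independent binomial draws, a deliberate simplification: the actual design used five clusters of ten, and within-cluster correlation would flatten these curves.

```python
# Operating characteristics of the n = 50 LQAS plans, treating the 50 children
# as independent draws (a simplification of the clustered design).  A district
# is "rejected" when the number unvaccinated exceeds the decision value d.
from scipy.stats import binom

def p_reject(coverage, n=50, d=7):
    """P(more than d unvaccinated among n) at a given true coverage."""
    return 1 - binom.cdf(d, n, 1 - coverage)

for cov in (0.80, 0.85, 0.90, 0.95):
    print(f"coverage {cov:.0%}:  P(reject | YF, d=7) = {p_reject(cov, d=7):.2f}"
          f"   P(reject | OPV, d=3) = {p_reject(cov, d=3):.2f}")
```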
Sediment transport monitoring for sustainable hydropower development
NASA Astrophysics Data System (ADS)
Rüther, Nils; Guerrero, Massimo; Stokseth, Siri
2015-04-01
Due to the increasing demand for CO2-neutral energy, not only in Europe but worldwide, a relatively large number of new hydropower plants (HPP) are being built. In addition, existing ones are being refurbished and renewed in order to run them more cost-effectively. A major threat to HPPs is incoming suspended sediment from the rivers upstream. The sediments settle in the reservoir, reduce the effective head and volume, and consequently shorten the lifetime of the reservoir. In addition, the fine sediments cause severe damage to the turbines and infrastructure of an HPP. To estimate the amount of incoming suspended sediment, and thus to plan efficient countermeasures, it is essential to monitor the rivers within the catchment of the HPP for suspended sediments. This work is considerably time-consuming, requires highly trained personnel, and is therefore expensive. Consequently, this study presents a method to measure suspended sediment concentrations and their grain size distribution with a dual-frequency acoustic Doppler current profiler (ADCP). This method is more cost-effective and reliable than traditional measurement methods. With more detailed information about the sediments being transported in a river, a hydropower plant can be planned, built, and operated much more efficiently and sustainably. The two horizontal ADCPs are installed at a measurement cross section in the Devoll River in Albania. To verify the new method, suspended load concentrations will also be monitored in the traditional way at the same cross section. It is planned to install turbidity measurement devices together with automatic sampling devices, and also to use an optical in situ measurement device (LISST-SL by Sequoia Inc.) to obtain detailed information on sediment concentration and grain sizes over the depth.
Camacho, A; Eggo, R M; Goeyvaerts, N; Vandebosch, A; Mogg, R; Funk, S; Kucharski, A J; Watson, C H; Vangeneugden, T; Edmunds, W J
2017-01-23
Declining incidence and spatial heterogeneity complicated the design of phase 3 Ebola vaccine trials during the tail of the 2013-16 Ebola virus disease (EVD) epidemic in West Africa. Mathematical models can provide forecasts of expected incidence through time and can account for both vaccine efficacy in participants and effectiveness in populations. Determining expected disease incidence was critical to calculating power and determining trial sample size. In real-time, we fitted, forecasted, and simulated a proposed phase 3 cluster-randomized vaccine trial for a prime-boost EVD vaccine in three candidate regions in Sierra Leone. The aim was to forecast trial feasibility in these areas through time and guide study design planning. EVD incidence was highly variable during the epidemic, especially in the declining phase. Delays in trial start date were expected to greatly reduce the ability to discern an effect, particularly as a trial with an effective vaccine would cause the epidemic to go extinct more quickly in the vaccine arm. Real-time updates of the model allowed decision-makers to determine how trial feasibility changed with time. This analysis was useful for vaccine trial planning because we simulated effectiveness as well as efficacy, which is possible with a dynamic transmission model. It contributed to decisions on choice of trial location and feasibility of the trial. Transmission models should be utilised as early as possible in the design process to provide mechanistic estimates of expected incidence, with which decisions about sample size, location, timing, and feasibility can be determined. Copyright © 2016. Published by Elsevier Ltd.
Comparison of the efficiency between two sampling plans for aflatoxins analysis in maize
Mallmann, Adriano Olnei; Marchioro, Alexandro; Oliveira, Maurício Schneider; Rauber, Ricardo Hummes; Dilkin, Paulo; Mallmann, Carlos Augusto
2014-01-01
Variance and performance of two sampling plans for aflatoxins quantification in maize were evaluated. Eight lots of maize were sampled using two plans: manual, using sampling spear for kernels; and automatic, using a continuous flow to collect milled maize. Total variance and sampling, preparation, and analysis variance were determined and compared between plans through multifactor analysis of variance. Four theoretical distribution models were used to compare aflatoxins quantification distributions in eight maize lots. The acceptance and rejection probabilities for a lot under certain aflatoxin concentration were determined using variance and the information on the selected distribution model to build the operational characteristic curves (OC). Sampling and total variance were lower at the automatic plan. The OC curve from the automatic plan reduced both consumer and producer risks in comparison to the manual plan. The automatic plan is more efficient than the manual one because it expresses more accurately the real aflatoxin contamination in maize. PMID:24948911
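An OC curve of the kind described can be sketched from a plan's total variance alone. The normal measurement model and the 20 µg/kg acceptance limit below are illustrative assumptions, not the distribution models fitted in the study.

```python
# Sketch of an operating characteristic (OC) curve built from a plan's total
# variance: the probability a lot is accepted, as a function of its true
# aflatoxin concentration.  Normality and the 20 ug/kg limit are illustrative.
from scipy.stats import norm

limit = 20.0                      # accept the lot if the measured value <= limit
for total_sd in (5.0, 10.0):      # smaller total variance = steeper OC curve
    print(f"total SD = {total_sd} ug/kg")
    for true_conc in (10, 15, 20, 25, 30):
        p_accept = norm.cdf(limit, loc=true_conc, scale=total_sd)
        print(f"  true concentration {true_conc:2d} ug/kg -> P(accept) = {p_accept:.2f}")
```

A steeper OC curve (lower total variance) reduces both the producer's risk of rejecting good lots and the consumer's risk of accepting contaminated ones, which is the sense in which the automatic plan outperformed the manual one.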
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moteabbed, M; Depauw, N; Kooy, H
Purpose: To investigate the dosimetric benefits of pencil beam scanning (PBS) compared with passively scattered (PS) proton therapy for the treatment of pediatric head & neck patients as a function of the PBS spot size, and to explore the advantages of using apertures in PBS. Methods: Ten pediatric patients with head & neck cancers treated by PS proton therapy at our institution were retrospectively selected. The histologies included rhabdomyosarcoma, ependymoma, astrocytoma, craniopharyngioma and germinoma. The prescribed dose ranged from 36 to 54 Gy(RBE). Five PBS plans were created for each patient using variable spot size (average sigma at isocenter) and choice of beam-specific apertures: (1) 10mm spots, (2) 10mm spots with apertures, (3) 6mm spots, (4) 6mm spots with apertures, and (5) 3mm spots. The plans were optimized for intensity modulated proton therapy (IMPT) with no single-beam uniformity constraints. Dose volume indices as well as equivalent uniform dose (EUD) were compared between PS and PBS plans. Results: Although target coverage was clinically adequate for all cases, the plans with the largest (10mm) spots provided inferior quality compared with PS in terms of dose to organs-at-risk (OAR). However, adding apertures to these plans ensured lower OAR dose than PS. The average EUD differences between PBS and PS plans over all patients and organs at risk were (1) 2.5%, (2) −5.1%, (3) −5%, (4) −7.8%, and (5) −9.5%. As the spot size decreased, more conformal plans were achieved that offered similar target coverage but lower dose to the neighboring healthy organs, while alleviating the need for apertures. Conclusion: The application of PBS does not always translate to better plan quality compared to PS, depending on the available beam spot size. We recommend that institutions with a spot size larger than ∼6mm at isocenter consider using apertures to guarantee clinically comparable or superior dosimetric efficacy relative to PS treatments.
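The EUD metric used for these comparisons is commonly computed in its generalized (Niemierko) form, EUD = (Σ_i v_i·D_i^a)^(1/a), with v_i the fractional volumes of the dose bins and a a tissue-specific parameter. A minimal sketch with hypothetical dose-volume data:

```python
# Generalized EUD from a (differential) dose-volume histogram.  The dose bins,
# volumes, and parameter values below are arbitrary illustrative choices.
import numpy as np

def gEUD(doses, volumes, a):
    """Generalized EUD from dose bins (Gy) and fractional volumes."""
    v = np.asarray(volumes) / np.sum(volumes)
    return float(np.sum(v * np.asarray(doses, dtype=float) ** a) ** (1.0 / a))

doses = [10, 20, 30, 40]           # hypothetical dose bins for one organ at risk
volumes = [0.4, 0.3, 0.2, 0.1]     # fraction of the organ in each bin
for a in (1, 4, 10):               # larger a weights hot spots more heavily
    print(f"a = {a:2d}: gEUD = {gEUD(doses, volumes, a):.1f} Gy")
```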
40 CFR 62.14410 - Are there different emission limits for different locations and sizes of HMIWI?
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF STATE PLANS FOR DESIGNATED FACILITIES AND POLLUTANTS Federal Plan Requirements for Hospital/Medical/Infectious Waste... rural, small, medium, and large HMIWI. To determine the size category of your HMIWI, consult the...
Xu, Jie; Li, Deng; Ma, Ruo-fan; Barden, Bertram; Ding, Yue
2015-11-01
Total hip arthroplasty (THA) is challenging in cases of osteoarthritis secondary to developmental dysplasia of the hip (DDH), where acetabular deficiency makes positioning of the acetabular component difficult. Computed tomography-based, patient-specific three-dimensional (3-D) rapid prototyping technology (RPT) models were used to plan the placement of the acetabular cup, so that a surgeon was able to identify pelvic structures, assess the ideal extent of reaming, and determine the size of the cup after a reconstructive procedure. Intraclass correlation coefficients (ICCs) were used to analyze the agreement between the sizes of components chosen on the basis of preoperative planning and the actual sizes used in the operation. The use of the 3-D RPT models facilitated the surgical procedure through better planning and improved orientation. Copyright © 2015 Elsevier Inc. All rights reserved.
7 CFR 43.103 - Purpose and scope.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... SAMPLING PLANS Sampling Plans § 43.103 Purpose and scope. (a) This subpart contains selected single and double sampling plans for inspection by attributes. They are to serve as a source of plans for developing...
Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin
2017-06-01
A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at the point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (equation included in the full-text article). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
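The "powered to detect" bands reported above follow from inverting the usual two-sample power formula. The sketch below recovers the smallest standardized mean difference detectable at 80% power and two-sided alpha = 0.05 for a range of trial sizes, including the review's average of 153 participants.

```python
# Sketch of the calculation behind "powered to detect" bands: inverting the
# two-sample normal-approximation power formula to get the smallest
# standardized mean difference (SMD) detectable at given power and alpha.
from scipy.stats import norm

def detectable_smd(n_per_arm, alpha=0.05, power=0.80):
    """Smallest SMD detectable with a two-sample test (normal approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * (2.0 / n_per_arm) ** 0.5

for total_n in (50, 100, 153, 300, 700):   # 153 = the review's average trial size
    print(f"total n = {total_n:3d}: detectable SMD ~= {detectable_smd(total_n / 2):.2f}")
```

At the review's average size of 153, the detectable SMD is roughly 0.45, consistent with only a third of trials being powered for differences below 0.5.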
Infant formula samples: perinatal sources and breast-feeding outcomes at 1 month postpartum.
Thurston, Amanda; Bolin, Jocelyn H; Chezem, Jo Carol
2013-01-01
The purpose was to describe sources of infant formula samples during the perinatal period and assess their associations with breast-feeding outcomes at 1 month postpartum. Subjects included expectant mothers who anticipated breast-feeding at least 1 month. Infant feeding history and sources of formula samples were obtained at 1 month postpartum. Associations between sources and breast-feeding outcomes were assessed using partial correlation. Of the 61 subjects who initiated breast-feeding, most were white (87%), married (75%), college-educated (75%), and planned exclusive breast-feeding (82%). Forty-two subjects (69%) continued breast-feeding at 1 month postpartum. Subjects received formula samples from the hospital (n = 40; 66%), physician's office (n = 10; 16%), and mail (n = 41; 67%). There were no significant correlations between formula samples from the hospital, physician's office, and/or mail and any or exclusive breast-feeding at 1 month (P > .05). In addition to the hospital, a long-standing source of formula samples, mail was also frequently reported as a route for distribution. The lack of statistically significant associations between formula samples and any or exclusive breast-feeding at 1 month may be related to small sample size and unique characteristics of the group studied.
Himes Boor, Gina K
2014-02-01
For species listed under the U.S. Endangered Species Act (ESA), the U.S. Fish and Wildlife Service and National Marine Fisheries Service are tasked with writing recovery plans that include "objective, measurable criteria" that define when a species is no longer at risk of extinction, but neither the act itself nor agency guidelines provide an explicit definition of objective, measurable criteria. Past reviews of recovery plans, including one published in 2012, show that many criteria lack quantitative metrics with clear biological rationale and are not meeting the measureable and objective mandate. I reviewed how objective, measureable criteria have been defined implicitly and explicitly in peer-reviewed literature, the ESA, other U.S. statutes, and legal decisions. Based on a synthesis of these sources, I propose the following 6 standards be used as minimum requirements for objective, measurable criteria: contain a quantitative threshold with calculable units, stipulate a timeframe over which they must be met, explicitly define the spatial extent or population to which they apply, specify a sampling procedure that includes sample size, specify a statistical significance level, and include justification by providing scientific evidence that the criteria define a species whose extinction risk has been reduced to the desired level. To meet these 6 standards, I suggest that recovery plans be explicitly guided by and organized around a population viability modeling framework even if data or agency resources are too limited to complete a viability model. When data and resources are available, recovery criteria can be developed from the population viability model results, but when data and resources are insufficient for model implementation, extinction risk thresholds can be used as criteria. A recovery-planning approach centered on viability modeling will also yield appropriately focused data-acquisition and monitoring plans and will facilitate a seamless transition from recovery planning to delisting. © 2013 Society for Conservation Biology.
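The kind of criterion proposed here can be made concrete with a minimal stochastic population viability sketch. All parameters below (growth rate, variability, quasi-extinction threshold, timeframe) are hypothetical illustrations, not values from any recovery plan.

```python
# Minimal stochastic PVA sketch: evaluating a criterion such as
# "quasi-extinction risk below 5% over 50 years" under log-normal growth.
import numpy as np

rng = np.random.default_rng(6)

def extinction_risk(n0, mean_r=0.01, sd_r=0.15, years=50, threshold=50, n_sims=10_000):
    """Fraction of simulated trajectories falling below the threshold."""
    n = np.full(n_sims, float(n0))
    extinct = np.zeros(n_sims, dtype=bool)
    for _ in range(years):
        n *= np.exp(rng.normal(mean_r, sd_r, n_sims))
        extinct |= n < threshold
    return extinct.mean()

for n0 in (200, 500, 2000):
    print(f"N0 = {n0:4d}: P(quasi-extinction in 50 yr) = {extinction_risk(n0):.3f}")
```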
Enrollment in prescription drug insurance: the interaction of numeracy and choice set size.
Szrek, Helena; Bundorf, M Kate
2014-04-01
To determine how choice set size affects decision quality among individuals of different levels of numeracy choosing prescription drug plans. Members of an Internet-enabled panel age 65 and over were randomly assigned to sets of prescription drug plans varying in size from 2 to 16 plans from which they made a hypothetical choice. They answered questions about enrollment likelihood and the costs and benefits of their choice. The measure of decision quality was enrollment likelihood among those for whom enrollment was beneficial. Enrollment likelihood by numeracy and choice set size was calculated. A model of moderated mediation was analyzed to understand the role of numeracy as a moderator of the relationship between the number of plans and the quality of the enrollment decision and the roles of the costs and benefits in mediating that relationship. More numerate adults made better decisions than less numerate adults when choosing among a small number of alternatives but not when choice sets were larger. Choice set size had little effect on decision making of less numerate adults. Differences in decision making costs between more and less numerate adults helped explain the effect of choice set size on decision quality. Interventions to improve decision making in the context of Medicare Part D may differentially affect lower and higher numeracy adults. The conflicting results on choice overload in the psychology literature may be explained in part by differences amongst individuals in how they respond to choice set size.
Monitoring Sediment Size Distributions in a Regulated Gravel-Bed Coastal Stream
NASA Astrophysics Data System (ADS)
O'Connor, M. D.; Lewis, J.; Andrew, G.
2014-12-01
Lagunitas Creek drains 282 km2 in coastal Marin County, California. The watershed contains water supply reservoirs, urban areas, parks and habitat for threatened species (e.g. coho salmon). Water quality is impaired by excess fine sediment, and a plan to improve water quality (i.e. TMDL) was adopted by State authorities in 2014. The TMDL asserts changes in sediment delivery, transport, and storage contributed to the decline of coho. A sediment source analysis found a 2x increase in sediment supply. Concentrations of sand and fine gravel in the channel are elevated and, during high flows, more mobile. The Federal Coho Salmon Recovery Plan (2012) describes sediment conditions affecting coho habitat as "fair". Reservoir managers were directed by the State in 1995 to reduce sedimentation and improve riparian vegetation and woody debris to improve fish habitat. Prior sediment monitoring found variability related primarily to intense winter runoff without identifying clear trends. A new sediment monitoring program was implemented in 2012 for ongoing quantification of sediment conditions. The goal of monitoring is to determine with specified statistical certainty changes in sediment conditions over time and variation among reaches throughout the watershed. Conditions were compared in 3 reaches of Lagunitas Cr. and 2 tributaries. In each of the 5 channel reaches, 4 shorter reaches were sampled in a systematic grid comprised of 30 cross-channel transects spaced at intervals of 1/2 bankfull width and 10 sample points per transect; n=1200 in 5 channel reaches. Sediment diameter class (one clast), sediment facies (a patch descriptor), and habitat type were observed at each point. Fine sediment depth was measured by probing the thickness of the deposit, providing a means to estimate total volume of fine sediment and a measure of rearing habitat occupied by fine sediment (e.g. V*). Sub-surface sediment samples were collected and analyzed for size distribution at two scales: a larger sample of a spawning site in each sample reach and 20 smaller sub-samples of fine sediment facies. These data provide a robust description of streambed sediment conditions (e.g. % < 1 mm) expected to vary systematically across the watershed (e.g. fining downstream) and over time in response to management of watershed resources.
Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas
2014-01-01
Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigated whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
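The mechanism the authors point to can be demonstrated in simulation: if only significant results are "published", a negative correlation between observed effect size and sample size emerges even when the true effect is constant. This sketch illustrates that mechanism; it is not the authors' analysis of their article sample.

```python
# Simulation of the paper's core diagnostic: selecting on significance makes
# effect size and sample size negatively correlated despite a fixed true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_d, studies = 0.2, 5000
n_per_group = rng.integers(10, 200, studies)

d_obs, published = [], []
for n in n_per_group:
    a = rng.normal(true_d, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    t, p = stats.ttest_ind(a, b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d_obs.append((a.mean() - b.mean()) / pooled_sd)
    published.append(p < 0.05 and t > 0)

d_obs, published = np.array(d_obs), np.array(published)
r_all = stats.pearsonr(d_obs, n_per_group)[0]
r_pub = stats.pearsonr(d_obs[published], n_per_group[published])[0]
print(f"r(effect size, n): all studies = {r_all:.2f}, 'published' only = {r_pub:.2f}")
```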
Transition from Forward Smoldering to Flaming in Small Polyurethane Foam Samples
NASA Technical Reports Server (NTRS)
Bar-Ilan, A.; Putzeys, O.; Rein, G.; Fernandez-Pello, A. C.
2004-01-01
Experimental observations are presented of the effect of the flow velocity and oxygen concentration, and of a thermal radiant flux, on the transition from smoldering to flaming in forward smoldering of small samples of polyurethane foam with a gas/solid interface. The experiments are part of a project studying the transition from smolder to flaming under conditions encountered in spacecraft facilities, i.e., microgravity, low velocity variable oxygen concentration flows. Because the microgravity experiments are planned for the International Space Station, the foam samples had to be limited in size for safety and launch mass reasons. The feasible sample size is too small for smolder to self propagate because of heat losses to the surrounding environment. Thus, the smolder propagation and the transition to flaming had to be assisted by reducing the heat losses to the surroundings and increasing the oxygen concentration. The experiments are conducted with small parallelepiped samples vertically placed in a wind tunnel. Three of the sample lateral-sides are maintained at elevated temperature and the fourth side is exposed to an upward flow and to a radiant flux. It is found that decreasing the flow velocity and increasing its oxygen concentration, and/or increasing the radiant flux enhances the transition to flaming, and reduces the delay time to transition. Limiting external ambient conditions for the transition to flaming are reported for the present experimental set-up. The results show that smolder propagation and the transition to flaming can occur in relatively small fuel samples if the external conditions are appropriate. The results also indicate that transition to flaming occurs in the char left behind by the smolder reaction, and it has the characteristics of a gas-phase ignition induced by the smolder reaction, which acts as the source of both gaseous fuel and heat.
Design, analysis and presentation of factorial randomised controlled trials
Montgomery, Alan A; Peters, Tim J; Little, Paul
2003-01-01
Background: The evaluation of more than one intervention in the same randomised controlled trial can be achieved using a parallel group design. However, this requires increased sample size and can be inefficient, especially if there is also interest in considering combinations of the interventions. An alternative may be a factorial trial, where for two interventions participants are allocated to receive neither intervention, one or the other, or both. Factorial trials require special considerations, however, particularly at the design and analysis stages. Discussion: Using a 2 × 2 factorial trial as an example, we present a number of issues that should be considered when planning a factorial trial. The main design issue is that of sample size. Factorial trials are most often powered to detect the main effects of interventions, since adequate power to detect plausible interactions requires greatly increased sample sizes. The main analytical issues relate to the investigation of main effects and the interaction between the interventions in appropriate regression models. Presentation of results should reflect the analytical strategy, with an emphasis on the principal research questions. We also give an example of how baseline and follow-up data should be presented. Lastly, we discuss the implications of the design, analytical and presentational issues covered. Summary: The difficulty of interpreting the results of factorial trials if an influential interaction is observed is the price paid for the potential for efficient, simultaneous consideration of two or more interventions. Factorial trials can in principle be designed to have adequate power to detect realistic interactions, and in any case they are the only design that allows such effects to be investigated. PMID:14633287
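The analytical model referred to is a regression containing both main effects and their interaction. A minimal sketch for a 2 × 2 factorial trial with simulated data and hypothetical variable names:

```python
# Sketch of the analytical model for a 2x2 factorial trial: a regression with
# both main effects and their interaction.  Data and variable names are
# simulated stand-ins for a real trial dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 400
df = pd.DataFrame({
    "A": rng.integers(0, 2, n),      # randomised to intervention A (0/1)
    "B": rng.integers(0, 2, n),      # randomised to intervention B (0/1)
})
# Outcome with main effects of 2 and 3 units and no true interaction
df["y"] = 10 + 2 * df.A + 3 * df.B + rng.normal(0, 5, n)

fit = smf.ols("y ~ A + B + A:B", data=df).fit()
print(fit.params)                    # main effects and the A:B interaction
```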
van den Bosch, Frank; Gottwald, Timothy R.; Alonso Chavez, Vasthi
2017-01-01
The spread of pathogens into new environments poses a considerable threat to human, animal, and plant health, and by extension, human and animal wellbeing, ecosystem function, and agricultural productivity, worldwide. Early detection through effective surveillance is a key strategy to reduce the risk of their establishment. Whilst it is well established that statistical and economic considerations are of vital importance when planning surveillance efforts, it is also important to consider epidemiological characteristics of the pathogen in question—including heterogeneities within the epidemiological system itself. One of the most pronounced realisations of this heterogeneity is seen in the case of vector-borne pathogens, which spread between ‘hosts’ and ‘vectors’—with each group possessing distinct epidemiological characteristics. As a result, an important question when planning surveillance for emerging vector-borne pathogens is where to place sampling resources in order to detect the pathogen as early as possible. We answer this question by developing a statistical function which describes the probability distributions of the prevalences of infection at first detection in both hosts and vectors. We also show how this method can be adapted in order to maximise the probability of early detection of an emerging pathogen within imposed sample size and/or cost constraints, and demonstrate its application using two simple models of vector-borne citrus pathogens. Under the assumption of a linear cost function, we find that sampling costs are generally minimised when either hosts or vectors, but not both, are sampled. PMID:28846676
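A much-simplified version of the allocation question can be written down directly: if host and vector prevalences at the time of sampling are p_h and p_v, the detection probability for a given allocation is P = 1 − (1 − p_h)^n_h (1 − p_v)^n_v, which can be searched over a linear budget. The prevalences and unit costs below are illustrative assumptions, not values from the paper's citrus models.

```python
# Simplified sketch of the host-versus-vector allocation question: maximize
# detection probability under a linear budget.  All parameters are illustrative.
p_h, p_v = 0.02, 0.005               # assumed prevalences in hosts and vectors
cost_h, cost_v, budget = 5.0, 1.0, 500.0

best = (0.0, 0, 0)
for n_h in range(int(budget / cost_h) + 1):
    n_v = int((budget - n_h * cost_h) / cost_v)    # spend the remainder on vectors
    p_detect = 1 - (1 - p_h) ** n_h * (1 - p_v) ** n_v
    if p_detect > best[0]:
        best = (p_detect, n_h, n_v)

p_detect, n_h, n_v = best
print(f"best allocation: {n_h} hosts, {n_v} vectors, P(detect) = {p_detect:.3f}")
```

Consistent with the paper's linear-cost finding, the search typically lands on a corner allocation, i.e. sampling either hosts or vectors but not both.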
Czigány, Zs; Neidhardt, J; Brunell, I F; Hultman, L
2003-04-01
The microstructure of CNx thin films, deposited by reactive magnetron sputtering, was investigated by transmission electron microscopy (TEM) at 200 kV in plan-view and cross-sectional samples. Imaging artefacts arise in high-resolution TEM due to overlap of nm-sized fullerene-like features for specimen thicknesses above 5 nm. The thinnest and apparently artefact-free areas were obtained at the fracture edges of plan-view specimens floated off from NaCl substrates. Cross-sectional samples were prepared by ion-beam milling at low energy to minimize sample preparation artefacts. The depth of the ion-bombardment-induced surface amorphization was determined by TEM cross sections of ion-milled fullerene-like CNx surfaces. The thickness of the damaged surface layer at 5 degrees grazing incidence was 13 and 10 nm at 3 and 0.8 keV, respectively, which is approximately three times larger than that observed on Si prepared under the same conditions. The shallowest damage depth, observed for 0.25 keV, was less than 1 nm. Chemical changes due to N loss and graphitization were also observed by X-ray photoelectron spectroscopy. As a consequence of chemical effects, the sputtering rates of CNx films were similar to that of Si, which enables a relatively fast ion-milling procedure compared to other carbon compounds. No electron beam damage of fullerene-like CNx was observed at 200 kV.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marutzky, Sam; Farnham, Irene
The purpose of the Nevada National Security Site (NNSS) Integrated Sampling Plan (referred to herein as the Plan) is to provide a comprehensive, integrated approach for collecting and analyzing groundwater samples to meet the needs and objectives of the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Field Office (NNSA/NFO) Underground Test Area (UGTA) Activity. Implementation of this Plan will provide high-quality data required by the UGTA Activity for ensuring public protection in an efficient and cost-effective manner. The Plan is designed to ensure compliance with the UGTA Quality Assurance Plan (QAP). The Plan's scope comprises sample collection and analysis requirements relevant to assessing the extent of groundwater contamination from underground nuclear testing. This Plan identifies locations to be sampled by corrective action unit (CAU) and location type, sampling frequencies, sample collection methodologies, and the constituents to be analyzed. In addition, the Plan defines data collection criteria such as well-purging requirements, detection levels, and accuracy requirements; identifies reporting and data management requirements; and provides a process to ensure coordination between NNSS groundwater sampling programs for sampling of interest to UGTA. This Plan does not address compliance with requirements for wells that supply the NNSS public water system or wells involved in a permitted activity.
30 CFR 75.221 - Roof control plan information.
Code of Federal Regulations, 2010 CFR
2010-07-01
... main roof above the coalbed and for distance of at least 10 feet below the coalbed; and (iii) Indicate... the liners or arches. (8) Drawings indicating the planned width of openings, size of pillars, method...) The length, diameter, grade and type of anchorage unit to be used; (ii) The drill hole size to be used...
NASA Astrophysics Data System (ADS)
Raju, C.; Vidya, R.
2017-11-01
The chain sampling plan is widely used whenever a small-sample attributes plan is required for situations involving destructive products coming out of a continuous production process [1, 2]. This paper presents a procedure for the construction and selection of a ChSP-1 plan by attributes inspection based on membership functions [3]. A procedure using a search technique is developed for obtaining the parameters of a single sampling plan for a given set of AQL and LQL values. A sample of tables providing ChSP-1 plans for various combinations of AQL and LQL values is presented [4].
SU-E-T-68: A Quality Assurance System with a Web Camera for High Dose Rate Brachytherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueda, Y; Hirose, A; Oohira, S
Purpose: The purpose of this work was to develop a quality assurance (QA) system for high dose rate (HDR) brachytherapy to verify the absolute position of an 192Ir source in real time and to measure dwell time and position of the source simultaneously with a movie recorded by a web camera. Methods: A web camera was fixed 15 cm above a source position check ruler to monitor and record 30 samples of the source position per second over a range of 8.0 cm, from 1425 mm to 1505 mm. Each frame had a matrix size of 480×640 in the movie. The source position was automatically quantified from the movie using in-house software (built with LabVIEW) that applied a template-matching technique. The source edge detected by the software on each frame was corrected to reduce position errors induced by incident light from an oblique direction. The dwell time was calculated by differential processing to displacement of the source. The performance of this QA system was illustrated by recording simple plans and comparing the measured dwell positions and time with the planned parameters. Results: This QA system allowed verification of the absolute position of the source in real time. The mean difference between automatic and manual detection of the source edge was 0.04 ± 0.04 mm. Absolute position error can be determined within an accuracy of 1.0 mm at dwell points of 1430, 1440, 1450, 1460, 1470, 1480, 1490, and 1500 mm, in three step sizes and dwell time errors, with an accuracy of 0.1% in more than 10.0 sec of planned time. The mean step size error was 0.1 ± 0.1 mm for a step size of 10.0 mm. Conclusion: This QA system provides quick verifications of the dwell position and time, with high accuracy, for HDR brachytherapy. This work was supported by the Japan Society for the Promotion of Science Core-to-Core program (No. 23003)
Flexible sequential designs for multi-arm clinical trials.
Magirr, D; Stallard, N; Jaki, T
2014-08-30
Adaptive designs that are based on group-sequential approaches have the benefit of being efficient as stopping boundaries can be found that lead to good operating characteristics with test decisions based solely on sufficient statistics. The drawback of these so called 'pre-planned adaptive' designs is that unexpected design changes are not possible without impacting the error rates. 'Flexible adaptive designs' on the other hand can cope with a large number of contingencies at the cost of reduced efficiency. In this work, we focus on two different approaches for multi-arm multi-stage trials, which are based on group-sequential ideas, and discuss how these 'pre-planned adaptive designs' can be modified to allow for flexibility. We then show how the added flexibility can be used for treatment selection and sample size reassessment and evaluate the impact on the error rates in a simulation study. The results show that an impressive overall procedure can be found by combining a well chosen pre-planned design with an application of the conditional error principle to allow flexible treatment selection. Copyright © 2014 John Wiley & Sons, Ltd.
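For readers unfamiliar with how such flexibility preserves the error rates, the following is a minimal sketch of an inverse-normal combination test of the kind underlying many flexible two-stage procedures; the stage p-values and weights are illustrative, and this is not the authors' specific method:

```python
# Combine independent one-sided stage-wise p-values with pre-specified
# weights; because the weights are fixed in advance, design changes after
# stage 1 (e.g. treatment selection) do not inflate the type I error.
import math
from scipy.stats import norm

def combination_test(p1, p2, w1=math.sqrt(0.5), w2=math.sqrt(0.5), alpha=0.025):
    """Inverse-normal combination test; requires w1**2 + w2**2 == 1."""
    z = w1 * norm.isf(p1) + w2 * norm.isf(p2)
    return z, z >= norm.isf(alpha)

print(combination_test(0.04, 0.01))  # combined z ~ 2.88 -> reject at 0.025
```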
Brooker, Simon; Kabatereine, Narcis B.; Myatt, Mark; Stothard, J. Russell; Fenwick, Alan
2007-01-01
Summary Rapid and accurate identification of communities at highest risk of morbidity from schistosomiasis is key for sustainable control. Although school questionnaires can effectively and inexpensively identify communities with a high prevalence of Schistosoma haematobium, parasitological screening remains the preferred option for S. mansoni. To help reduce screening costs, we investigated the validity of Lot Quality Assurance Sampling (LQAS) in classifying schools according to categories of S. mansoni prevalence in Uganda, and explored its applicability and cost-effectiveness. First, we evaluated several sampling plans using computer simulation and then field tested one sampling plan in 34 schools in Uganda. Finally, cost-effectiveness of different screening and control strategies (including mass treatment without prior screening) was determined, and sensitivity analysis undertaken to assess the effect of infection levels and treatment costs. In identifying schools with prevalence ≥50%, computer simulations showed that LQAS had high levels of sensitivity and specificity (>90%) at sample sizes <20. The method also provides an ability to classify communities into three prevalence categories. Field testing showed that LQAS where 15 children were sampled had excellent diagnostic performance (sensitivity: 100%, specificity: 96.4%, positive predictive value: 85.7% and negative predictive value: 92.3%). Screening using LQAS was more cost-effective than mass treating all schools (US$ 218 vs. US$ 482 / high prevalence school treated). Threshold analysis indicated that parasitological screening and mass treatment would become equivalent for settings where prevalence exceeds 50% in 75% of schools and for treatment costs of US$ 0.19 per schoolchild. We conclude that, in Uganda, LQAS provides a rapid, valid, and cost-effective method for guiding decision makers in allocating finite resources for the control of schistosomiasis. PMID:15960703
Brooker, Simon; Kabatereine, Narcis B; Myatt, Mark; Russell Stothard, J; Fenwick, Alan
2005-07-01
Rapid and accurate identification of communities at highest risk of morbidity from schistosomiasis is key for sustainable control. Although school questionnaires can effectively and inexpensively identify communities with a high prevalence of Schistosoma haematobium, parasitological screening remains the preferred option for S. mansoni. To help reduce screening costs, we investigated the validity of Lot Quality Assurance Sampling (LQAS) in classifying schools according to categories of S. mansoni prevalence in Uganda, and explored its applicability and cost-effectiveness. First, we evaluated several sampling plans using computer simulation and then field tested one sampling plan in 34 schools in Uganda. Finally, cost-effectiveness of different screening and control strategies (including mass treatment without prior screening) was determined, and sensitivity analysis undertaken to assess the effect of infection levels and treatment costs. In identifying schools with prevalences ≥50%, computer simulations showed that LQAS had high levels of sensitivity and specificity (>90%) at sample sizes <20. The method also provides an ability to classify communities into three prevalence categories. Field testing showed that LQAS where 15 children were sampled had excellent diagnostic performance (sensitivity: 100%, specificity: 96.4%, positive predictive value: 85.7% and negative predictive value: 92.3%). Screening using LQAS was more cost-effective than mass treating all schools (US$218 vs. US$482/high prevalence school treated). Threshold analysis indicated that parasitological screening and mass treatment would become equivalent for settings where prevalence is ≥50% in 75% of schools and for treatment costs of US$0.19 per schoolchild. We conclude that, in Uganda, LQAS provides a rapid, valid and cost-effective method for guiding decision makers in allocating finite resources for the control of schistosomiasis.
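The binomial operating characteristics of an LQAS rule with 15 children per school can be checked in a few lines; the decision threshold below is illustrative, not the one validated in the study:

```python
# Classify a school as "high prevalence" (>=50%) when at least d of n
# sampled children are infected; binomial sampling gives the probability
# of that classification at any true prevalence.
from scipy.stats import binom

def classification_curve(n=15, d=8, prevalences=(0.2, 0.4, 0.5, 0.6, 0.8)):
    """P(classified as high) for each true prevalence."""
    return {p: 1 - binom.cdf(d - 1, n, p) for p in prevalences}

for p, prob in classification_curve().items():
    print(f"true prevalence {p:.0%}: P(classified >=50%) = {prob:.3f}")
```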
On evaluating compliance with air pollution levels not to be exceeded more than once per year
NASA Technical Reports Server (NTRS)
Neustadter, H. E.; Sidik, S. M.
1974-01-01
The adequacy of currently practiced monitoring and data reduction techniques for assessing compliance with 24-hour Air Quality Standards (AQS) not to be exceeded more than once per year is considered. The present situation for suspended particulates is discussed. The following conclusions are reached: (1) For typical less-than-daily sampling (i.e., 60 to 120 24-hour samples per year) the deviation from independence of the data set should not be substantial. (2) The interchange of exponentiation and expectation operations in the EPA data reduction model underestimates the second highest level by about 4 to 8 percent for typical sigma values. (3) Estimates of the second highest pollution level have associated with them a large statistical variability arising from the finite size of the sample. The 0.95 confidence interval ranges from ±40 percent for 120 samples per year to ±84 percent for 30 samples per year. (4) The design value suggested by EPA for abatement and/or control planning purposes typically gives a margin of safety of 60 to 120 percent.
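Conclusion (3) can be reproduced qualitatively by simulation. This sketch assumes lognormal concentrations with an invented dispersion parameter:

```python
# Draw k 24-hour samples per "year" from a lognormal distribution and look
# at how the second-highest value spreads across years: the fewer samples
# per year, the wider the relative spread of this design statistic.
import numpy as np

rng = np.random.default_rng(0)

def second_highest_spread(k, sigma=0.7, n_years=20_000):
    draws = rng.lognormal(mean=0.0, sigma=sigma, size=(n_years, k))
    second = np.sort(draws, axis=1)[:, -2]
    lo, hi = np.percentile(second, [2.5, 97.5])
    med = np.median(second)
    return (lo - med) / med, (hi - med) / med   # relative 95% spread

for k in (30, 60, 120):
    print(k, second_highest_spread(k))
```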
Barnes, Andrew J; Hanoch, Yaniv; Wood, Stacey; Liu, Pi-Ju; Rice, Thomas
2012-08-01
Because many seniors choose Medicare Part D plans offering poorer coverage at greater cost, the authors examined the effect of price frames, brand names, and choice set size on participants' ability to choose the lowest cost plan. A 2×2×2 within-subjects design was used with 126 participants aged 18 to 91 years. Mouselab, a web-based program, allowed participants to choose drug plans across eight trials that varied using numeric or symbolic prices, real or fictitious drug plan names, and three or nine drug plan options. Results from the multilevel models suggest numeric versus symbolic prices decreased the likelihood of choosing the lowest cost plan (−8.0 percentage points, 95% confidence interval = −14.7 to −0.9). The likelihood of choosing the lowest cost plan decreased as the amount of information increased, suggesting that decision cues operated independently and collectively when selecting a drug plan. Redesigning the current Medicare Part D plan decision environment could improve seniors' drug plan choices.
Digital templating for THA: a simple computer-assisted application for complex hip arthritis cases.
Hafez, Mahmoud A; Ragheb, Gad; Hamed, Adel; Ali, Amr; Karim, Said
2016-10-01
Total hip arthroplasty (THA) is the standard procedure for end-stage arthritis of the hip. Its technical success relies on preoperative planning of the surgical procedure and virtual setup of the operative performance. Digital hip templating is one methodology of preoperative planning for THA which requires a digital preoperative radiograph and a computer with special software. This is a prospective study involving 23 patients (25 hips) who were candidates for complex THA surgery (unilateral or bilateral). Digital templating was done by radiographic assessment using radiographic magnification correction, leg length discrepancy and correction measurements, acetabular component and femoral component templating, as well as neck resection measurement. The overall accuracy for templating the stem implant's exact size was 81%. This percentage increased to 94% when considering sizing within 1 size. Digital templating has proven to be an effective, reliable and essential technique for preoperative planning and accurate prediction of THA sizing and alignment.
Tissue Sampling Guides for Porcine Biomedical Models.
Albl, Barbara; Haesner, Serena; Braun-Reichhart, Christina; Streckel, Elisabeth; Renner, Simone; Seeliger, Frank; Wolf, Eckhard; Wanke, Rüdiger; Blutke, Andreas
2016-04-01
This article provides guidelines for organ and tissue sampling adapted to porcine animal models in translational medical research. Detailed protocols for the determination of sampling locations and numbers as well as recommendations on the orientation, size, and trimming direction of samples from ∼50 different porcine organs and tissues are provided in the Supplementary Material. The proposed sampling protocols include the generation of samples suitable for subsequent qualitative and quantitative analyses, including cryohistology, paraffin, and plastic histology; immunohistochemistry; in situ hybridization; electron microscopy; and quantitative stereology as well as molecular analyses of DNA, RNA, proteins, metabolites, and electrolytes. With regard to the planned extent of sampling efforts, time, and personnel expenses, and dependent upon the scheduled analyses, different protocols are provided. These protocols are adjusted for (I) routine screenings, as used in general toxicity studies or in analyses of gene expression patterns or histopathological organ alterations, (II) advanced analyses of single organs/tissues, and (III) large-scale sampling procedures to be applied in biobank projects. Providing a robust reference for studies of porcine models, the described protocols will ensure the efficiency of sampling, the systematic recovery of high-quality samples representing the entire organ or tissue, as well as the intra-/interstudy comparability and reproducibility of results. © The Author(s) 2016.
Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-24
This plan incorporates U.S. Department of Energy (DOE) Office of Legacy Management (LM) standard operating procedures (SOPs) into environmental monitoring activities and will be implemented at all sites managed by LM. This document provides detailed procedures for the field sampling teams so that samples are collected in a consistent and technically defensible manner. Site-specific plans (e.g., long-term surveillance and maintenance plans, environmental monitoring plans) document background information and establish the basis for sampling and monitoring activities. Information will be included in site-specific tabbed sections to this plan, which identify sample locations, sample frequencies, types of samples, field measurements, and associated analytes for each site. Additionally, within each tabbed section, program directives will be included, when developed, to establish additional site-specific requirements to modify or clarify requirements in this plan as they apply to the corresponding site. A flowchart detailing project tasks required to accomplish routine sampling is displayed in Figure 1. LM environmental procedures are contained in the Environmental Procedures Catalog (LMS/PRO/S04325), which incorporates American Society for Testing and Materials (ASTM), DOE, and U.S. Environmental Protection Agency (EPA) guidance. Specific procedures used for groundwater and surface water monitoring are included in Appendix A. If other environmental media are monitored, SOPs used for air, soil/sediment, and biota monitoring can be found in the site-specific tabbed sections in Appendix D or in site-specific documents. The procedures in the Environmental Procedures Catalog are intended as general guidance and require additional detail from planning documents in order to be complete; the following sections fulfill that function and specify additional procedural requirements to form SOPs. Routine revision of this Sampling and Analysis Plan will be conducted annually at the beginning of each fiscal year, when attachments in Appendix D, including program directives and sampling location/analytical tables, will be reviewed by project personnel and updated. The sampling location/analytical tables in Appendix D, however, may have interim updates according to project direction that are not reflected in this plan. Deviations from location/analytical tables in Appendix D prior to sampling will be documented in project correspondence (e.g., startup letters). If significant changes to other aspects of this plan are required before the annual update, then the plan will be revised as needed.
NASA Astrophysics Data System (ADS)
Szadkowski, Zbigniew
2016-06-01
The paper presents first results from the Front-End Board (FEB) with the largest Cyclone® V E FPGA (5CEFA9F31I7N), supporting 8 channels sampled at up to 250 MSps with 14-bit resolution. The sampling rate considered for the planned upgrade of the Pierre Auger surface detector array is 120 MSps; however, the FEB has been developed with external anti-aliasing filters to keep maximal flexibility. Six channels are targeted at the surface detector (SD); the remaining two are for other experiments such as the Auger Engineering Radio Array and additional muon counters. More channels and higher sampling rates increase the size of registered events. We used the standard radio channel for radio transmission from the detectors to the Central Data Acquisition Station (CDAS) to avoid, at present, a significant modification of the software on both sides, the detector and the CDAS (planned in the future for the final design). Several variants of the FPGA code were tested for 120, 160, 200 and even 240 MSps DAQ. Tests confirmed the stability and reliability of the FEB design in real pampas conditions, with daily temperature variations of more than 40°C and strong sun exposure, on a limited power budget from a single solar panel. Seven FEBs have been deployed in a hexagon of test detectors in a dedicated Engineering Array.
Evolving directions in NASA's planetary rover requirements and technology
NASA Astrophysics Data System (ADS)
Weisbin, C. R.; Montemerlo, Mel; Whittaker, W.
1993-10-01
This paper reviews the evolution of NASA's planning for planetary rovers (i.e. robotic vehicles which may be deployed on planetary bodies for exploration, science analysis, and construction) and some of the technology that has been developed to achieve the desired capabilities. The program is comprised of a variety of vehicle sizes and types in order to accommodate a range of potential user needs. This includes vehicles whose weight spans a few kilograms to several thousand kilograms; whose locomotion is implemented using wheels, tracks, and legs; and whose payloads vary from microinstruments to large scale assemblies for construction. We first describe robotic vehicles, and their associated control systems, developed by NASA in the late 1980's as part of a proposed Mars Rover Sample Return (MRSR) mission. Suggested goals at that time for such an MRSR mission included navigating for one to two years across hundreds of kilometers of Martian surface; traversing a diversity of rugged, unknown terrain; collecting and analyzing a variety of samples; and bringing back selected samples to the lander for return to Earth. Subsequently, we present the current plans (considerably more modest) which have evolved both from technological 'lessons learned' in the previous period, and modified aspirations of NASA missions. This paper describes some of the demonstrated capabilities of the developed machines and the technologies which made these capabilities possible.
40 CFR 265.92 - Sampling and analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 25 2010-07-01 2010-07-01 false Sampling and analysis. 265.92 Section... FACILITIES Ground-Water Monitoring § 265.92 Sampling and analysis. (a) The owner or operator must obtain and... follow a ground-water sampling and analysis plan. He must keep this plan at the facility. The plan must...
Measuring selected PPCPs in wastewater to estimate the population in different cities in China.
Gao, Jianfa; O'Brien, Jake; Du, Peng; Li, Xiqing; Ort, Christoph; Mueller, Jochen F; Thai, Phong K
2016-10-15
Sampling and analysis of wastewater from municipal wastewater treatment plants (WWTPs) has become a useful tool for understanding exposure to chemicals. Both wastewater-based studies and catchment management and planning require information on the catchment population at the time of monitoring. Recently, a model was developed and calibrated using selected pharmaceutical and personal care products (PPCPs) measured in influent wastewater for estimating population in different catchments in Australia. The present study aimed at evaluating the feasibility of utilizing this population estimation approach in China. Twenty-four hour composite influent samples were collected from 31 WWTPs in 17 cities, with catchment sizes of 200,000 to 3,450,000 people, representing all seven regions of China. The samples were analyzed for 19 PPCPs using liquid chromatography coupled to tandem mass spectrometry in direct injection mode. Eight chemicals were detected in more than 50% of the samples. Significant positive correlations were found between individual PPCP mass loads and population estimates provided by WWTP operators. Using the PPCP mass load modeling approach calibrated with WWTP operator data, we estimated the population size of each catchment in good agreement with WWTP operator values (between 50-200% for all sites and 75-125% for 23 of the 31 sites). Overall, despite much lower detection frequencies and relatively high heterogeneity in PPCP consumption across China, the model provided a good estimate of the population contributing to a given wastewater sample. Wastewater analysis could also provide objective PPCP consumption status in China. Copyright © 2016 Elsevier B.V. All rights reserved.
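The load-based population estimate itself reduces to simple arithmetic (a sketch; the per-capita excretion figure below is a placeholder, not a value from the paper):

```python
# People served = daily chemical mass load in influent divided by the
# average mass each person contributes per day for that chemical.
def population_from_load(load_mg_per_day, per_capita_mg_per_day):
    return load_mg_per_day / per_capita_mg_per_day

# e.g. 1.5 kg/day of a PPCP at an assumed 3 mg/person/day excretion rate
print(population_from_load(1.5e6, 3.0))   # -> 500,000 people
```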
Validation of PCR methods for quantitation of genetically modified plants in food.
Hübner, P; Waiblinger, H U; Pietsch, K; Brodmann, P
2001-01-01
For enforcement of the recently introduced labeling threshold for genetically modified organisms (GMOs) in food ingredients, quantitative detection methods such as quantitative competitive (QC-PCR) and real-time PCR are applied by official food control laboratories. The experiences of 3 European food control laboratories in validating such methods were compared to describe realistic performance characteristics of quantitative PCR detection methods. The limit of quantitation (LOQ) of GMO-specific, real-time PCR was experimentally determined to reach 30-50 target molecules, which is close to theoretical prediction. Starting PCR with 200 ng genomic plant DNA, the LOQ depends primarily on the genome size of the target plant and ranges from 0.02% for rice to 0.7% for wheat. The precision of quantitative PCR detection methods, expressed as relative standard deviation (RSD), varied from 10 to 30%. Using Bt176 corn containing test samples and applying Bt176 specific QC-PCR, mean values deviated from true values by −7 to 18%, with an average of 2 ± 10%. Ruggedness of real-time PCR detection methods was assessed in an interlaboratory study analyzing commercial, homogeneous food samples. Roundup Ready soybean DNA contents were determined in the range of 0.3 to 36%, relative to soybean DNA, with RSDs of about 25%. Taking the precision of quantitative PCR detection methods into account, suitable sample plans and sample sizes for GMO analysis are suggested. Because quantitative GMO detection methods measure GMO contents of samples in relation to reference material (calibrants), high priority must be given to international agreements and standardization on certified reference materials.
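The genome-size dependence of the LOQ follows from copy-number arithmetic. In this worked sketch the 1C genome masses are rough literature values and the copy number is taken from the 30-50 molecule range above, so the outputs only approximate the quoted 0.02-0.7% range:

```python
# LOQ as a percentage of input DNA: copies at the LOQ times the mass of one
# haploid genome, divided by the total DNA put into the PCR (200 ng).
GENOME_1C_PG = {"rice": 0.5, "wheat": 17.0}   # approximate pg per 1C genome

def loq_percent(crop, copies=50, input_dna_pg=200_000):
    return 100 * copies * GENOME_1C_PG[crop] / input_dna_pg

for crop in GENOME_1C_PG:
    print(crop, f"{loq_percent(crop):.3f} %")   # rice ~0.013%, wheat ~0.43%
```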
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.
Sample Size Estimation: The Easy Way
ERIC Educational Resources Information Center
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
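Two quick-estimate formulas of the kind the article describes fit in a few lines (a generic sketch, not the article's own material):

```python
# Quick sample size estimates: (1) estimating a proportion to within a
# given margin of error; (2) per-group size for comparing two means with a
# standardized difference d (normal approximation to the two-sample t-test).
from scipy.stats import norm

def n_proportion(margin, p=0.5, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)
    return z ** 2 * p * (1 - p) / margin ** 2

def n_per_group(d, alpha=0.05, power=0.8):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * z ** 2 / d ** 2

print(round(n_proportion(0.05)))   # ~384 for a +/-5% margin
print(round(n_per_group(0.5)))     # ~63 per group for d = 0.5
```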
A contemporary decennial global Landsat sample of changing agricultural field sizes
NASA Astrophysics Data System (ADS)
White, Emma; Roy, David
2014-05-01
Agriculture has caused significant human-induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite-based agricultural applications are less reliable when the sensor spatial resolution is small relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity, with impacts on biodiversity, habitat, soil erosion and plant-pollinator interactions, and on the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provides the longest record of global land observations, with 30 m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by historic patterns of LCLU (Albania, France and India). Landsat images sensed in two time periods, up to 25 years apart, are used to extract field object classifications at each hotspot using a multispectral image segmentation approach. The field size distributions for the two periods are compared statistically, quantifying examples of significantly increasing field size associated primarily with agricultural technological innovation (Argentina and U.S.) and decreasing field size associated with rapid societal changes (Albania and Zimbabwe). The implications of this research, and the potential of higher spatial resolution data from planned global coverage satellites, to provide improved agricultural monitoring are discussed.
The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education
ERIC Educational Resources Information Center
Slavin, Robert; Smith, Dewi
2009-01-01
Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…
Process R&D for Particle Size Control of Molybdenum Oxide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Sujat; Dzwiniel, Trevor; Pupek, Krzysztof
The primary goal of this study was to produce MoO3 powder with a particle size range of 50 to 200 μm for use in targets for production of the medical isotope 99Mo. Molybdenum metal powder is commercially produced by thermal reduction of oxides in a hydrogen atmosphere. The most common source material is MoO3, which is derived by the thermal decomposition of ammonium heptamolybdate (AHM). However, the particle size of the currently produced MoO3 is too small, resulting in Mo powder that is too fine to properly sinter and press into the desired target. In this study, effects of heating rate, heating temperature, gas type, gas flow rate, and isothermal heating were investigated for the decomposition of AHM. The main conclusions were as follows: lower heating rates (2-10°C/min) minimize breakdown of aggregates, recrystallized samples with millimeter-sized aggregates are resistant to various heat treatments, extended isothermal heating at >600°C leads to significant sintering, and inert gas and high gas flow rates (up to 2000 ml/min) did not significantly affect particle size distribution or composition. In addition, attempts to recover AHM from an aqueous solution by several methods (spray drying, precipitation, and low temperature crystallization) failed to achieve the desired particle size range of 50 to 200 μm. Further studies are planned.
Male fertility attitudes: a neglected dimension in Nigerian fertility research.
Adamchak, D J; Adebayo, A
1987-01-01
A sample of 202 male Nigerians enrolled in colleges and graduate schools in the state of Kansas was surveyed to determine their perceptions of population problems in Nigeria; attitudes toward family planning, divorce, and male children; and attitudes toward family size. A major limitation of Nigerian-based fertility research has been the neglect of the role of men in couples' reproductive behavior. The majority of Nigerian students surveyed in this study did not think overpopulation is an impending crisis in Nigeria: 40% thought there are just enough people and 13% indicated there are not enough people. 53% supported the concept of a government population policy, but 67% felt the government should not interfere with family size decisions. Although 84% endorsed the idea that family planning services and information should be available, 69% felt women should not practice family planning without the consent of their husbands. 43% believed a man should divorce his wife if the woman is infertile, unable to produce a male child, or unable to bear the number of children demanded by her husband; in addition, 35% indicated a man should marry a second wife or continue to have children if the couple has 5 daughters and no son. In terms of the value of children, 62% stated that children are wealth or better than wealth, whereas 38% claimed that children use up wealth. Duration of stay in the US was inversely correlated with the number of children considered too many, and the number of male children already born was an important determinant of future family size expectations. In general, it appears that level of education and exposure to US standards do not have a major impact on fertility values among Nigerians, particularly the desire for male children. Educated Nigerian men are an important target for population education, however, because they dominate and control many of the structural, behavioral, and cultural dimensions of fertility behavior.
SU-F-J-23: Field-Of-View Expansion in Cone-Beam CT Reconstruction by Use of Prior Information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haga, A; Magome, T; Nakano, M
Purpose: Cone-beam CT (CBCT) has become an integral part of online patient setup in image-guided radiation therapy (IGRT). In addition, the utility of CBCT for dose calculation has actively been investigated. However, the limited size of the field-of-view (FOV) and the resulting CBCT image, with a lack of the peripheral area of the patient body, prevent reliable dose calculation. In this study, we aim to develop an FOV-expanded CBCT in an IGRT system to allow dose calculation. Methods: Three lung cancer patients were selected in this study. We collected the cone-beam projection images in the CBCT-based IGRT system (X-ray Volume Imaging unit, ELEKTA), where the FOV size of the CBCT provided with these projections was 410 × 410 mm². Using these projections, a CBCT with a size of 728 × 728 mm² was reconstructed by an a posteriori estimation algorithm including prior image constrained compressed sensing (PICCS). The treatment planning CT was used as the prior image. To assess the effectiveness of FOV expansion, a dose calculation was performed on the expanded CBCT image with the region-of-interest (ROI) density mapping method, and it was compared with that of the treatment planning CT as well as that of a CBCT reconstructed by the filtered back projection (FBP) algorithm. Results: The a posteriori estimation algorithm with PICCS clearly visualized the area outside the normal FOV, whereas the FBP algorithm yielded severe streak artifacts outside the normal FOV due to under-sampling. The dose calculation result using the expanded CBCT agreed with that using the treatment planning CT very well; the maximum dose difference was 1.3% for gross tumor volumes. Conclusion: With an a posteriori estimation algorithm, the FOV in CBCT can be expanded. Dose comparison results suggested that the use of expanded CBCTs is acceptable for dose calculation in adaptive radiation therapy. This study has been supported by KAKENHI (15K08691).
Geologic Studies in Support of Manned Martian Exploration
NASA Astrophysics Data System (ADS)
Frix, Perry; McCloskey, Katherine; Neakrase, Lynn D. V.; Greeley, Ronald
1999-01-01
With the advent of space exploration in the middle part of this century, Mars has become a tangible target for manned space flight missions in the upcoming decades. The goals of Mars exploration focus mainly on the presence of water and the geologic features associated with it. To explore the feasibility of a manned mission, a field analog project was conducted. The project began by examining a series of aerial photographs representing "descent" spacecraft images. From the photographs, the local and regional geology of the two "landing" sites was determined and several "targets of interest" were chosen. The targets were prioritized based on relevance to achieving the goals of the project and Mars exploration. Traverses to each target, as well as measurements and sample collections, were planned, and a timeline for the exercise was created. From this it was found that for any mission to be successful, a balance must be found between keeping to the planned timeline and impromptu revision of the mission to allow for conflicts, problems and other adjustments made necessary by the greater information gathered upon arrival at the landing site. At the conclusion of the field exercise, it was determined that a valuable resource for mission planning is high resolution remote sensing of the landing area. This led us to conduct a study to determine what ranges of resolution are necessary to observe geologic features important to achieving the goals of Mars exploration. The procedure used involved degrading a set of images to differing resolutions, which were then examined to determine what features could be seen and interpreted. The features were rated for recognizability, the results were tabulated, and a minimum necessary resolution was determined. Our study found that for the streams, boulders, bedrock, and volcanic features that we observed, a resolution of at least 1 meter/pixel is necessary. We note, though, that this resolution depends on the size of the feature being observed, and thus for Mars the resolution may be lower due to the larger size of some features. With this new information, we then examined the highest resolution images taken to date by the Mars Orbital Camera on board the Mars Global Surveyor, and planned a manned mission. We chose our site keeping in mind the goals for Mars exploration, then determined the local and regional geology of the "landing" area. Prioritization was then done on the geologic features seen, and traverses were planned to various "targets of interest". A schedule for each traverse stop, including what measurements and samples were to be taken, was created, along with a timeline for the mission that allowed ample time for revisions of plans, new discoveries, and possible complications.
Phylogenetic effective sample size.
Bartoszek, Krzysztof
2016-10-21
In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an already present concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or the effective number of species. Lastly, I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
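One common way to define an effective sample size for correlated observations goes through the inverse of their correlation matrix; the sketch below illustrates that flavor of definition (an assumption for illustration only; the paper's regression effective sample size is defined differently):

```python
# For a correlation matrix R of the tips, n_e = 1' R^{-1} 1: it equals n
# for independent tips and shrinks toward 1 as the tips become more
# strongly correlated (e.g. closely related species).
import numpy as np

def effective_sample_size(R):
    ones = np.ones(R.shape[0])
    return ones @ np.linalg.solve(R, ones)

R = np.array([[1.0, 0.8, 0.8],
              [0.8, 1.0, 0.8],
              [0.8, 0.8, 1.0]])   # three strongly related tips
print(effective_sample_size(R))  # ~1.15, far below the nominal n = 3
```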
SU-F-T-443: Quantification of Dosimetric Effects of Dental Metallic Implant On VMAT Plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, C; Jiang, W; Feng, Y
Purpose: To evaluate the dosimetric impact of a metallic implant, as it correlates with the size of targets and metallic implants and the distance in between, on volumetric-modulated arc therapy (VMAT) plans for head and neck (H&N) cancer patients with dental metallic implants. Methods: CT images of H&N cancer patients with dental metallic implants were used. Target volumes with different sizes and locations were contoured. Metal artifact regions excluding surrounding critical organs were outlined and assigned CT numbers close to water (0 HU). VMAT plans with a half arc, one full arc and two full arcs were constructed, and the same plans were applied to structure sets with and without CT number assignment of the metal artifact regions and compared. D95% was utilized to investigate PTV dose coverage, and SNC Patient software was used for the analysis of dose distribution differences slice by slice. Results: For different target sizes, the variation of PTV dose coverage (ΔD95%) with and without CT number replacement decreased with larger target volume for all half-arc, one-arc and two-arc VMAT plans, even though there were no clinically significant differences. Additionally, there were no significant variations of the maximum percent difference (max. %diff) of the dose distribution. With regard to the target location, ΔD95% and max. %diff dropped with increasing distance between the target and the metallic implant. Furthermore, half-arc plans showed greater impact than one-arc plans, and two-arc plans had the smallest influence on PTV dose coverage and dose distribution. Conclusion: The target size has less correlation with the dosimetric impact than the target location relative to the metallic implant. Plans with more arcs alleviate the dosimetric effect of metal artifacts because of less contribution to the target dose from beams going through the regions with metallic artifacts. An incorrect CT number causes an inaccurate dose distribution; therefore, appropriately overwriting metallic artifact regions with reasonable CT numbers is recommended. More patient data are collected and under further analysis.
Paradigms for adaptive statistical information designs: practical experiences and strategies.
Wang, Sue-Jane; Hung, H M James; O'Neill, Robert
2012-11-10
In the last decade or so, interest in adaptive design clinical trials has gradually been directed towards their use in regulatory submissions by pharmaceutical drug sponsors to evaluate investigational new drugs. Methodological advances in adaptive designs have been abundant in the statistical literature since the 1970s. The adaptive design paradigm has been enthusiastically perceived to increase efficiency and to be more cost-effective than the fixed design paradigm for drug development. Much interest in adaptive designs is in studies with two stages, where stage 1 is exploratory and stage 2 depends upon stage 1 results, but where the data of both stages will be combined to yield statistical evidence for use as that of a pivotal registration trial. It was not until the recent release of the US Food and Drug Administration Draft Guidance for Industry on Adaptive Design Clinical Trials for Drugs and Biologics (2010) that the boundaries of flexibility for adaptive designs were specifically considered for regulatory purposes, including what are exploratory goals and what are the goals of adequate and well-controlled (A&WC) trials (2002). The guidance carefully described these distinctions in an attempt to minimize the confusion between the goals of preliminary learning phases of drug development, which are inherently substantially uncertain, and the definitive inference-based phases of drug development. In this paper, in addition to discussing some aspects of adaptive designs in a confirmatory study setting, we underscore the value of adaptive designs when used in exploratory trials to improve the planning of subsequent A&WC trials. One type of adaptation that is receiving attention is the re-estimation of the sample size during the course of the trial. We refer to this type of adaptation as an adaptive statistical information design. Specifically, a case example is used to illustrate how challenging it is to plan a confirmatory adaptive statistical information design. We highlight the substantial risk of planning the sample size for confirmatory trials when information is very uninformative and stipulate the advantages of adaptive statistical information designs for planning exploratory trials. Practical experiences and strategies, as lessons learned from more recent adaptive design proposals, will be discussed to pinpoint the improved utility of adaptive design clinical trials and their potential to increase the chance of a successful drug development. Published 2012. This article is a US Government work and is in the public domain in the USA.
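Sample size re-estimation is often driven by conditional power at an interim look; the following generic sketch shows that calculation (not the case example in the paper; the interim values are invented):

```python
# Conditional power of a one-sided test, treating the interim estimate as
# the true drift of the underlying Brownian-motion test statistic: with
# interim z1 at information fraction t, CP = Phi((z1/sqrt(t) - za)/sqrt(1-t)).
import numpy as np
from scipy.stats import norm

def conditional_power(z1, t, alpha=0.025):
    za = norm.ppf(1 - alpha)
    return norm.cdf((z1 / np.sqrt(t) - za) / np.sqrt(1 - t))

for z1 in (0.8, 1.5, 2.0):   # weak, middling, strong interim results
    print(z1, round(conditional_power(z1, t=0.5), 3))
```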
Sensor Technologies for Particulate Detection and Characterization
NASA Technical Reports Server (NTRS)
Greenberg, Paul S.
2008-01-01
Planned Lunar missions have resulted in renewed attention to problems attributable to fine particulates. While the difficulties experienced during the sequence of Apollo missions did not prove critical in all cases, the comparatively long duration of impending missions may present a different situation. This situation creates the need for a spectrum of particulate sensing technologies. From a fundamental perspective, an improved understanding of the properties of the dust fraction is required. Described here is laboratory-based reference instrumentation for the measurement of fundamental particle size distribution (PSD) functions from 2.5 nanometers to 20 micrometers. Concomitant efforts for separating samples into fractional size bins are also presented. A requirement also exists for developing mission compatible sensors. Examples include provisions for air quality monitoring in spacecraft and remote habitation modules. Required sensor attributes such as low mass, volume, and power consumption, autonomy of operation, and extended reliability cannot be accommodated by existing technologies.
Methods for the behavioral, educational, and social sciences: an R package.
Kelley, Ken
2007-11-01
Methods for the Behavioral, Educational, and Social Sciences (MBESS; Kelley, 2007b) is an open source package for R (R Development Core Team, 2007b), an open source statistical programming language and environment. MBESS implements methods that are not widely available elsewhere, yet are especially helpful for the idiosyncratic techniques used within the behavioral, educational, and social sciences. The major categories of functions are those that relate to confidence interval formation for noncentral t, F, and χ² parameters, confidence intervals for standardized effect sizes (which require noncentral distributions), and sample size planning issues from the power analytic and accuracy in parameter estimation perspectives. In addition, MBESS contains collections of other functions that should be helpful to substantive researchers and methodologists. MBESS is a long-term project that will continue to be updated and expanded so that important methods can continue to be made available to researchers in the behavioral, educational, and social sciences.
ICP Corporate Customer Assessment - Sampling Plan
1995-07-01
Lead Analyst: Lieutenant Commander William J. Wilkinson, USN. Associate Analyst: Mr. Henry J... The project developed a plan for conducting recurring surveys of Defense Logistics Agency customers, in support of the DLA Corporate Customer Assessment... Team. The primary product was a sampling plan, including stratification of customers by Military Service or Federal Agency and by commodity purchased
The endothelial sample size analysis in corneal specular microscopy clinical examinations.
Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci
2012-05-01
To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells, called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors based on the Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
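The abstract does not give the formula behind the customized sample size, so the following textbook relation between relative error and sample size is an assumption on my part, with an illustrative CV value:

```python
# If cell measurements have coefficient of variation cv, the relative error
# of the sample mean at confidence level conf is z*cv/sqrt(n), so hitting a
# target relative error requires n = (z*cv/target_re)**2 cells.
from scipy.stats import norm

def customized_sample_size(cv, target_re=0.05, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)
    return (z * cv / target_re) ** 2

print(round(customized_sample_size(cv=0.25)))   # ~96 cells for CV = 0.25
```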
Brown, Gary S; Betty, Rita G; Brockmann, John E; Lucero, Daniel A; Souza, Caroline A; Walsh, Kathryn S; Boucher, Raymond M; Tezak, Matthew S; Wilson, Mollye C
2007-07-01
Vacuum filter socks were evaluated for recovery efficiency of powdered Bacillus atrophaeus spores from two non-porous surfaces (stainless steel and painted wallboard) and two porous surfaces (carpet and bare concrete). Two surface coupons were positioned side-by-side and seeded with aerosolized Bacillus atrophaeus spores. One of the surfaces, a stainless steel reference coupon, was sized to fit into a sample vial for direct spore removal, while the other surface, a sample surface coupon, was sized for a vacuum collection application. Deposited spore material was directly removed from the reference coupon surface and cultured for enumeration of colony forming units (CFU), while deposited spore material was collected from the sample coupon using the vacuum filter sock method, extracted by sonication and cultured for enumeration. Recovery efficiency, which is a measure of overall transfer effectiveness from the surface to culture, was calculated as the number of CFU enumerated from the filter sock sample per unit area relative to the number of CFU enumerated from the co-located reference coupon per unit area. The observed mean filter sock recovery efficiency from stainless steel was 0.29 (SD = 0.14, n = 36), from painted wallboard was 0.25 (SD = 0.15, n = 36), from carpet was 0.28 (SD = 0.13, n = 40) and from bare concrete was 0.19 (SD = 0.14, n = 44). Vacuum filter sock quantitative limits of detection for recovery were estimated at 105 CFU m⁻² from stainless steel and carpet, 120 CFU m⁻² from painted wallboard and 160 CFU m⁻² from bare concrete. The method recovery efficiency and limits of detection established in this work provide useful guidance for the planning of incident response environmental sampling for biological agents such as Bacillus anthracis.
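The recovery-efficiency definition reduces to a ratio of surface densities (the example values are illustrative, not the study's raw data):

```python
# Recovery efficiency = CFU per unit area recovered by the filter sock,
# relative to CFU per unit area recovered directly from the reference coupon.
def recovery_efficiency(cfu_sample, area_sample, cfu_reference, area_reference):
    return (cfu_sample / area_sample) / (cfu_reference / area_reference)

# e.g. 58 CFU from a 100 cm^2 vacuumed coupon vs. 200 CFU from a 100 cm^2
# reference coupon gives 0.29, matching the stainless steel mean above.
print(recovery_efficiency(58, 100, 200, 100))
```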
Fixed precision sampling plans for white apple leafhopper (Homoptera: Cicadellidae) on apple.
Beers, Elizabeth H; Jones, Vincent P
2004-10-01
Constant precision sampling plans for the white apple leafhopper, Typhlocyba pomaria McAtee, were developed so that it could be used as an indicator species for system stability as new integrated pest management programs without broad-spectrum pesticides are developed. Taylor's power law was used to model the relationship between the mean and the variance, and Green's constant precision sequential sampling equation was used to develop sampling plans. Bootstrap simulations of the sampling plans showed greater precision (D = 0.25) than the desired precision (D0 = 0.3), particularly at low mean population densities. We found that by adjusting the D0 value in Green's equation to 0.4, we were able to reduce the average sample number by 25% while providing an average D = 0.31. The sampling plan described allows T. pomaria to be used as a reasonable indicator species of agroecosystem stability in Washington apple orchards.
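Green's fixed-precision stopping line can be coded directly from the Taylor's power law parameters; in this sketch a, b, D and the counts are hypothetical, not the values fitted for T. pomaria:

```python
# Taylor's power law: variance = a * mean**b. Green's plan stops sampling
# once the cumulative count T_n reaches the line
#   T_n = (D**2 / a)**(1 / (b - 2)) * n**((b - 1) / (b - 2)),
# at which point the mean is estimated with relative precision D.
import numpy as np

def green_stop_line(n, a, b, D):
    return (D ** 2 / a) ** (1 / (b - 2)) * n ** ((b - 1) / (b - 2))

a, b, D = 2.0, 1.4, 0.3                          # assumed coefficients, precision
counts = [3, 1, 4, 2, 5, 3, 6, 2, 4, 3, 5, 2]    # leafhoppers per sample unit
for n, t in enumerate(np.cumsum(counts), start=1):
    line = green_stop_line(n, a, b, D)
    print(n, t, round(line, 1), "stop" if t >= line else "continue")
```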
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whicker, Jeffrey Jay; Gillis, Jessica Mcdonnel; Ruedig, Elizabeth
This report summarizes the sampling design used and the associated statistical assumptions, as well as general guidelines for conducting post-sampling data analysis. Sampling plan components presented here include how many sampling locations to choose and where within the sampling area to collect those samples. The type of medium to sample (i.e., soil, groundwater, etc.) and how to analyze the samples (in-situ, fixed laboratory, etc.) are addressed in other sections of the sampling plan.
Metam Sodium and Metam Potassium Fumigant Management Plan Phase 2 Templates
These templates provide a framework for structuring and reporting a plan for this type of pesticide product. Required data fields include application block size, buffer zone signage, soil conditions, and tarp plans.
Public Opinion Polls, Chicken Soup and Sample Size
ERIC Educational Resources Information Center
Nguyen, Phung
2005-01-01
Cooking and tasting chicken soup in three pots of very different sizes serves to demonstrate that it is the absolute sample size that matters most in determining the accuracy of a poll's findings, not the relative sample size, i.e., the size of the sample in relation to its population.
NASA Astrophysics Data System (ADS)
Kangloan, Pichet; Chayaburakul, Kanokporn; Santiboon, Toansakul
2018-01-01
The aims of this research study were 1) to develop students' learning achievement in a biology course on the foundational cell unit, and 2) to examine students' satisfaction with learning activities delivered through mixed media following an internet-based multi-instruction approach. A sample of 17 tenth-grade students at Rangsit University Demonstration School, selected by cluster random sampling, participated in the first semester of the 2014 academic year. Instruction followed three lesson plans built on the 5-Step Ladder Learning Management Plan (LLMP): one on cellular equilibrium, one on cell-to-cell communication, and one on cell division. Learning achievement was assessed with the 30-item Assessment of Learning Biology Test (ALBT), satisfaction was measured with the 20-item Questionnaire on Students' Satisfaction (QSS), and learning activities were delivered with the purpose-designed Mixed Media Internet-Based Instruction (MMIBI) on the foundational cell unit. The results show that students' post-learning achievement was significantly higher than their pre-learning outcomes (significant at the .05 level), and that students' satisfaction with the biology class taught through the mixed media was at the highest level, with an average mean score of 4.59.
Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.
Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto
2007-07-01
To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of literature published in 2005. The frequency of reporting sample size calculations and the sample sizes themselves were extracted from the published literature. A manual search of the five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 were studies on diagnostic accuracy. One study reported that its sample size was calculated before initiating the study; another reported considering sample size without a calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies considered sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
Planning and Equipping Industrial Arts Facilities.
ERIC Educational Resources Information Center
Maine State Dept. of Educational and Cultural Services, Augusta. Bureau of Vocational Education.
Architectural details, planning, and facility guidelines for industrial arts facilities are given, with data on planning the number, shape, size, and location of school shops. Industrial arts programming and performance criteria for varying levels of education are discussed with regard to the different shop curricula. The facility planning is…
Rigamonti, Ivo E; Brambilla, Carla; Colleoni, Emanuele; Jermini, Mauro; Trivellone, Valeria; Baumgärtner, Johann
2016-04-01
The paper deals with the study of the spatial distribution and the design of sampling plans for estimating nymph densities of the grape leafhopper Scaphoideus titanus Ball in vine plant canopies. In a reference vineyard sampled for model parameterization, leaf samples were repeatedly taken according to a multistage, stratified, random sampling procedure, and the data were subjected to an ANOVA. There were no significant differences in density either among the strata within the vineyard or between the two strata with basal and apical leaves. The significant differences between densities on trunk and productive shoots led to the adoption of two-stage (leaves and plants) and three-stage (leaves, shoots, and plants) sampling plans for trunk shoot- and productive shoot-inhabiting individuals, respectively. The mean crowding to mean relationship used to analyze the nymphs' spatial distribution revealed aggregated distributions. In both the enumerative and the sequential enumerative sampling plans, the number of leaves of trunk shoots, and of leaves and shoots of productive shoots, was kept constant while the number of plants varied. Data collected in additional vineyards were used to test the applicability of the distribution model and the sampling plans. The tests confirmed the applicability 1) of the mean crowding to mean regression model on the plant and leaf stages for trunk shoot-inhabiting distributions, and on the plant, shoot, and leaf stages for productive shoot-inhabiting nymphs, 2) of the enumerative sampling plan, and 3) of the sequential enumerative sampling plan. In general, sequential enumerative sampling was more cost efficient than enumerative sampling.
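The mean crowding to mean regression at the heart of this design (Iwao's patchiness regression) and the enumerative sample size it implies can be sketched as follows; the counts, fitted coefficients, and precision level are invented for illustration:

```python
import numpy as np

# Iwao's patchiness regression: mean crowding m* = m + (s^2/m - 1) regressed
# on the mean m as m* = alpha + beta * m; aggregation shows up as beta > 1.

counts_per_unit = [np.array(c) for c in (
    [0, 1, 3, 0, 2], [4, 6, 1, 3, 5], [0, 0, 1, 2, 0], [7, 2, 9, 4, 6])]

means = np.array([c.mean() for c in counts_per_unit])
mstar = np.array([m + (c.var(ddof=1) / m - 1.0) for c, m in zip(counts_per_unit, means)])
beta, alpha = np.polyfit(means, mstar, 1)   # m* = alpha + beta * m

def enumerative_n(mean_density, alpha, beta, D=0.25):
    """Sample units needed for precision D (SE/mean) under Iwao's regression."""
    return ((alpha + 1.0) / mean_density + beta - 1.0) / D**2

print(round(enumerative_n(2.0, alpha, beta), 1))
```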
Negotiation-based Order Lot-Sizing Approach for Two-tier Supply Chain
NASA Astrophysics Data System (ADS)
Chao, Yuan; Lin, Hao Wen; Chen, Xili; Murata, Tomohiro
This paper focuses on a negotiation-based collaborative planning process for determining order lot-sizes over a multi-period planning horizon, confined to a two-tier supply chain scenario. The aim is to study how negotiation-based planning processes can be used to refine locally preferred ordering patterns, which consequently affect the overall performance of the supply chain in terms of costs and service level. Minimal information exchanges in the form of mathematical models are suggested to represent the local preferences and to support the negotiation processes.
Vojtíšek, Radovan; Mužík, Jan; Slampa, Pavel; Budíková, Marie; Hejsek, Jaroslav; Smolák, Petr; Ferda, Jiří; Fínek, Jindřich
2014-05-01
To compare radiotherapy plans made according to CT and PET/CT, and to investigate the impact of changes in target volumes on tumour control probability (TCP) and normal tissue complication probability (NTCP), as well as the impact of PET/CT on staging and treatment strategy. Contemporary studies have shown that PET/CT attains higher sensitivity and specificity than CT alone in the diagnosis of lung cancer and also leads to higher accuracy in target volume delineation in NSCLC. Between October 2009 and March 2012, 31 patients with locally advanced NSCLC referred for radical radiotherapy were enrolled in our study. All underwent a planning PET/CT examination. We then carried out two separate delineations of target volumes and created two radiotherapy plans, comparing the following parameters: staging, treatment intent, the sizes of the GTV and PTV, and the exposure of organs at risk (OAR). TCP and NTCP were also compared. PET/CT information led to a significant decrease in the sizes of target volumes, which affected the radiation exposure of OARs. The reduction in target volume sizes was not reflected in a significant increase in the TCP value. We found a very strong direct linear relationship between all evaluated dosimetric parameters and the NTCP values of all evaluated OARs. Our study found that the use of planning PET/CT in the radiotherapy planning of NSCLC has a crucial impact on the precise determination of target volumes and more precise staging of the disease, and thus also on possible changes in treatment strategy.
Sensor-Oriented Path Planning for Multiregion Surveillance with a Single Lightweight UAV SAR
Li, Jincheng; Chen, Jie; Wang, Pengbo; Li, Chunsheng
2018-01-01
In the surveillance of regions of interest by an unmanned aerial vehicle (UAV), system performance relies greatly on the motion control strategy of the UAV and the operating characteristics of the onboard sensors. This paper investigates the 2D path planning problem for a lightweight UAV synthetic aperture radar (SAR) system in an environment of multiple regions of interest (ROIs) whose sizes are comparable to the radar swath width. Taking into account the special requirements of the SAR system on the motion of the platform, we model path planning for UAV SAR as a constrained multiobjective optimization problem (MOP). Based on the fact that the UAV route can be designed in the map image, an image-based path planner is proposed. First, neighboring ROIs are merged by a morphological operation. Then, the parts of the routes for data collection over the ROIs are located according to the geometric features of the ROIs and the observation geometry of UAV SAR. Lastly, the route segments for ROI surveillance are connected by a path planning algorithm named the sampling-based sparse A* search (SSAS) algorithm. Simulation experiments in real scenarios demonstrate that the proposed sensor-oriented path planner greatly improves the reconnaissance performance of lightweight UAV SAR compared with the conventional zigzag path planner.
Galvan, T L; Burkness, E C; Hutchison, W D
2007-06-01
To develop a practical integrated pest management (IPM) system for the multicolored Asian lady beetle, Harmonia axyridis (Pallas) (Coleoptera: Coccinellidae), in wine grapes, we assessed the spatial distribution of H. axyridis and developed eight sampling plans to estimate adult density or infestation level in grape clusters. We used 49 data sets collected from commercial vineyards in 2004 and 2005, in Minnesota and Wisconsin. Enumerative plans were developed using two precision levels (0.10 and 0.25); the six binomial plans reflected six unique action thresholds (3, 7, 12, 18, 22, and 31% of cluster samples infested with at least one H. axyridis). The spatial distribution of H. axyridis in wine grapes was aggregated, independent of cultivar and year, but it was more randomly distributed as mean density declined. The average sample number (ASN) for each sampling plan was determined using resampling software. For research purposes, an enumerative plan with a precision level of 0.10 (SE/mean) resulted in a mean ASN of 546 clusters. For IPM applications, the enumerative plan with a precision level of 0.25 resulted in a mean ASN of 180 clusters. In contrast, the binomial plans resulted in much lower ASNs and provided high probabilities of arriving at correct "treat or no-treat" decisions, making these plans more efficient for IPM applications. For a tally threshold of one adult per cluster, the operating characteristic curves for the six action thresholds provided binomial sequential sampling plans with mean ASNs of only 19-26 clusters, and probabilities of making correct decisions between 83 and 96%. The benefits of the binomial sampling plans are discussed within the context of improving IPM programs for wine grapes.
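The paper's binomial plans and their ASN and operating characteristic properties were obtained with resampling software; as a generic stand-in that shows how a binomial sequential decision rule operates, here is Wald's sequential probability ratio test bracketing a hypothetical 12% action threshold (all rates below are invented):

```python
import math

# Wald SPRT for a binomial sequential plan: after n clusters, the running
# count of infested clusters is compared with two parallel boundaries.
# Crossing the upper boundary => treat; crossing the lower => no treatment;
# otherwise keep sampling. p0/p1 bracket the action threshold.

def sprt_boundaries(n, p0=0.08, p1=0.16, alpha=0.1, beta=0.1):
    """Lower/upper cumulative infested-cluster boundaries after n clusters."""
    slope_num = math.log((1 - p0) / (1 - p1))
    denom = math.log(p1 / p0) + slope_num
    slope = slope_num / denom
    lower = math.log(beta / (1 - alpha)) / denom + slope * n
    upper = math.log((1 - beta) / alpha) / denom + slope * n
    return lower, upper

for n in (10, 20, 30):
    lo, up = sprt_boundaries(n)
    print(n, round(lo, 1), round(up, 1))
```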
Final report : sampling plan for pavement condition ratings of secondary roads.
DOT National Transportation Integrated Search
1984-01-01
The purpose of this project was to develop a random sampling plan for use in selecting segments of the secondary highway system for evaluation under the Department's PMS. The plan developed is described here. It is a simple, workable, random sampling...
Legacy lost: genetic variability and population size of extirpated US grey wolves (Canis lupus).
Leonard, Jennifer A; Vilà, Carles; Wayne, Robert K
2005-01-01
By the mid 20th century, the grey wolf (Canis lupus) was exterminated from most of the conterminous United States (cUS) and Mexico. However, because wolves disperse over long distances, extant populations in Canada and Alaska might have retained a substantial proportion of the genetic diversity once found in the cUS. We analysed mitochondrial DNA sequences of 34 pre-extermination wolves and found that they had more than twice the diversity of their modern conspecifics, implying a historic population size of several hundred thousand wolves in the western cUS and Mexico. Further, two-thirds of the haplotypes found in the historic sample are unique. Sequences from Mexican grey wolves (C. l. baileyi) and some historic grey wolves defined a unique southern clade supporting a much wider geographical mandate for the reintroduction of Mexican wolves than currently planned. Our results highlight the genetic consequences of population extinction within Ice Age refugia and imply that restoration goals for grey wolves in the western cUS include far less area and target vastly lower population sizes than existed historically.
NASA Technical Reports Server (NTRS)
Hen, Itay; Rieffel, Eleanor G.; Do, Minh; Venturelli, Davide
2014-01-01
There are two common ways to evaluate algorithms: performance on benchmark problems derived from real applications and analysis of performance on parametrized families of problems. The two approaches complement each other, each having its advantages and disadvantages. The planning community has concentrated on the first approach, with few ways of generating parametrized families of hard problems known prior to this work. Our group's main interest is in comparing approaches to solving planning problems using a novel type of computational device - a quantum annealer - to existing state-of-the-art planning algorithms. Because only small-scale quantum annealers are available, we must compare on small problem sizes. Small problems are primarily useful for comparison only if they are instances of parametrized families of problems for which scaling analysis can be done. In this technical report, we discuss our approach to the generation of hard planning problems from classes of well-studied NP-complete problems that map naturally to planning problems or to aspects of planning problems that many practical planning problems share. These problem classes exhibit a phase transition between easy-to-solve and easy-to-show-unsolvable planning problems. The parametrized families of hard planning problems lie at the phase transition. The exponential scaling of hardness with problem size is apparent in these families even at very small problem sizes, thus enabling us to characterize even very small problems as hard. The families we developed will prove generally useful to the planning community in analyzing the performance of planning algorithms, providing a complementary approach to existing evaluation methods. We illustrate the hardness of these problems and their scaling with results on four state-of-the-art planners, observing significant differences between these planners on these problem families. Finally, we describe two general, and quite different, mappings of planning problems to QUBOs, the form of input required for a quantum annealing machine such as the D-Wave II.
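As a toy illustration of one standard building block used when mapping planning problems to QUBO form (this is generic, not either of the report's two specific mappings), the constraint "exactly one of a set of binary variables is 1" becomes a quadratic penalty:

```python
import itertools
import numpy as np

# Encode the constraint sum_i x_i == 1 as the penalty (sum_i x_i - 1)^2.
# Since x_i^2 = x_i for binary variables, the expansion gives -1 on the
# diagonal, +2 on off-diagonal pairs, and a constant offset of +1.

def exactly_one_qubo(indices, num_vars):
    Q = np.zeros((num_vars, num_vars))
    for i in indices:
        Q[i, i] += -1.0                      # from x_i^2 - 2*x_i
    for i, j in itertools.combinations(indices, 2):
        Q[i, j] += 2.0                       # cross terms 2*x_i*x_j
    return Q                                 # penalty = x^T Q x + 1

Q = exactly_one_qubo([0, 1, 2], 3)
for bits in itertools.product([0, 1], repeat=3):
    x = np.array(bits, dtype=float)
    print(bits, x @ Q @ x + 1.0)             # 0 only when exactly one bit is set
```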
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches.
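A small sketch of the two rules under an assumed cost function (the fixed, linear, and quadratic recruitment-difficulty terms below are invented; note that with a purely linear cost, rule 1 has no interior minimum):

```python
import numpy as np

def total_cost(n):
    # Hypothetical cost: fixed start-up cost, linear per-subject cost, and a
    # quadratic term for increasingly difficult recruitment at large n.
    return 50_000 + 300 * n + 0.5 * n**2

n_grid = np.arange(10, 2001)
costs = total_cost(n_grid)

n_rule1 = n_grid[np.argmin(costs / n_grid)]           # min average cost per subject
n_rule2 = n_grid[np.argmin(costs / np.sqrt(n_grid))]  # min total cost / sqrt(n)
print(n_rule1, n_rule2)   # ~316 and ~108 under these assumed costs
```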
RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.
Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu
2018-05-30
One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments, so additional issues must be carefully addressed, including the false discovery rate for multiple statistical tests and the widely varying read counts and dispersions of different genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous similar experiments, such as The Cancer Genome Atlas (TCGA), can be used as a point of reference: read counts and their dispersions are estimated from the reference's distribution, and from that information the power and sample size are estimated and summarized. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphic interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
Production of medical radioactive isotopes using KIPT electron driven subcritical facility.
Talamo, Alberto; Gohar, Yousry
2008-05-01
Kharkov Institute of Physics and Technology (KIPT) of Ukraine, in collaboration with Argonne National Laboratory (ANL), plans to construct an electron accelerator driven subcritical assembly. One of the facility objectives is the production of medical radioactive isotopes. This paper presents the ANL collaborative work performed to characterize the facility performance for producing medical radioactive isotopes. First, a preliminary assessment was performed without including the self-shielding effect of the irradiated samples. Then a more detailed investigation was carried out including the self-shielding effect, which defined the sample size and location for producing each medical isotope. In the first part, the reaction rates were calculated as the product of the cross section and the unperturbed neutron flux of the facility. Over fifty isotopes were considered, using all transmutation channels including (n, gamma), (n, 2n), (n, p), and (gamma, n). In the second part, the parent isotopes with high reaction rates were explicitly modeled in the calculations. Four irradiation locations were considered in the analyses to study the medical isotope production rate. The results show that the self-shielding effect not only reduces the specific activity but also changes the irradiation location that maximizes the specific activity. The axial and radial distributions of the parent capture rates were examined to define the irradiation sample size for each parent isotope.
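The first-pass calculation described (reaction rate as cross section times unperturbed flux, before any self-shielding correction) can be sketched as follows; the nuclide, cross section, and flux are generic placeholders, not KIPT facility values:

```python
import math

# Unshielded activation estimate: reaction rate R = N * sigma * phi, then
# activity build-up toward saturation, A(t) = R * (1 - exp(-lambda * t)).

N_A = 6.022e23  # Avogadro's number

def activity_after(t_irr_s, n_atoms, sigma_cm2, flux, half_life_s):
    lam = math.log(2) / half_life_s
    rate = n_atoms * sigma_cm2 * flux                 # reactions per second
    return rate * (1.0 - math.exp(-lam * t_irr_s))    # Bq at end of irradiation

# e.g. 1 g of Mo-98 (thermal capture sigma ~0.13 b) in a 1e13 n/cm^2/s flux,
# irradiated for 3 days, producing Mo-99 (half-life ~66 h):
n_atoms = 1.0 / 98.0 * N_A
print(f"{activity_after(3 * 86400, n_atoms, 0.13e-24, 1e13, 66 * 3600):.3e} Bq")
```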
Code of Federal Regulations, 2012 CFR
2012-10-01
... research, planning, development, design, construction, alteration, or repair of real property; and (3..., evaluations, consultations, comprehensive planning, program management, conceptual designs, plans and... whether a modification is minor include the value and size of the modification and the comparative value...
Citizen Transportation Planning: A Working Model
DOT National Transportation Integrated Search
1998-09-16
All communities, regardless of their location or size, face the need to re-think : and plan their transportation futures. Historically, many communities have left : planning to outside sources; whether it was the district level of a state's : transpo...
Park, Myung Sook; Kang, Kyung Ja; Jang, Sun Joo; Lee, Joo Yun; Chang, Sun Ju
2018-03-01
This study aimed to evaluate the components of test-retest reliability, including time interval, sample size, and statistical methods, used in patient-reported outcome measures in older people, and to provide suggestions on the methodology for calculating test-retest reliability for patient-reported outcomes in older people. This was a systematic literature review. MEDLINE, Embase, CINAHL, and PsycINFO were searched from January 1, 2000 to August 10, 2017 by an information specialist. The systematic review was guided by both the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist and the guideline for systematic reviews published by the National Evidence-based Healthcare Collaborating Agency in Korea. The methodological quality was assessed by the Consensus-based Standards for the selection of health Measurement Instruments checklist box B. Ninety-five out of 12,641 studies were selected for the analysis. The median time interval for test-retest reliability was 14 days, and the ratio of sample size for test-retest reliability to the number of items in each measure ranged from 1:1 to 1:4. The most frequently used statistical method for continuous scores was the intraclass correlation coefficient (ICC). Among the 63 studies that used ICCs, 21 presented models for ICC calculation and 30 reported 95% confidence intervals of the ICCs. Additional analyses using the 17 studies that reported a strong ICC (>0.9) showed that the mean time interval was 12.88 days and the mean ratio of the number of items to sample size was 1:5.37. When researchers plan to assess the test-retest reliability of patient-reported outcome measures for older people, they need to consider an adequate time interval of approximately 13 days and a sample size of about 5 times the number of items. In particular, statistical methods should not only be selected based on the type of scores of the patient-reported outcome measures, but should also be described clearly in the studies that report the results of test-retest reliability.
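Since the ICC is the workhorse statistic here, a minimal sketch of one common variant, ICC(2,1) (two-way random effects, absolute agreement, single measurement), may be useful; the choice of variant and the scores below are illustrative assumptions, not data from the review:

```python
import numpy as np

# ICC(2,1) from the two-way ANOVA mean squares (Shrout & Fleiss):
# rows = subjects, columns = test occasions (k = 2 for test-retest).

def icc_2_1(x):
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

scores = np.array([[7, 8], [5, 5], [9, 8], [6, 7], [4, 4], [8, 9]], dtype=float)
print(round(icc_2_1(scores), 3))   # ~0.91 for these invented scores
```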
Installing a Microcomputer Lab in a Medium-Sized Academic Library.
ERIC Educational Resources Information Center
Hallman, Clark N.; And Others
Designed to serve as a blueprint for other libraries developing plans for microcomputer facilities, this report describes the planning and implementation of a microcomputer laboratory at South Dakota State University's Hilton M. Briggs Library. The university's plan for installing microcomputer labs on campus and the initial planning process…
Nevada National Security Site Integrated Groundwater Sampling Plan, Revision 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilborn, Bill R.; Boehlecke, Robert F.
The purpose is to provide a comprehensive, integrated approach for collecting and analyzing groundwater samples to meet the needs and objectives of the DOE/EM Nevada Program’s UGTA Activity. Implementation of this Plan will provide high-quality data required by the UGTA Activity for ensuring public protection in an efficient and cost-effective manner. The Plan is designed to ensure compliance with the UGTA Quality Assurance Plan (QAP) (NNSA/NFO, 2015); Federal Facility Agreement and Consent Order (FFACO) (1996, as amended); and DOE Order 458.1, Radiation Protection of the Public and the Environment (DOE, 2013). The Plan’s scope comprises sample collection and analysis requirements relevant to assessing both the extent of groundwater contamination from underground nuclear testing and the impact of testing on water quality in downgradient communities. This Plan identifies locations to be sampled by CAU and location type, sampling frequencies, sample collection methodologies, and the constituents to be analyzed. In addition, the Plan defines data collection criteria such as well purging, detection levels, and accuracy requirements/recommendations; identifies reporting and data management requirements; and provides a process to ensure coordination between NNSS groundwater sampling programs for sampling analytes of interest to UGTA. Information used in the Plan development (including the rationale for selection of wells, sampling frequency, and the analytical suite) is discussed under separate cover (N-I, 2014) and is not reproduced herein. This Plan does not address compliance for those wells involved in a permitted activity. Sampling and analysis requirements associated with these wells are described in their respective permits and are discussed in NNSS environmental reports (see Section 5.2). In addition, sampling for UGTA CAUs that are in the Closure Report (CR) stage is not included in this Plan. Sampling requirements for these CAUs are described in the CR. Frenchman Flat is currently the only UGTA CAU in the CR stage. Sampling requirements for this CAU are described in Underground Test Area (UGTA) Closure Report for Corrective Action Unit 98: Frenchman Flat Nevada National Security Site, Nevada (NNSA/NFO, 2016).
Influence of multiple brain metastases’ size and number on the quality of SRS - VMAT dose delivery
NASA Astrophysics Data System (ADS)
Prentou, G.; Koutsouveli, E.; Pantelis, E.; Papagiannis, P.; Georgiou, E.; Karaiskos, P.
2017-11-01
Stereotactic radiosurgery with volumetric modulated arc therapy (SRS-VMAT) has recently been introduced for treatment of multiple brain metastases with a single isocenter. The technique's high efficiency is nevertheless dependent on the metastatic tumors' characteristics, such as size and number. In this work, the impact of the metastases' size and number on the plan quality indices clinically used for plan evaluation and acceptance is investigated. Fifteen targets with a diameter of 1 cm and average volume of 0.7 cm3, and ten targets with a diameter of 2 cm and average volume of 6.5 cm3, were contoured on an anonymized patient CT dataset in the Monaco (Elekta) treatment planning system. VMAT plans for the different target volumes (1 and 2 cm in diameter) and various target numbers (1-15) were generated using four non-coplanar arcs and the Agility (Elekta) linear accelerator (5 mm MLC width), with a Monte Carlo dose calculation algorithm and 1 mm dose calculation grid resolution. The conformity index (CI), gradient index (GI) and heterogeneity index (HI) were determined for each target. High quality plans were created for both the 1 cm and 2 cm diameter targets for a limited number (<6) of targets per plan. For larger numbers of irradiated targets (>6), both CI and GI, clinically used for plan evaluation and acceptance, were found to deteriorate.
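For reference, common definitions of these indices can be computed directly from plan volumes and doses; the specific variants below (Paddick CI, GI = V50%/V100%, HI = Dmax/prescription) and all numbers are assumptions, since the abstract does not state which definitions were used:

```python
# Plan-quality indices for an SRS target; volumes in cm^3, doses in Gy.

def paddick_ci(tv, piv, tv_piv):
    """tv: target volume; piv: prescription isodose volume;
    tv_piv: target volume covered by the prescription isodose."""
    return tv_piv**2 / (tv * piv)

def gradient_index(piv_half, piv):
    """piv_half: volume enclosed by 50% of the prescription isodose."""
    return piv_half / piv

def heterogeneity_index(d_max, d_rx):
    return d_max / d_rx

# Hypothetical 1-cm target: 0.7 cm^3 target, 0.9 cm^3 prescription isodose
# volume, 0.68 cm^3 covered, 3.1 cm^3 half-prescription volume, Dmax 25 Gy
# against a 20 Gy prescription.
print(paddick_ci(0.7, 0.9, 0.68),
      gradient_index(3.1, 0.9),
      heterogeneity_index(25.0, 20.0))
```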
Thermal probe design for Europa sample acquisition
NASA Astrophysics Data System (ADS)
Horne, Mera F.
2018-01-01
The planned lander missions to the surface of Europa will access samples from the subsurface of the ice in a search for signs of life. A small thermal drill (probe) is proposed to meet the sample requirement of the Science Definition Team's (SDT) report for the Europa mission. The probe is 2 cm in diameter and 16 cm in length and is designed to access the subsurface to 10 cm deep and to collect five ice samples of approximately 7 cm3 each. The energy required to penetrate the top 10 cm of ice in a vacuum is approximately 26 Wh, and to melt 7 cm3 of ice is approximately 1.2 Wh. The requirement stated in the SDT report of collecting samples from five different sites can be accommodated with repeated use of the same thermal drill. For smaller sample sizes, a smaller probe of 1.0 cm in diameter with the same length of 16 cm could be utilized, requiring approximately 6.4 Wh to penetrate the top 10 cm of ice and 0.02 Wh to collect 0.1 g of sample. The thermal drill has the advantage of simplicity of design and operation and the ability to penetrate ice over a range of densities and hardness while maintaining sample integrity.
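The ~1.2 Wh melting figure can be checked with a back-of-envelope energy balance; the starting temperature near 100 K (typical of Europa's surface) and the constant specific heat are simplifying assumptions:

```python
# Energy to warm and melt a 7 cm^3 ice sample.

rho_ice = 0.917        # g/cm^3
c_ice = 2.0            # J/(g K), treated as constant here
L_fusion = 334.0       # J/g
T_start, T_melt = 100.0, 273.0   # K

mass = 7.0 * rho_ice                                         # ~6.4 g
energy_J = mass * (c_ice * (T_melt - T_start) + L_fusion)
print(f"{energy_J:.0f} J = {energy_J / 3600:.2f} Wh")        # ~1.2 Wh, matching the abstract
```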
Han, Yanxi; Li, Jinming
2017-10-26
In this era of precision medicine, molecular biology is becoming increasingly significant for the diagnosis and therapeutic management of non-small cell lung cancer. The specimen, as the primary element of the whole testing flow, is particularly important for maintaining the accuracy of gene alteration testing. Presently, the main sample types applied in routine diagnosis are tissue and cytology biopsies. Liquid biopsies are considered the most promising alternatives when tissue and cytology samples are not available. Each sample type possesses its own strengths and weaknesses, pertaining to the disparity of sampling, preparation and preservation procedures, the heterogeneity of inter- or intratumors, the tumor cellularity (percentage and number of tumor cells) of specimens, etc., and none of them can individually be "one size fits all". Therefore, in this review, we summarize the strengths and weaknesses of the different sample types widely used in clinical practice, offer solutions to reduce the negative impact of the samples, and propose an optimized strategy for the choice of samples during the entire diagnostic course. We hope to provide valuable information to laboratories for choosing optimal clinical specimens to achieve comprehensive functional genomic landscapes and to formulate individually tailored treatment plans for NSCLC patients in advanced stages.
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single- and two-arm clinical trials in the general case of a trial with a primary endpoint whose distribution is of one-parameter exponential family form, optimizing a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with Bernoulli- and Poisson-distributed responses, showing that the asymptotic approximation can be reasonable even for relatively small sample sizes.
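A toy numeric illustration of the qualitative result (not the paper's exact utility function): choose the n maximizing (N - n) * gain * power(n) - cost * n, and observe the optimum growing roughly like N^(1/2). All constants and the power approximation are invented for illustration:

```python
import math

def power(n, effect=0.3, z=1.959964):
    """Normal-approximation power of a one-sample z-test at per-subject effect size 0.3."""
    x = math.sqrt(n) * effect - z
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def optimal_n(N, gain=1.0, cost=0.5):
    """Trial size maximizing expected net benefit to the remaining population."""
    return max(range(10, N // 2),
               key=lambda n: (N - n) * gain * power(n) - cost * n)

for N in (1_000, 10_000, 100_000):
    print(N, optimal_n(N))   # the optimum grows far more slowly than N
```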
Kasza, J; Hemming, K; Hooper, R; Matthews, J N S; Forbes, A B
2017-01-01
Stepped wedge and cluster randomised crossover trials are examples of cluster randomised designs conducted over multiple time periods that are being used with increasing frequency in health research. Recent systematic reviews of both of these designs indicate that the within-cluster correlation is typically taken account of in the analysis of data using a random intercept mixed model, implying a constant correlation between any two individuals in the same cluster no matter how far apart in time they are measured: within-period and between-period intra-cluster correlations are assumed to be identical. Recently proposed extensions allow the within- and between-period intra-cluster correlations to differ, although these methods require that all between-period intra-cluster correlations are identical, which may not be appropriate in all situations. Motivated by a proposed intensive care cluster randomised trial, we propose an alternative correlation structure for repeated cross-sectional multiple-period cluster randomised trials in which the between-period intra-cluster correlation is allowed to decay depending on the distance between measurements. We present results for the variance of treatment effect estimators for varying amounts of decay, investigating the consequences of the variation in decay on sample size planning for stepped wedge, cluster crossover and multiple-period parallel-arm cluster randomised trials. We also investigate the impact of assuming constant between-period intra-cluster correlations instead of decaying between-period intra-cluster correlations. Our results indicate that in certain design configurations, including the one corresponding to the proposed trial, a correlation decay can have an important impact on variances of treatment effect estimators, and hence on sample size and power. An R Shiny app allows readers to interactively explore the impact of correlation decay.
Hayabusa2 Sampler: Collection of Asteroidal Surface Material
NASA Astrophysics Data System (ADS)
Sawada, Hirotaka; Okazaki, Ryuji; Tachibana, Shogo; Sakamoto, Kanako; Takano, Yoshinori; Okamoto, Chisato; Yano, Hajime; Miura, Yayoi; Abe, Masanao; Hasegawa, Sunao; Noguchi, Takaaki
2017-07-01
The Japan Aerospace Exploration Agency (JAXA) launched the asteroid exploration probe "Hayabusa2" on December 3, 2014, following the first Hayabusa mission. With technological and scientific improvements over the Hayabusa probe, the mission plans to visit the C-type asteroid 162173 Ryugu (1999 JU3) and to sample surface materials of the C-type asteroid, which is likely to differ from the S-type asteroid Itokawa and to contain more pristine materials, including organic matter and/or hydrated minerals, than S-type asteroids. We developed the Hayabusa2 sampler to collect a minimum of 100 mg of surface samples, including several mm-sized particles, at three surface locations without severe terrestrial contamination. The basic configuration of the sampler design is largely the same as that of the first Hayabusa (Yano et al. in Science, 312(5778):1350-1353, 2006), with several minor but important modifications based on lessons learned from Hayabusa, to fulfill the scientific requirements and raise the scientific value of the returned samples.
Mercurio, Mariano; Rossi, Manuela; Izzo, Francesco; Cappelletti, Piergiulio; Germinario, Chiara; Grifa, Celestino; Petrelli, Maurizio; Vergara, Alessandro; Langella, Alessio
2018-02-01
Fourteen samples of tourmaline from the Real Museo Mineralogico of Federico II University (Naples) were characterized through multi-methodological investigations (EMPA-WDS, SEM-EDS, LA-ICP-MS, and FT-IR spectroscopy). The samples differ in size, morphology and color, and are often associated with other minerals. Data on major and minor elements allowed us to identify and classify the tourmalines as follows: elbaites, tsilaisite, schorl, dravites, uvites and rossmanite. Non-invasive, non-destructive FT-IR and in-situ analyses were carried out on the same samples to validate this chemically based identification and classification. The results of this research show that a complete characterization of this mineral species, usually time-consuming and expensive, can be successfully achieved through the non-destructive FT-IR technique, which thus represents a reliable tool for fast classification, extremely useful for planning further analytical strategies as well as for supporting gemological appraisals.
Metropolitan planning organizations in Texas : overview and profiles : final report.
DOT National Transportation Integrated Search
2017-04-01
A metropolitan planning organization (MPO) has authority and responsibility for regional transportation planning in urbanized areas where the population is at least 50,000 and surrounding areas meet size/density criteria determined by the U.S. Census...
Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis
Adnan, Tassha Hilda
2016-01-01
Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sufficient sample sizes for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so the calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. The tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. Approaches to using the tables are also discussed.
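The tables are generated with PASS; for readers who prefer a formula, a common textbook approach (often attributed to Buderer, 1996) computes the total subjects needed for a desired marginal precision d; the expected accuracy, precision, and prevalence below are arbitrary planning values:

```python
import math

# Sample size for estimating sensitivity/specificity to within +/- d,
# inflated by disease prevalence so enough diseased (or disease-free)
# subjects are enrolled.

def n_for_sensitivity(se, d, prevalence, z=1.959964):
    return math.ceil(z**2 * se * (1 - se) / (d**2 * prevalence))

def n_for_specificity(sp, d, prevalence, z=1.959964):
    return math.ceil(z**2 * sp * (1 - sp) / (d**2 * (1 - prevalence)))

# Expect Se = 0.90, Sp = 0.85, want +/- 0.05 precision, prevalence 30%:
print(n_for_sensitivity(0.90, 0.05, 0.30))   # ~461 total subjects
print(n_for_specificity(0.85, 0.05, 0.30))   # ~280 total subjects
```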
7 CFR 43.106 - Choosing AQL's and sampling plans.
Code of Federal Regulations, 2010 CFR
2010-01-01
Section 43.106, Agriculture Regulations of the Department of Agriculture, Agricultural Marketing Service (Standards, Inspections, Marketing Practices), Commodity Standards and Standard Container Regulations, Standards for Sampling Plans…
Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M
2018-06-01
Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.
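For the continuous-outcome case that such replications rely on, the standard two-group formula is easy to state; the mean differences and SD below are invented, and the halving example shows why optimistic effect size assumptions systematically underpower trials:

```python
from math import ceil
from statistics import NormalDist

# Per-group n for a two-arm superiority RCT with a continuous outcome:
# n = 2 * (sd * (z_{1-alpha/2} + z_{1-beta}) / delta)^2.

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * (sd * (z_a + z_b) / delta) ** 2)

for delta in (10.0, 5.0):                  # hypothetical mean differences, SD = 20
    print(delta, n_per_group(delta, 20.0)) # 63 vs 252 per group: halving the
                                           # assumed effect quadruples the n
```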
SU-F-T-574: MLC Based SRS Beam Commissioning - Minimum Target Size Investigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakikhani, R; Able, C
2016-06-15
Purpose: To implement an MLC accelerator based SRS program using small fields down to 1 cm × 1 cm and to determine the smallest target size safe for clinical treatment. Methods: Computerized beam scanning was performed in water using a diode detector and a linac-head attached transmission ion chamber to characterize the small field dosimetric aspects of a 6 MV photon beam (Trilogy-Varian Medical Systems, Inc.). The output factors, PDD and profiles of field sizes 1, 2, 3, 4, and 10 cm2 were measured and utilized to create a new treatment planning system (TPS) model (AAA ver 11021). Static MLC SRS treatment plans were created and delivered to a homogeneous phantom (Cube 20, CIRS, Inc.) for a 1.0 cm and 1.5 cm "PTV" target. A 12 field DMLC plan was created for a 2.1 cm target. Radiochromic film (EBT3, Ashland Inc.) was used to measure the planar dose in the axial, coronal and sagittal planes. A micro ion chamber (0.007 cc) was used to measure the dose at isocenter for each treatment delivery. Results: The new TPS model was validated using a tolerance criterion of 2% dose and 2 mm distance to agreement. For fields ≤ 3 cm2, the max PDD, profile and OF differences were 0.9%, 2%/2mm and 1.4% respectively. The measured radiochromic film planar dose distributions had gamma scores of 95.3% or higher using a 3%/2mm criterion. Ion chamber measurements for all 3 test plans effectively met our goal of delivering the dose accurately to within 5% when compared to the expected dose reported by the TPS (1 cm plan Δ = −5.2%, 1.5 cm plan Δ = −2.0%, 2 cm plan Δ = 1.5%). Conclusion: End to end testing confirmed that MLC defined SRS for target sizes ≥ 1.0 cm can be safely planned and delivered.
Cooke, Richard; French, David P
2008-01-01
Meta-analysis was used to quantify how well the Theories of Reasoned Action and Planned Behaviour have predicted intentions to attend screening programmes and actual attendance behaviour. Systematic literature searches identified 33 studies that were included in the review. Across the studies as a whole, attitudes had a large-sized relationship with intention, while subjective norms and perceived behavioural control (PBC) possessed medium-sized relationships with intention. Intention had a medium-sized relationship with attendance, whereas the PBC-attendance relationship was small-sized. Due to heterogeneity in results between studies, moderator analyses were conducted. The moderator variables were (a) type of screening test, (b) location of recruitment, (c) screening cost and (d) invitation to screen. All moderators affected theory of planned behaviour relationships. Suggestions for future research emerging from these results include targeting attitudes to promote intention to screen, greater use of implementation intentions in screening information and examining the credibility of different screening providers.
PCB Analysis Plan for Tank Archive Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
NGUYEN, D.M.
2001-03-22
This analysis plan specifies laboratory analysis, quality assurance/quality control (QA/QC), and data reporting requirements for analyzing polychlorinated biphenyl (PCB) concentrations in archive samples. Tank waste archive samples planned for PCB analysis are identified in Nguyen (2001). The tanks and samples are summarized in Table 1-1. The analytical data will be used to establish a PCB baseline inventory in Hanford tanks.
Use of protected activity centers by Mexican Spotted Owls in the Sacramento Mountains, New Mexico
Joseph L. Ganey; James P. Ward; Jeffrey S. Jenness; William M. Block; Shaula Hedwall; Ryan S. Jonnes; Darrell L. Apprill; Todd A. Rawlinson; Sean C. Kyle; Steven L. Spangle
2014-01-01
A Recovery Plan developed for the threatened Mexican Spotted Owl (Strix occidentalis lucida) recommended designating Protected Activity Centers (PACs) with a minimum size of 243 ha to conserve core use areas of territorial owls. The plan assumed that areas of this size would protect "the nest site, several roost sites, and the most proximal and highly-used...
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. The results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful for avoiding such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
Development of a Searchable Database of Cryoablation Simulations for Use in Treatment Planning.
Boas, F Edward; Srimathveeravalli, Govindarajan; Durack, Jeremy C; Kaye, Elena A; Erinjeri, Joseph P; Ziv, Etay; Maybody, Majid; Yarmohammadi, Hooman; Solomon, Stephen B
2017-05-01
To create and validate a planning tool for multiple-probe cryoablation, using simulations of ice ball size and shape for various ablation probe configurations, ablation times, and types of tissue ablated. Ice ball size and shape was simulated using the Pennes bioheat equation. Five thousand six hundred and seventy different cryoablation procedures were simulated, using 1-6 cryoablation probes and 1-2 cm spacing between probes. The resulting ice ball was measured along three perpendicular axes and recorded in a database. Simulated ice ball sizes were compared to gel experiments (26 measurements) and clinical cryoablation cases (42 measurements). The clinical cryoablation measurements were obtained from a HIPAA-compliant retrospective review of kidney and liver cryoablation procedures between January 2015 and February 2016. Finally, we created a web-based cryoablation planning tool, which uses the cryoablation simulation database to look up the probe spacing and ablation time that produces the desired ice ball shape and dimensions. Average absolute error between the simulated and experimentally measured ice balls was 1 mm in gel experiments and 4 mm in clinical cryoablation cases. The simulations accurately predicted the degree of synergy in multiple-probe ablations. The cryoablation simulation database covers a wide range of ice ball sizes and shapes up to 9.8 cm. Cryoablation simulations accurately predict the ice ball size in multiple-probe ablations. The cryoablation database can be used to plan ablation procedures: given the desired ice ball size and shape, it will find the number and type of probes, probe configuration and spacing, and ablation time required.
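The simulations here are based on the Pennes bioheat equation; as a minimal illustration of that model family (ignoring the latent heat of phase change and temperature-dependent properties, so this does not reproduce the study's simulations), a 1D explicit finite-difference sketch:

```python
import numpy as np

# Pennes bioheat equation, rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T),
# solved explicitly on a 1D strip of tissue next to a cryoprobe held at -140 C.
# All property values are generic literature-style numbers; SI units.

k, rho, c = 0.5, 1050.0, 3600.0                       # tissue k, density, specific heat
w_b, rho_b, c_b, T_a = 0.002, 1050.0, 3600.0, 37.0    # perfusion term, arterial temp

dx, dt = 0.5e-3, 0.05        # 0.5 mm grid, 50 ms steps (stable: dt < dx^2*rho*c/(2k))
x = np.arange(0.0, 0.05, dx)  # 5 cm of tissue from the probe surface
T = np.full(x.size, 37.0)

for _ in range(int(600 / dt)):   # 10 minutes of freezing
    T[0] = -140.0                # probe surface held at cryogen temperature
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt / (rho * c) * (k * lap + w_b * rho_b * c_b * (T_a - T[1:-1]))
    T[-1] = 37.0                 # far boundary stays at body temperature

front = x[np.searchsorted(T, 0.0)]   # T rises monotonically with x here
print(f"0 C isotherm at ~{front * 1000:.1f} mm after 10 min")
```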
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that the use of samples of size no larger than ten is not uncommon in biomedical research, and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of the studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8, the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment.
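A minimal Monte Carlo sketch in the spirit of the computer simulated experiments described above; the effect size, distributions, and error definitions are invented and simpler than the paper's setup:

```python
import numpy as np
from scipy.stats import ttest_ind

# Empirical Type I error (false rejection under no effect) and Type II error
# (missed detection of a real effect) of a two-sample t-test vs sample size.

rng = np.random.default_rng(0)
n_sim, effect = 10_000, 0.8     # standardized effect, invented for illustration

for n in (3, 5, 9):
    type1 = type2 = 0
    for _ in range(n_sim):
        a = rng.normal(0.0, 1.0, n)
        b0 = rng.normal(0.0, 1.0, n)      # null true: rejections are Type I errors
        b1 = rng.normal(effect, 1.0, n)   # effect present: misses are Type II errors
        type1 += ttest_ind(a, b0).pvalue < 0.05
        type2 += ttest_ind(a, b1).pvalue >= 0.05
    print(n, type1 / n_sim, type2 / n_sim)
```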
NASA Astrophysics Data System (ADS)
Carrier, B. L.; Beaty, D. W.
2017-12-01
NASA's Mars 2020 rover is scheduled to land on Mars in 2021 and will be equipped with a sampling system capable of collecting rock cores, as well as a specialized drill bit for collecting unconsolidated granular material. A key mission objective is to collect a set of samples with enough scientific merit to justify returning them to Earth. In the case of granular materials, we would like to catalyze community discussion on what we would do with these samples if they arrived in our laboratories, as input to decision-making related to sampling the regolith. Numerous scientific objectives have been identified that could be achieved or significantly advanced via the analysis of martian rocks, "regolith," and gas samples. The term "regolith" has more than one definition, including one that is general and one that is much more specific. For the purpose of this analysis we use the term "granular materials" to encompass the most general meaning and restrict "regolith" to a subset of that. Our working taxonomy includes the following: 1) globally sourced airfall dust (dust); 2) saltation-sized particles (sand); 3) locally sourced decomposed rock (regolith); 4) crater ejecta (ejecta); and 5) other. Analysis of martian granular materials could advance our understanding in areas including habitability and astrobiology, surface-atmosphere interactions, chemistry, mineralogy, geology and environmental processes. The results of these analyses would also provide input into planning for future human exploration of Mars, elucidating possible health and mechanical hazards posed by the martian surface material, as well as providing valuable information regarding available resources for ISRU and civil engineering purposes. The results would also be relevant to matters of planetary protection and to ground-truthing orbital observations. We will present a preliminary analysis of the following questions, in order to generate community discussion and feedback: What are the specific reasons (and their priorities) for collecting samples of granular materials? How do those reasons translate to sampling priorities? In what condition would these samples be expected to be received? What is our best projection of the approach by which these samples would be divided, prepared, and analyzed to achieve our objectives?
Space Weathering of Intermediate-Size Soil Grains in Immature Apollo 17 Soil 71061
NASA Technical Reports Server (NTRS)
Wentworth, S. J.; Robinson, G. A.; McKay, D. S.
2005-01-01
Understanding space weathering, which is caused by micrometeorite impacts, implantation of solar wind gases, radiation damage, chemical effects from solar particles and cosmic rays, interactions with the lunar atmosphere, and sputter erosion and deposition, continues to be a primary objective of lunar sample research. Electron beam studies of space weathering have focused on space weathering effects on individual glasses and minerals from the finest size fractions of lunar soils [1] and patinas on lunar rocks [2]. We are beginning a new study of space weathering of intermediate-size individual mineral grains from lunar soils. For this initial work, we chose an immature soil (see below) in order to maximize the probability that some individual grains are relatively unweathered. The likelihood of identifying a range of relatively unweathered grains in a mature soil is low, and we plan to study grains ranging from pristine to highly weathered in order to determine the progression of space weathering. Future studies will include grains from mature soils. We are currently in the process of documenting splash glass, glass pancakes, craters, and accretionary particles (glass and mineral grains) on plagioclase from our chosen soil using high-resolution field emission scanning electron microscopy (FESEM). These studies are being done concurrently with our studies of patinas on larger lunar rocks [e.g., 3]. One of our major goals is to correlate the evidence for space weathering observed in studies of the surfaces of samples with the evidence demonstrated at higher resolution (TEM) using cross-sections of samples. For example, TEM studies verified the existence of vapor deposits on soil grains [1]; we do not yet know if they can be readily distinguished by surfaces studies of samples. A wide range of textures of rims on soil grains is also clear in TEM [1]; might it be possible to correlate them with specific characteristics of weathering features seen in SEM?
1991-02-01
to adequately assess the health and environmental risks associated with the closure and transfer of the Annex for other use; and 3) identification of... (1990); Draft Final Technical Plan, Draft Final Sampling Design Plan and Draft Final Health and Safety Plan, USATHAMA, June 1990. 2.1.2 Draft Final... Final Technical Plan, Sampling Design Plan and Health and Safety Plan) supplied by USATHAMA. The estimate may be revised, with USATHAMA approval, as
Silva, Alisson R; Rodrigues-Silva, Nilson; Pereira, Poliana S; Sarmento, Renato A; Costa, Thiago L; Galdino, Tarcísio V S; Picanço, Marcelo C
2017-12-05
The common blossom thrips, Frankliniella schultzei Trybom (Thysanoptera: Thripidae), is an important lettuce pest worldwide. Conventional sampling plans are the first step in implementing decision-making systems into integrated pest management programs. However, this tool is not available for F. schultzei infesting lettuce crops. Thus, the objective of this work was to develop a conventional sampling plan for F. schultzei in lettuce crops. Two sampling techniques (direct counting and leaf beating on a white plastic tray) were compared in crisphead, looseleaf, and Boston lettuce varieties before and during head formation. The frequency distributions of F. schultzei densities in lettuce crops were assessed, and the number of samples required to compose the sampling plan was determined. Leaf beating on a white plastic tray was the best sampling technique. F. schultzei densities obtained with this technique were fitted to the negative binomial distribution with a common aggregation parameter (common K = 0.3143). The developed sampling plan is composed of 91 samples per field and presents low errors in its estimates (up to 20%), fast execution time (up to 47 min), and low cost (up to US $1.67 per sampling area). This sampling plan can be used as a tool for integrated pest management in lettuce crops, assisting with reliable decision making in different lettuce varieties before and during head formation.
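Under a negative binomial model with a common aggregation parameter k, the fixed-precision sample size has a closed form, n = (1/D^2)(1/m + 1/k). Using the reported common K = 0.3143 and 20% precision, a mean density near 2.2 thrips per sample reproduces the 91-sample plan; that density is our back-calculation, not a value reported in the paper:

```python
import math

# Fixed-precision sample size for negative binomial counts:
# variance = m + m^2/k, so D = SE/mean gives n = (1/m + 1/k) / D^2.

def negbin_sample_size(mean_density, k, D):
    return math.ceil((1.0 / mean_density + 1.0 / k) / D**2)

print(negbin_sample_size(2.2, 0.3143, 0.20))   # ~91 samples at m = 2.2 thrips/sample
```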
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moteabbed, Maryam, E-mail: mmoteabbed@partners.org; Yock, Torunn I.; Depauw, Nicolas
Purpose: This study aimed to assess the clinical impact of spot size and the addition of apertures and range compensators on the treatment quality of pencil beam scanning (PBS) proton therapy and to define when PBS could improve on passive scattering proton therapy (PSPT). Methods and Materials: The patient cohort included 14 pediatric patients treated with PSPT. Six PBS plans were created and optimized for each patient using 3 spot sizes (∼12-, 5.4-, and 2.5-mm median sigma at isocenter for the 90- to 230-MeV range) and adding apertures and compensators to plans with the 2 larger spots. Conformity and homogeneity indices, dose-volume histogram parameters, equivalent uniform dose (EUD), normal tissue complication probability (NTCP), and integral dose were quantified and compared with the respective PSPT plans. Results: The results clearly indicated that PBS with the largest spots does not necessarily offer a dosimetric or clinical advantage over PSPT. With comparable target coverage, the mean dose (Dmean) to healthy organs was on average 6.3% larger than PSPT when using this spot size. However, adding apertures to plans with large spots improved the treatment quality by decreasing the average Dmean and EUD by up to 8.6% and 3.2% of the prescribed dose, respectively. Decreasing the spot size further improved all plans, lowering the average Dmean and EUD by up to 11.6% and 10.9% compared with PSPT, respectively, and eliminated the need for beam-shaping devices. The NTCP decreased with spot size and addition of apertures, with a maximum reduction of 5.4% relative to PSPT. Conclusions: The added benefit of using PBS strongly depends on the delivery configurations. Facilities limited to large spot sizes (>∼8 mm median sigma at isocenter) are recommended to use apertures to reduce treatment-related toxicities, at least for complex and/or small tumors.
Power Distribution System Planning with GIS Consideration
NASA Astrophysics Data System (ADS)
Wattanasophon, Sirichai; Eua-Arporn, Bundhit
This paper proposes a method for solving radial distribution system planning problems taking geographical information into account. The proposed method can automatically determine the appropriate location and size of a substation, the routing of feeders, and the sizes of conductors while satisfying all constraints, i.e., technical constraints (voltage drop and thermal limit) and geographical constraints (obstacles, existing infrastructure, and high-cost passages). Sequential quadratic programming (SQP) and a minimum path algorithm (MPA) are applied to solve the planning problem based on net present value (NPV) considerations. In addition, this method integrates the planner's experience into the optimization process to achieve an appropriate practical solution. The proposed method has been tested with an actual distribution system, and the results indicate that it can provide satisfactory plans.
Appendix E - Sample Production Facility Plan
This sample Spill Prevention, Control and Countermeasure (SPCC) Plan in Appendix E is intended to provide examples and illustrations of how a production facility could address a variety of scenarios in its SPCC Plan.
Lee, W Anthony
2007-01-01
The gold standard for preoperative evaluation of an aortic aneurysm is a computed tomography angiogram (CTA). Three-dimensional reconstruction and analysis of the computed tomography data set is enormously helpful, and even sometimes essential, in proper sizing and planning for endovascular stent graft repair. To a large extent, it has obviated the need for conventional angiography for morphologic evaluation. The TeraRecon Aquarius workstation (San Mateo, Calif) represents a highly sophisticated but user-friendly platform utilizing a combination of task-specific hardware and software specifically designed to rapidly manipulate large Digital Imaging and Communications in Medicine (DICOM) data sets and provide surface-shaded and multiplanar renderings in real-time. This article discusses the basics of sizing and planning for endovascular abdominal aortic aneurysm repair and the role of 3-dimensional analysis using the TeraRecon workstation.
Hagos, Goshu; Tura, Gurmesa; Kahsay, Gizienesh; Haile, Kebede; Grum, Teklit; Araya, Tsige
2018-06-05
Abortion remains among the leading causes of maternal death worldwide. Post-abortion contraception is significantly effective in preventing unintended pregnancy and abortion if provided before women leave the health facility. However, the status of post-abortion family planning (PAFP) utilization and the contributing factors are not well studied in Tigray region. We therefore conducted a study on family planning utilization, and the factors associated with it, among women receiving abortion services. A facility-based cross-sectional study was conducted among women receiving abortion services in the central zone of Tigray from December 2015 to February 2016, with a total sample size of 416. Women who came for abortion services were selected using a systematic random sampling technique. The data were collected using a pre-tested, interviewer-administered questionnaire, coded, entered into Epi Info 7, and then exported to SPSS for analysis. Descriptive statistics such as frequencies and means were computed to display the results. Both bivariable and multivariable logistic regression were used in the analysis. Variables statistically significant at p < 0.05 in the bivariable analysis were checked in multivariable logistic regression to identify independently associated factors; variables significant at p < 0.05 in the multivariable analysis were declared significantly associated factors. A total of 409 abortion clients were interviewed, for a response rate of 98.3%. The majority, 290 (70.9%), of study participants utilized contraceptives after abortion. Type of health facility, the decision maker on the timing of having a child, knowledge that pregnancy can happen soon after abortion, and the husband's opposition to contraceptives were significantly associated with post-abortion family planning utilization. About one-third of abortion clients failed to receive a contraceptive before leaving the facility. Private facilities should strengthen utilization of contraceptives in post-abortion care services. Health providers should counsel women on the timing of fertility return following abortion before they leave the facility after receiving abortion care. Women's empowerment, through enhancing community awareness of women's own decision making in family planning utilization, including the partner, should be strengthened.
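The two-stage analysis described above, bivariable screening at p < 0.05 followed by a multivariable logistic model, is a common pattern; a minimal sketch is shown below. The column names are hypothetical placeholders, and this is a generic illustration of the strategy rather than the authors' SPSS workflow.

```python
import pandas as pd
import statsmodels.api as sm

def two_stage_logistic(df: pd.DataFrame, outcome: str, candidates: list, p_enter: float = 0.05):
    """Fit single-predictor logistic models, keep predictors with p < p_enter,
    then fit one multivariable logistic model on the retained predictors."""
    kept = []
    for var in candidates:
        fit = sm.Logit(df[outcome], sm.add_constant(df[[var]])).fit(disp=0)
        if fit.pvalues[var] < p_enter:
            kept.append(var)
    final = sm.Logit(df[outcome], sm.add_constant(df[kept])).fit(disp=0)
    return final, kept

# Hypothetical usage; "pafp_use" and the predictor names are placeholders:
# model, selected = two_stage_logistic(data, "pafp_use",
#                                      ["facility_type", "knows_quick_return", "husband_opposes"])
```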
DOT National Transportation Integrated Search
1988-09-01
This guide for developers, building owners, and building managers is one in a series of samples of TDM plans that illustrate the design and proposed application of TDM strategies. This sample plan was prepared for a fictitious building manager near do...
TH-C-12A-04: Dosimetric Evaluation of a Modulated Arc Technique for Total Body Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsiamas, P; Czerminska, M; Makrigiorgos, G
2014-06-15
Purpose: A simplified Total Body Irradiation (TBI) technique was developed to work with minimal requirements in a compact linac room without a custom motorized TBI couch. Results were compared to our existing fixed-gantry double 4 MV linac TBI system with the patient prone and simultaneous AP/PA irradiation. Methods: The modulated arc irradiates the patient positioned prone/supine along the craniocaudal axis. A simplified inverse planning method was developed to optimize dose rate as a function of gantry angle for various patient sizes without the need for a graphical 3D treatment planning system; this method can be easily adapted and used with minimal resources. A fixed maximum field size (40×40 cm2) is used to decrease radiation delivery time. Dose rate as a function of gantry angle was optimized to produce uniform dose inside rectangular phantoms of various sizes, and custom VMAT DICOM plans were generated using a DICOM editor tool. Monte Carlo simulations, film, and ionization chamber dosimetry for various setups were used to derive and test an extended-SSD beam model based on PDD/OAR profiles for a Varian 6EX/TX. Measurements were obtained using solid water phantoms. The dose rate modulation function was determined for various patient sizes (100 cm − 200 cm); depending on the size of the patient, the arc range varied from 100° to 120°. Results: A PDD/OAR-based beam model for modulated arc TBI therapy was developed. Lateral dose profiles produced were similar to profiles of our existing TBI facility. Calculated delivery time and full arc depended on the size of the patient (∼8 min/100° − 10 min/120°, 100 cGy). Dose heterogeneity varied by about ±5% − ±10% depending on the patient size and distance to the surface (buildup region). Conclusion: TBI using a simplified modulated arc along the craniocaudal axis of different-size patients positioned on the floor can be achieved without graphical/inverse 3D planning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pokharel, S; Rana, S
Purpose: The purpose of this study was to evaluate the effect of grid size in the Eclipse Acuros XB dose calculation algorithm for SBRT lung. Methods: Five previously treated SBRT lung cases were chosen for the present study. Four of the plans were 5-field conventional IMRT and one was a Rapid Arc plan. All five cases were calculated with the five grid sizes (1, 1.5, 2, 2.5, and 3 mm) available for the AXB algorithm with the same plan normalization. Dosimetric indices relevant to SBRT, along with MUs and calculation time, were recorded for the different grid sizes. The maximum difference was calculated as a percentage of the mean of all five values. All the plans underwent IMRT QA with portal dosimetry. Results: The maximum difference in MUs was within 2%. Calculation time increased by as much as a factor of 7 from the largest (3 mm) to the smallest (1 mm) grid size. The largest differences in PTV minimum, maximum, and mean dose were 7.7%, 1.5%, and 1.6%, respectively. The highest D2-Max difference was 6.1%. The highest differences in ipsilateral lung mean dose, V5Gy, V10Gy, and V20Gy were 2.6%, 2.4%, 1.9%, and 3.8%, respectively. The maximum differences in heart, cord, and esophagus dose were 6.5%, 7.8%, and 4.02%, respectively. The IMRT gamma passing rate at 2%/2mm remained within 1.5%, with at least 98% of points passing at all grid sizes. Conclusion: This work indicates that the lowest grid size of 1 mm available in AXB is not necessarily required for accurate dose calculation: no significant change in the IMRT passing rate was observed when the grid size was reduced below 2 mm. Although the maximum percentage differences of some of the dosimetric indices appear large, most are clinically insignificant in absolute dose values. We therefore conclude that a 2-mm grid size is the best compromise between dose calculation accuracy and calculation time.
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier-method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so, balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
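A minimal sketch of the multiplier-method point estimate with a delta-method interval, assuming a design-effect-inflated binomial variance for P; the numbers in the example are invented rather than taken from the Harare study, and the authors' variance formulas may differ in detail.

```python
import math

def multiplier_estimate(M: float, p_hat: float, n_survey: int, design_effect: float = 2.0):
    """Population size N = M / P, with a delta-method standard error that
    inflates the binomial variance of p_hat by an assumed RDS design effect."""
    N_hat = M / p_hat
    var_p = design_effect * p_hat * (1.0 - p_hat) / n_survey
    se_N = M * math.sqrt(var_p) / p_hat ** 2      # |dN/dp| = M / p^2
    return N_hat, (N_hat - 1.96 * se_N, N_hat + 1.96 * se_N)

# Invented example: 1500 unique objects distributed, 30% of a 400-person
# RDS sample reporting receipt. Note how the interval widens as p_hat shrinks.
print(multiplier_estimate(M=1500, p_hat=0.30, n_survey=400))
```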
Trends and Issues in U.S. Navy Manpower
1985-01-01
Planning (ADSTAP) system, consists of several subsystems and models for planning and managing enlisted manpower, personnel, and training. It was ... models to provide information for formulating goals and planning the transition from current inventory to established objectives ... operational planning models to provide information for formulating operating plans to control the size and quality (ratings or skills and pay grades) of the active-duty
Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.
You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary
2011-02-01
The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure of relative efficiency might be less than the measure in the literature under some conditions, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
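One common way to express the information in such a trial is through per-cluster terms m_i / (1 + (m_i − 1)ρ), which are proportional to the noncentrality parameter of the t-test; the sketch below computes a relative efficiency in that spirit. It follows a standard formulation from the literature and is not necessarily identical to the authors' definition; the cluster sizes and ICC are assumed example values.

```python
import numpy as np

def noncentrality_weight(sizes, icc):
    """Sum of per-cluster information terms m_i / (1 + (m_i - 1) * icc),
    proportional to the noncentrality parameter of the two-sample t-test."""
    m = np.asarray(sizes, dtype=float)
    return np.sum(m / (1.0 + (m - 1.0) * icc))

def relative_efficiency(sizes, icc):
    """Efficiency of unequal vs equal cluster sizes, holding the number of
    clusters and the mean cluster size fixed."""
    k, m_bar = len(sizes), float(np.mean(sizes))
    equal = k * m_bar / (1.0 + (m_bar - 1.0) * icc)
    return noncentrality_weight(sizes, icc) / equal

# Assumed example: 10 clusters of varying size, ICC = 0.05.
sizes = [5, 8, 12, 20, 30, 7, 15, 25, 10, 18]
print(relative_efficiency(sizes, icc=0.05))   # below 1: variable sizes lose efficiency
```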
Animal research as a basis for clinical trials.
Faggion, Clovis M
2015-04-01
Animal experiments are critical for the development of new human therapeutics because they provide mechanistic information, as well as important information on efficacy and safety. Some evidence suggests that authors of animal research in dentistry do not observe important methodological issues when planning animal experiments, for example sample-size calculation. Low-quality animal research directly interferes with development of the research process in which multiple levels of research are interconnected. For example, high-quality animal experiments generate sound information for the further planning and development of randomized controlled trials in humans. These randomized controlled trials are the main source for the development of systematic reviews and meta-analyses, which will generate the best evidence for the development of clinical guidelines. Therefore, adequate planning of animal research is a sine qua non condition for increasing efficacy and efficiency in research. Ethical concerns arise when animal research is not performed with high standards. This Focus article presents the latest information on the standards of animal research in dentistry, more precisely in the field of implant dentistry. Issues on precision and risk of bias are discussed, and strategies to reduce risk of bias in animal research are reported. © 2015 Eur J Oral Sci.
Damiani, Lucas Petri; Berwanger, Otavio; Paisani, Denise; Laranjeira, Ligia Nasi; Suzumura, Erica Aranha; Amato, Marcelo Britto Passos; Carvalho, Carlos Roberto Ribeiro; Cavalcanti, Alexandre Biasi
2017-01-01
Background The Alveolar Recruitment for Acute Respiratory Distress Syndrome Trial (ART) is an international multicenter randomized pragmatic controlled trial with allocation concealment involving 120 intensive care units in Brazil, Argentina, Colombia, Italy, Poland, Portugal, Malaysia, Spain, and Uruguay. The primary objective of ART is to determine whether maximum stepwise alveolar recruitment associated with PEEP titration, adjusted according to the static compliance of the respiratory system (ART strategy), is able to increase 28-day survival in patients with acute respiratory distress syndrome compared to conventional treatment (ARDSNet strategy). Objective To describe the data management process and statistical analysis plan. Methods The statistical analysis plan was designed by the trial executive committee and reviewed and approved by the trial steering committee. We provide an overview of the trial design with a special focus on describing the primary (28-day survival) and secondary outcomes. We describe our data management process, data monitoring committee, interim analyses, and sample size calculation. We describe our planned statistical analyses for primary and secondary outcomes as well as pre-specified subgroup analyses. We also provide details for presenting results, including mock tables for baseline characteristics, adherence to the protocol and effect on clinical outcomes. Conclusion According to best trial practice, we report our statistical analysis plan and data management plan prior to locking the database and beginning analyses. We anticipate that this document will prevent analysis bias and enhance the utility of the reported results. Trial registration ClinicalTrials.gov number, NCT01374022. PMID:28977255
Liu, Chenbin; Schild, Steven E; Chang, Joe Y; Liao, Zhongxing; Korte, Shawn; Shen, Jiajian; Ding, Xiaoning; Hu, Yanle; Kang, Yixiu; Keole, Sameer R; Sio, Terence T; Wong, William W; Sahoo, Narayan; Bues, Martin; Liu, Wei
2018-06-01
To investigate how spot size and spacing affect plan quality, robustness, and interplay effects of robustly optimized intensity modulated proton therapy (IMPT) for lung cancer. Two robustly optimized IMPT plans were created for 10 lung cancer patients: first by a large-spot machine with in-air energy-dependent large spot size at isocenter (σ: 6-15 mm) and spacing (1.3 σ), and second by a small-spot machine with in-air energy-dependent small spot size (σ: 2-6 mm) and spacing (5 mm). Both plans were generated by optimizing radiation dose to internal target volume on averaged 4-dimensional computed tomography scans using an in-house-developed IMPT planning system. The dose-volume histograms band method was used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effects with randomized starting phases for each field per fraction. Patient anatomy voxels were mapped phase-to-phase via deformable image registration, and doses were scored using in-house-developed software. Dose-volume histogram indices, including internal target volume dose coverage, homogeneity, and organs at risk (OARs) sparing, were compared using the Wilcoxon signed-rank test. Compared with the large-spot machine, the small-spot machine resulted in significantly lower heart and esophagus mean doses, with comparable target dose coverage, homogeneity, and protection of other OARs. Plan robustness was comparable for targets and most OARs. With interplay effects considered, significantly lower heart and esophagus mean doses with comparable target dose coverage and homogeneity were observed using smaller spots. Robust optimization with a small spot-machine significantly improves heart and esophagus sparing, with comparable plan robustness and interplay effects compared with robust optimization with a large-spot machine. A small-spot machine uses a larger number of spots to cover the same tumors compared with a large-spot machine, which gives the planning system more freedom to compensate for the higher sensitivity to uncertainties and interplay effects for lung cancer treatments. Copyright © 2018 Elsevier Inc. All rights reserved.
Exploratory Study of Web-Based Planning and Mobile Text Reminders in an Overweight Population
Murray, Peter; Cobain, Mark; Chinapaw, Mai; van Mechelen, Willem; Hurling, Robert
2011-01-01
Background Forming specific health plans can help translate good intentions into action. Mobile text reminders can further enhance the effects of planning on behavior. Objective Our aim was to explore the combined impact of a Web-based, fully automated planning tool and mobile text reminders on intention to change saturated fat intake, self-reported saturated fat intake, and portion size changes over 4 weeks. Methods Of 1013 men and women recruited online, 858 were randomly allocated to 1 of 3 conditions: a planning tool (PT), combined planning tool and text reminders (PTT), and a control group. All outcome measures were assessed by online self-reports. Analysis of covariance was used to analyze the data. Results Participants allocated to the PT (mean saturated fat 3.6, mean coping planning 3) and PTT (mean saturated fat 3.5, mean coping planning 3.1) reported a lower consumption of high-fat foods (F2,571 = 4.74, P = .009) and higher levels of coping planning (F2,571 = 7.22, P < .001) than the control group (mean saturated fat 3.9, mean coping planning 2.8). Participants in the PTT condition also reported smaller portion sizes of high-fat foods (mean 2.8; F2,569 = 4.12, P = .0) than the control group (mean portions 3.1). The reduction in portion size was driven primarily by the male participants in the PTT (P = .003). We found no significant group differences in terms of percentage saturated fat intake, intentions, action planning, self-efficacy, or feedback on the intervention. Conclusions These findings support the use of Web-based tools and mobile technologies to change dietary behavior. The combination of a fully automated Web-based planning tool with mobile text reminders led to lower self-reported consumption of high-fat foods and greater reductions in portion sizes than in a control condition. Trial Registration International Standard Randomized Controlled Trial Number (ISRCTN): 61819220; http://www.controlled-trials.com/ISRCTN61819220 (Archived by WebCite at http://www.webcitation.org/63YiSy6R8) PMID:22182483
2011-01-01
Purpose To verify the dose distribution and number of monitor units (MU) for dynamic treatment techniques like volumetric modulated single arc radiation therapy (Rapid Arc), each patient treatment plan has to be verified prior to the first treatment. The purpose of this study was to develop a patient-related treatment plan verification protocol using a two-dimensional ionization chamber array (MatriXX, IBA, Schwarzenbruck, Germany). Method Measurements were done to determine the dependence between the response of the 2D ionization chamber array, beam direction, and field size. The reproducibility of the measurements was also checked. For the patient-related verifications, the original patient Rapid Arc treatment plan was projected onto a CT dataset of the MatriXX and the dose distribution was calculated. After irradiation of the Rapid Arc verification plans, measured and calculated 2D dose distributions were compared using the gamma evaluation method implemented in the measuring software OmniPro (version 1.5, IBA, Schwarzenbruck, Germany). Results The response of the 2D ionization chamber array as a function of field size and beam direction showed a passing rate of 99% for field sizes between 7 cm × 7 cm and 24 cm × 24 cm for measurements of a single arc; for smaller and larger field sizes the passing rate was less than 99%. The reproducibility was within a passing rate of 99% and 100%. The accuracy of the whole process, including the uncertainty of the measuring system, treatment planning system, linear accelerator, and isocentric laser system in the treatment room, was acceptable for treatment plan verification using gamma criteria of 3% and 3 mm (2D global gamma index). Conclusion It was possible to verify the 2D dose distribution and MU of Rapid Arc treatment plans using the MatriXX, and its use for Rapid Arc treatment plan verification in clinical routine is reasonable. Provided the passing rate is at least 99%, the verification protocol is able to detect clinically significant errors. PMID:21342509
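For reference, the gamma evaluation used in this verification protocol combines a dose-difference criterion and a distance-to-agreement criterion into a single index; a brute-force sketch of a global 2D gamma on co-registered grids follows. The detector pitch and the toy dose planes are assumptions for illustration, and production software such as OmniPro uses finer interpolation.

```python
import numpy as np

def gamma_index_2d(dose_ref, dose_eval, spacing_mm, dd_pct=3.0, dta_mm=3.0):
    """Brute-force global gamma on co-registered 2D dose grids; the dose
    criterion dd_pct is taken relative to the maximum reference dose."""
    dd = dd_pct / 100.0 * dose_ref.max()
    ny, nx = dose_ref.shape
    ys, xs = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    gamma = np.empty_like(dose_ref, dtype=float)
    for iy in range(ny):
        for ix in range(nx):
            dist2 = ((ys - iy) ** 2 + (xs - ix) ** 2) * spacing_mm ** 2
            dose2 = (dose_eval - dose_ref[iy, ix]) ** 2
            gamma[iy, ix] = np.sqrt(np.min(dist2 / dta_mm ** 2 + dose2 / dd ** 2))
    return gamma

# Toy stand-ins for measured and calculated planes; ~7.62 mm is an assumed
# detector pitch. A uniform 2% dose error passes everywhere at 3%/3mm.
ref = np.ones((20, 20))
g = gamma_index_2d(ref, ref * 1.02, spacing_mm=7.62)
print((g <= 1.0).mean())   # fraction of points passing, here 1.0
```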
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langner, U; Langen, K
Purpose: To investigate the effect of spot size variation as a function of gantry angle on the quality of treatment plans for pencil beam scanning proton plans. Method: Three homogeneous 26×26×7 cm dose volumes with different ranges and SOBPs were delivered on the MatriXX PT 2D array at gantry angles of 0 and 270 degrees. The spot size sigma varies by 1.8, 7.8, and 1.4% for nominal energies of 215, 183, and 103 MeV (ranges 29, 22, and 8 cm, respectively). The resulting dose planes are compared and evaluated with the gamma index for 2%/2mm and 1%/1mm criteria. Results: Patient-specific QA is performed at a gantry angle of 0 degrees. However, beam sigmas vary as a function of gantry angle because of the beam optics for each gantry, which can cause differences between the delivered and planned treatment plans. Delivered plans were compared, and a gamma pass rate of 96.5% for criteria of 2%/2mm and 91.4% for 1%/1mm was seen for plans with a nominal energy of 183 MeV. For plans with a nominal energy of 103 MeV, gamma pass rates of 97.3% for 2%/2mm and 91.5% for 1%/1mm were seen. For plans with a nominal energy of 215 MeV the pass rate was 99.8% for 1%/1mm between the two gantry angles. Conclusion: Differences in beam sigma of up to 7.8% do not cause significant differences in the dose distribution, as evaluated by the gamma index for the different spot sizes.
Burton, Deron C; Confield, Evan; Gasner, Mary Rose; Weisfuse, Isaac
2011-10-01
Small businesses need to engage in continuity planning to assure delivery of goods and services and to sustain the economy during an influenza pandemic. This is especially true in New York City, where 98 per cent of businesses have fewer than 100 employees. The objective, therefore, was to determine pandemic influenza business continuity practices and strategies suitable for small and medium-sized NYC businesses. The study design used focus groups, with owners and managers of businesses with fewer than 500 employees in New York City as participants. The main outcome measures were the degree of pandemic preparedness and the feasibility of currently proposed business continuity strategies. Most participants reported that their businesses had no pandemic influenza plan. Agreement with the feasibility of specific business continuity strategies was influenced by the type of business represented, the cost of the strategy, and business size. It was concluded that recommendations for pandemic-related business continuity plans for small and medium-sized businesses should be tailored to the type and size of business and should highlight the broad utility of the proposed strategies to address a range of business stressors.
Planning Ahead: Influence of Figure Orientation on Size of Head in Children's Drawings of a Man.
ERIC Educational Resources Information Center
Willatts, Peter; Dougal, Shonagh
In an investigation of causes of the disproportionate relation between head and body in children's drawings of the human figure, 160 children of 3-10 years of age produced drawings of a man viewed from the front and from the back. It was expected that if planning to include facial features increased the size of the head children drew, then heads…
Kutzner, Karl Philipp; Pfeil, Joachim; Kovacevic, Mark Predrag
2017-07-01
Modern total hip arthroplasty is largely dependent on the successful preservation of hip geometry, so successful implementation of the preoperative planning is of great importance. The present study evaluates the accuracy of anatomic hip reconstruction predicted by 2D digital planning using a calcar-guided short stem of the newest generation. A calcar-guided short stem was implanted in 109 patients in combination with a cementless cup using the modified anterolateral approach. Preoperative digital planning, including implant size, caput-collum-diaphyseal angle, offset, and leg length, was performed using mediCAD II software; a coordinate system and individual scale factors were implemented. Postoperative outcome was evaluated accordingly and compared to the planning. Intraoperatively used stem sizes were within one unit of the planned stem sizes. The postoperative stem alignment showed a minor and insignificant (p = 0.159) mean valgization of 0.5° (SD 3.79°) compared to the planned caput-collum-diaphyseal angles. Compared to the planning, mean femoral offset gained 2.18 (SD 4.24) mm, while acetabular offset was reduced by 0.78 (SD 4.36) mm during implantation, resulting in an increased global offset of 1.40 (SD 5.51) mm (p = 0.0094). Postoperative femoroacetabular height increased by a mean of 5.00 (SD 5.98) mm (p < 0.0001) compared to preoperative measures. Two-dimensional digital preoperative planning in calcar-guided short-stem total hip arthroplasty assures satisfactory implementation of the intended anatomy. Valgization, which has frequently been observed in previous short-stem designs and negatively affects offset, can be avoided. However, surgeons have to be aware of possible leg lengthening.
Determination of boundaries between ranges of high and low gradient of beam profile.
Wendykier, Jacek; Bieniasiewicz, Marcin; Grządziel, Aleksandra; Jedynak, Tadeusz; Kośniewski, Wiktor; Reudelsdorf, Marta; Wendykier, Piotr
2016-01-01
This work addresses the problem of treatment planning system commissioning by introducing a new method for determining the boundaries between high- and low-gradient regions in a beam profile. The commissioning of a treatment planning system is a very important task in radiation therapy, and one of its main goals is to compare two field profiles, measured and calculated. Applying points at 80% and 120% of the nominal field size can lead to incorrect determination of the boundaries, especially for small field sizes. The method, which is based on the beam profile gradient, allows proper assignment of the boundaries between high- and low-gradient regions even for small fields. TRS 430 recommendations for commissioning were used. The described method allows a separation between high and low gradient because it directly uses the value of the gradient of the profile. For small fields, the boundaries determined by the new method allow commissioning of a treatment planning system according to TRS 430, whereas the point at 80% of the nominal field size is already in the high-gradient region. The method of determining the boundaries from the beam profile gradient can be extremely helpful during commissioning of the treatment planning system for Intensity Modulated Radiation Therapy or for other techniques that require very small field sizes.
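A minimal sketch of the underlying idea, classifying profile points as high or low gradient directly from the local gradient rather than from fixed 80%/120% field-size points; the logistic toy profile and the 1 %/mm threshold are assumptions for illustration, not values from the paper.

```python
import numpy as np

def gradient_boundaries(x_mm, profile, threshold_pct_per_mm=1.0):
    """Normalize the profile to the central-axis value, compute the absolute
    local gradient in %/mm, and return positions where the point labels flip
    between the high- and low-gradient regions."""
    norm = 100.0 * profile / profile[len(profile) // 2]
    grad = np.abs(np.gradient(norm, x_mm))
    high = grad > threshold_pct_per_mm
    edges = np.flatnonzero(np.diff(high.astype(int)))
    return x_mm[edges]

# Toy profile: flat top with logistic shoulders near +/-40 mm (assumed shape).
x = np.linspace(-60, 60, 241)
profile = 1.0 / (1.0 + np.exp((np.abs(x) - 40.0) / 3.0))
print(gradient_boundaries(x, profile))   # two boundaries per penumbra
```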
Opsahl, Stephen P.; Crow, Cassi L.
2014-01-01
During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.
Appendix F - Sample Contingency Plan
This sample Contingency Plan in Appendix F is intended to provide examples of contingency planning as a reference when a facility determines that the required secondary containment is impracticable, pursuant to 40 CFR §112.7(d).
Appendix D - Sample Bulk Storage Facility Plan
This sample Spill Prevention, Control and Countermeasure (SPCC) Plan in Appendix D is intended to provide examples and illustrations of how a bulk storage facility could address a variety of scenarios in its SPCC Plan.
Study samples are too small to produce sufficiently precise reliability coefficients.
Charter, Richard A
2003-04-01
In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
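The imprecision can be made concrete with a Fisher-z confidence interval for a test-retest correlation, where SE(z) = 1/sqrt(N − 3); a small sketch follows. The reliability value of 0.80 is an assumed example, while N = 64 is the median retest sample size reported above.

```python
import math

def retest_r_ci(r: float, n: int, z_crit: float = 1.96):
    """95% CI for a correlation-type reliability via the Fisher z transform."""
    half = z_crit / math.sqrt(n - 3)
    zr = math.atanh(r)
    return math.tanh(zr - half), math.tanh(zr + half)

# With the median retest N of 64, an observed r of .80 (assumed) is only
# pinned down to roughly (.69, .87), a wide band for a reliability claim.
print(retest_r_ci(0.80, 64))
```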
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
34 CFR Appendix A to Subpart N of... - Sample Default Prevention Plan
Code of Federal Regulations, 2011 CFR
2011-07-01
Appendix A to Subpart N of Part 668—Sample Default Prevention Plan (Education Regulations of the Offices of the Department of Education, Cohort Default Rates). This appendix is provided...
Code of Federal Regulations, 2010 CFR
2010-07-01
Appendix XI to Part 86—Sampling Plans for Selective Enforcement Auditing of Light-Duty Vehicles (Protection of Environment), 40% AQL. Table 1—Sampling Plan Code Letter: Annual sales of...
Nicassio, P M
1977-12-01
A study was conducted to determine the way in which stereotypes of machismo and femininity are associated with family size and perceptions of family planning. A total of 144 adults, male and female, from a lower class and an upper middle class urban area in Colombia were asked to respond to photographs of Colombian families varying in size and state of completeness. The study illustrated the critical role of sex-role identity and sex-role organization as variables having an effect on fertility. The lower-class respondents described parents in the photographs as significantly more macho or feminine because of their children than the upper-middle-class subjects did. Future research should attempt to measure when this drive to sex-role identity is strongest, i.e., when men and women are most driven to reproduce in order to "prove" themselves. Both lower- and upper-middle-class male groups considered male dominance in marriage to be directly linked with family size. Perceptions of the use of family planning decreased linearly with family size for both social groups, although the lower-class females attributed more family planning to spouses of large families than upper-middle-class females. It is suggested that further research deal with the ways in which constructs of machismo and male dominance vary between the sexes and among socioeconomic groups and the ways in which they impact on fertility.
McDermott, Molly E.; Wood, Petra B.
2011-01-01
Avian use of even-aged timber harvests is likely affected by stand attributes such as size, amount of edge, and retained basal area, all characteristics that can easily be manipulated in timber harvesting plans. However, few studies have examined their effects during the post-breeding period. We studied the impacts of clearcut, low-leave two-age, and high-leave two-age harvesting on post-breeding birds using transect sampling and mist-netting in north-central West Virginia. In our approach, we studied the effects of these harvest types as well as stand size and edge on species characteristic of both early-successional and mature forest habitats. In 2005–2006, 13 stands ranging from 4 to 10 years post-harvest and 4–21 ha in size were sampled from late June through mid-August. Capture rates and relative abundance were similar among treatments for generalist birds. Early-successional birds had the lowest capture rates and fewer species (∼30% lower), and late-successional birds reached their highest abundance and species totals (double the other treatments) in high-leave two-age stands. Area sensitivity was evident for all breeding habitat groups. Both generalist and late-successional bird captures were negatively related to stand size, but these groups showed no clear edge effects. Mean relative abundance decreased to nearly zero for the latter group in the largest stands. In contrast, early-successional species tended to use stand interiors more often and responded positively to stand size. Capture rates for this group tripled as stand size increased from 4 to 21 ha. Few birds in the forest periphery responded to harvest edge types despite within-stand edge effects evident for several species. To create suitable habitat for early-successional birds, large, non-linear openings with a low retained basal area are ideal, while smaller harvests and increased residual tree retention would provide habitat for more late-successional birds post-breeding. Although our study has identified habitat use patterns for different species in timber harvests, understanding habitat-specific bird survival is needed to help determine the quality of silvicultural harvests for post-breeding birds.
ERIC Educational Resources Information Center
Masilamony, Davadhasan
2010-01-01
As the nonprofit sector continues to grow in size and importance in American society, successful organizations proactively initiate strategic planning so they can be more responsive to changing circumstances, underlying trends, and shifting demands. At times, however, organizations develop elaborate plans that are never implemented. Unfortunately,…
Lahou, Evy; Jacxsens, Liesbeth; Van Landeghem, Filip; Uyttendaele, Mieke
2014-08-01
Food service operations are confronted with a diverse range of raw materials and served meals. The implementation of a microbial sampling plan in the framework of verification of suppliers and of their own production process (functionality of their prerequisite and HACCP program) demands selection of food products and sampling frequencies. However, these are often selected without a well-described, scientifically underpinned sampling plan. Therefore, an approach to setting up a focused sampling plan, enabled by a microbial risk categorization of food products, for both incoming raw materials and meals served to consumers is presented. The sampling plan was implemented as a case study during a one-year period in an institutional food service operation to test the feasibility of the chosen approach. This resulted in 123 samples of raw materials and 87 samples of meal servings (focused on high-risk-categorized food products), which were analyzed for spoilage bacteria, hygiene indicators, and foodborne pathogens. Although sampling plans are intrinsically limited in assessing the quality and safety of sampled foods, the plan was shown to be useful in revealing major non-compliances and opportunities to improve the food safety management system in place. Points of attention deduced in the case study were control of Listeria monocytogenes in raw meat spread and raw fish, as well as the overall microbial quality of served sandwiches and salads. Copyright © 2014 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frazier, William; Baur, Gary
2015-11-03
The 1998 Interim Long-Term Surveillance Plan for the Cheney Disposal Site Near Grand Junction, Colorado, requires annual monitoring to assess the performance of the disposal cell. Monitoring wells 0731, 0732 and 0733 were sampled as specified in the plan. Sampling and analyses were conducted in accordance with Sampling and Analysis Plan for the U.S. Department of Energy Office of Legacy Management Sites.
Opportunities in SMR Emergency Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moe, Wayne L.
2014-10-01
Using year 2014 cost information gathered from twenty different locations within the current commercial nuclear power station fleet, an assessment was performed concerning compliance costs associated with the offsite emergency Planning Standards contained in 10 CFR 50.47(b). The study was conducted to quantitatively determine the potential cost benefits realized if an emergency planning zone (EPZ) were reduced in size according to the lowered risks expected to accompany small modular reactors (SMR). Licensees are required to provide a technical basis when proposing to reduce the surrounding EPZ size to less than the 10 mile plume exposure and 50 mile ingestion pathway distances currently being used. To assist licensees in assessing the savings that might be associated with such an action, this study established offsite emergency planning costs in connection with four discrete EPZ boundary distances, i.e., site boundary, 2 miles, 5 miles and 10 miles. The boundary selected by the licensee would be based on where EPA Protective Action Guidelines are no longer likely to be exceeded. Additional consideration was directed towards costs associated with reducing the 50 mile ingestion pathway EPZ. The assessment methodology consisted of gathering actual capital costs and annual operating and maintenance costs for offsite emergency planning programs at the surveyed sites, partitioning them according to key predictive factors, and allocating those portions to individual emergency Planning Standards as a function of EPZ size. Two techniques, an offsite population-based approach and an area-based approach, were then employed to calculate the scaling factors which enabled cost projections as a function of EPZ size. Site-specific factors that influenced source data costs, such as the effects of supplemental funding to external state and local agencies for offsite response organization activities, were incorporated into the analysis to the extent those factors could be representatively apportioned.
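As a toy illustration of the area-based scaling technique mentioned above (the population-based variant would substitute resident counts inside each radius), the sketch below scales a 10-mile-EPZ cost down to a smaller boundary; the cost figure is invented, and the report's actual method first partitions costs by Planning Standard before applying such factors.

```python
def scale_epz_cost(cost_10mi: float, radius_mi: float) -> float:
    """Pure area-based scaling of an offsite emergency planning cost from the
    10-mile EPZ to a smaller radius: factor = (r / 10)^2."""
    return cost_10mi * (radius_mi / 10.0) ** 2

# Invented example: a $1.0M annual cost scaled to a 2-mile EPZ gives $40,000
# under pure area scaling; real allocations would not scale every cost item.
print(scale_epz_cost(1.0e6, 2.0))
```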
NASA Astrophysics Data System (ADS)
Song, Young Joo; Woo, Jong Hun; Shin, Jong Gye
2009-12-01
Today, many middle-sized shipbuilding companies in Korea are experiencing strong competition from shipbuilding companies in other nations. This competition is particularly affecting small- and middle-sized shipyards, rather than the major shipyards that have their own support systems and development capabilities. The acquisition of techniques that would enable maximization of production efficiency and minimization of the gap between planning and execution would increase the competitiveness of small- and middle-sized Korean shipyards. In this paper, research on a simulation-based support system for ship production management, which can be applied to the shipbuilding processes of middle-sized shipbuilding companies, is presented. The simulation research includes layout optimization, load balancing, work stage operation planning, block logistics, and integrated material management. Each item is integrated into a network system with a value chain that includes all shipbuilding processes.
Theofilou, Paraskevi; Togas, Constantinos; Vasilopoulou, Chrysoula; Minos, Christos; Zyga, Sofia; Tzitzikos, Giorgos
2015-04-13
There is clear evidence of a link between dialysis adequacy (as measured by urea kinetic modeling or the urea reduction ratio) and such important clinical outcomes as morbidity and mortality. Evidence regarding the relationship between dialysis adequacy and quality of life (QOL) outcomes, as well as adherence, is less clear. The present paper is a study protocol designed to answer the following research question: what is the impact of dialysis adequacy on QOL and adherence in a sample of hemodialysis patients? The final sample size will be around 100 patients undergoing hemodialysis. Each subject's QOL and adherence will be measured using the following instruments: i) the Missoula-VITAS Quality of Life Index-25; ii) the Multidimensional Scale of Perceived Social Support; and iii) the Simplified Medication Adherence Questionnaire. Dialysis adequacy is expected to be related to QOL and adherence scores.
Lead isotope data bank; 2,624 samples and analyses cited
Doe, Bruce R.
1976-01-01
The Lead Isotope Data Bank (LIDB) was initiated to facilitate plotting data. Therefore, the Bank reflects data most often used in plotting rather than comprising a comprehensive tabulation of lead isotope data. Until now, plotting was done using card decks processed by computer, with tapes plotted by a Gerber plotter and more recently a CRT in batch mode. Lack of a uniform format for sample identification was not a great impediment. With the increase in the size of the bank, hand sorting is becoming prohibitive, and plans are underway to put the bank into a uniform format on disk with a card backup so that it may be accessed by use of IRIS on the DEC 10 computer at the U.S.G.S. facility in Denver. Plots will be constructed on a CRT. Entry of the bank into the IRIS accessing program is scheduled for completion in FY 1976.
Steyer, G.D.; Sasser, C.E.; Visser, J.M.; Swenson, E.M.; Nyman, J.A.; Raynie, R.C.
2003-01-01
Wetland restoration efforts conducted in Louisiana under the Coastal Wetlands Planning, Protection and Restoration Act require monitoring the effectiveness of individual projects as well as monitoring the cumulative effects of all projects in restoring, creating, enhancing, and protecting the coastal landscape. The effectiveness of the traditional paired-reference monitoring approach in Louisiana has been limited because of difficulty in finding comparable reference sites. A multiple reference approach is proposed that uses aspects of hydrogeomorphic functional assessments and probabilistic sampling. This approach includes a suite of sites that encompass the range of ecological condition for each stratum, with projects placed on a continuum of conditions found for that stratum. Trajectories in reference sites through time are then compared with project trajectories through time. Plant community zonation complicated selection of indicators, strata, and sample size. The approach proposed could serve as a model for evaluating wetland ecosystems.
North Carolina's Basic Education Plan and the Arts.
ERIC Educational Resources Information Center
Page, Frances M.; Dyke, Lane
1990-01-01
Discusses the changes made by the North Carolina General Assembly in the state school system with the Basic Education Plan (BEP). The plan focuses on curriculum, class size, spiral curriculum, and competency examinations. Reports that the BEP views the arts as basic to education. (GG)
7 CFR 51.1406 - Sample for grade or size determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
United States Standards for Grades of Pecans in the Shell, Sample for Grade or Size Determination. § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farnham, Irene
2016-12-01
This report presents the analytical data for the 2014 fiscal year (FY) and calendar year (CY) (October 1, 2013, through December 31, 2014), and an evaluation of the data to ensure that the Sampling Plan's objectives are met. In addition to samples collected and analyzed for the Sampling Plan, some NNSS wells are monitored by NNSA/NFO to demonstrate compliance with State-issued water discharge permits; to demonstrate protection of groundwater from ongoing radiological waste disposal activities (compliance wells); and to demonstrate that the onsite drinking water supply is below SDWA maximum contaminant levels (MCLs) (public water system [PWS] wells). While not all sampled locations are required by the Sampling Plan, these samples are relevant to its objectives and are therefore presented herein for completeness. Special investigations that took place in 2014 and are relevant to the Sampling Plan are also presented. This is the first annual report released to support Sampling Plan implementation.
SAMPLING OF CONTAMINATED SITES
A critical aspect of characterizing the amount and species of contamination at a hazardous waste site is the sampling plan developed for that site. If the sampling plan is not thoroughly conceptualized before sampling takes place, then certain critical aspects of the limits o...
[Family planning in Benin: what future?].
Danlodji, R
1993-01-01
In Benin, family planning began in the late 1960s, but its activities were not clear or specific. It made small strides in private clinics until a family planning association was formed, later named the Beninese Association to Promote the Family (ABPF). Family planning promoters maintain that a reduction in births per couple is necessary for economic development in Africa. Family planning detractors think that a child is a fruit of God and that family planning impedes his or her coming into the world. ABPF has worked hard to promote Beninese families, but it is still not well known. Despite the association's efforts and those of many other institutions, contraceptive prevalence is low and the abortion rate and its risks remain high, namely, death, infertility, and contraction of various diseases. Thus, it is important to rethink family planning strategies. All intervening parties should coordinate activities to better reach urban and rural populations. Many rural inhabitants go to cities to escape the poverty and misery evoked by their family size and meager earnings, only to find unemployment in the cities. In order for family planning to have an effect in Benin, it is important to begin working with youth, and any family planning strategy must consider their aspirations. The youth are inclined to be more receptive to family planning than adults, who do not want to give up old habits. Yet contraceptive use among 14-20 year olds is low even though sexual activity is high. Since the youth want a small family size, a small plot of land, a car, and a successful life, it is important to give priority to jobs. We need to educate the youth so they can freely decide their family size. Socioeconomic reasons are the primary factor pushing people to accept family planning, followed by health reasons. Research is needed to learn why contraceptive prevalence is still low.
Lee, Paul H; Tse, Andy C Y
2017-05-01
There are limited data on the quality of reporting of information essential for replication of sample size calculations, and on the accuracy of those calculations. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed, the variation in reporting across study design, study characteristics, and journal impact factor, and the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median (inter-quartile range) percentage difference between the reported and recalculated sample sizes was 0.0% (IQR −4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers had provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and in trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
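For a two-sample comparison of means, the a priori calculation whose reporting is assessed above is usually the normal-approximation formula n = 2(z_{1-α/2} + z_{1-β})² / d² per group; a minimal sketch follows, with the standardized effect size d = 0.5 as an assumed example.

```python
import math
from scipy.stats import norm

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group n for a two-sided two-sample comparison of means with
    standardized effect size d: n = 2 * (z_{1-a/2} + z_{power})^2 / d^2."""
    z = norm.ppf(1.0 - alpha / 2.0) + norm.ppf(power)
    return math.ceil(2.0 * z ** 2 / d ** 2)

print(n_per_group(0.5))   # 63 per group at alpha = .05, power = .80
```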
Optimal planning and design of a renewable energy based supply system for microgrids
Hafez, Omar; Bhattacharya, Kankar
2012-03-03
This paper presents a technique for optimal planning and design of hybrid renewable energy systems for microgrid applications. The Distributed Energy Resources Customer Adoption Model (DER-CAM) is used to determine the optimal size and type of distributed energy resources (DERs) and their operating schedules for a sample utility distribution system. Using the DER-CAM results, the electrical performance of the distribution circuit is evaluated when the DERs selected by the DER-CAM optimization analyses are incorporated. Results of analyses regarding the economic benefits of utilizing the optimal locations identified for the selected DERs within the system are also presented. The actual Brookhaven National Laboratory (BNL) campus electrical network is used as an example to show the effectiveness of this approach. The results show that these technical and economic analyses of hybrid renewable energy systems are essential for the efficient utilization of renewable energy resources for microgrid applications.
Simulation of Thematic Mapper performance as a function of sensor scanning parameters
NASA Technical Reports Server (NTRS)
Johnson, R. H.; Shah, N. J.; Schmidt, N. F.
1975-01-01
The investigation and results of the Thematic Mapper Instrument Performance Study are described. The Thematic Mapper is the advanced multispectral scanner initially planned for the Earth Observation Satellite and now planned for LANDSAT D. The use of existing digital airborne scanner data obtained with the Modular Multispectral Scanner (M2S) at Bendix provided an opportunity to simulate the effects of variation of design parameters of the Thematic Mapper. Analysis and processing of this data on the Bendix Multispectral Data Analysis System were used to empirically determine categorization performance on data generated with variations of the sampling period and scan overlap parameters of the Thematic Mapper. The Bendix M2S data, with a 2.5 milliradian instantaneous field of view and a spatial resolution (pixel size) of 10-m from 13,000 ft altitude, allowed a direct simulation of Thematic Mapper data with a 30-m resolution. The flight data chosen were obtained on 30 June 1973 over agricultural test sites in Indiana.
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases.
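A minimal sketch of the re-estimation step being studied: at the interim look, the pooled interim observations are treated as one sample (treatment labels remain blinded), the variance is re-estimated, and the final per-arm sample size is recomputed. The function below is illustrative and assumes a two-sided z-approximation; it is not the paper's exact algorithm.

    # Hedged sketch of blinded sample size re-estimation with the simple
    # one-sample (lumped) variance estimator; names are ours.
    import math
    import numpy as np
    from scipy.stats import norm

    def reestimated_n_per_arm(interim_values, delta, alpha=0.05, power=0.80):
        """Recompute the per-arm sample size from the blinded pooled variance."""
        s2 = np.var(interim_values, ddof=1)  # variance ignoring treatment labels
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return math.ceil(2 * s2 * z ** 2 / delta ** 2)

The paper's contribution is characterizing the exact distribution of the final t-statistic after this data-dependent adjustment, which a naive fixed-sample analysis would ignore.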
Facilitators and obstacles in pre-hospital medical response to earthquakes: a qualitative study.
Djalali, Ahmadreza; Khankeh, Hamidreza; Öhlén, Gunnar; Castrén, Maaret; Kurland, Lisa
2011-05-16
Earthquakes are renowned as being amongst the most dangerous and destructive types of natural disasters. Iran, a developing country in Asia, is prone to earthquakes and is ranked as one of the most vulnerable countries in the world in this respect. The medical response in disasters is accompanied by managerial, logistic, technical, and medical challenges, as was the case in the Bam earthquake in Iran. Our objective was to explore the medical response to the Bam earthquake with specific emphasis on pre-hospital medical management during the first days. The study was performed in 2008 as an interview-based qualitative study using content analysis. We conducted nineteen interviews with experts and managers responsible for responding to the Bam earthquake, including pre-hospital emergency medical services, the Red Crescent, and Universities of Medical Sciences. Participants were selected using a purposeful sampling method. Sample size was determined by data saturation. The pre-hospital medical service was divided into three categories: triage, emergency medical care, and transportation; each category was in turn examined for facilitators and obstacles. The obstacles identified were the absence of a structured disaster plan, the absence of standardized medical teams, and a shortage of resources. The army and skilled medical volunteers were identified as facilitators. The most compelling, and at the same time most amenable, obstacle was the lack of a disaster management plan. It was evident that implementing a comprehensive plan would not only save lives but decrease suffering and enable effective use of the available resources at pre-hospital and hospital levels.
Extensive clonal spread and extreme longevity in saw palmetto, a foundation clonal plant.
Takahashi, Mizuki K; Horner, Liana M; Kubota, Toshiro; Keller, Nathan A; Abrahamson, Warren G
2011-09-01
The lack of effective tools has hampered our ability to assess the size, growth, and ages of clonal plants. With Serenoa repens (saw palmetto) as a model, we introduce a novel analytical framework that integrates DNA fingerprinting and mathematical modelling to simulate growth and estimate ages of clonal plants. We also demonstrate how such life-history information on clonal plants can inform management plans. Serenoa is an ecologically important foundation species in many Southeastern United States ecosystems; yet, many land managers consider Serenoa a troublesome invasive plant. Accordingly, management plans have been developed to reduce or eliminate Serenoa with little understanding of its life history. Using Amplified Fragment Length Polymorphisms, we genotyped 263 Serenoa and 134 Sabal etonia (a sympatric non-clonal palmetto) samples collected from a 20 × 20 m study plot in Florida scrub. Sabal samples were used to assign small field-unidentifiable palmettos to Serenoa or Sabal and also served as a negative control for clone detection. We then mathematically modelled clonal networks to estimate genet ages. Our results suggest that Serenoa predominantly propagates via vegetative sprouts and that 10,000-year-old genets may be common, while showing no evidence of clone formation by Sabal. The results of this and our previous studies suggest that: (i) Serenoa has been part of scrub associations for thousands of years, (ii) Serenoa invasions are unlikely, and (iii) once Serenoa is eliminated from local communities, its restoration will be difficult. Re-evaluation of the current management tools and plans is an urgent task.
Experimental validation of the van Herk margin formula for lung radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ecclestone, Gillian; Heath, Emily; Bissonnette, Jean-Pierre
2013-11-15
Purpose: To validate the van Herk margin formula for lung radiation therapy using realistic dose calculation algorithms and respiratory motion modeling. The robustness of the margin formula against variations in lesion size, peak-to-peak motion amplitude, tissue density, treatment technique, and plan conformity was assessed, along with the margin formula assumption of a homogeneous dose distribution with perfect plan conformity. Methods: 3DCRT and IMRT lung treatment plans were generated within the ORBIT treatment planning platform (RaySearch Laboratories, Sweden) on 4DCT datasets of virtual phantoms. Random and systematic respiratory motion induced errors were simulated using deformable registration and dose accumulation tools available within ORBIT for simulated cases of varying lesion sizes, peak-to-peak motion amplitudes, tissue densities, and plan conformities. A detailed comparison between the margin formula dose profile model, the planned dose profiles, and penumbra widths was also conducted to test the assumptions of the margin formula. Finally, a correction to account for imperfect plan conformity was tested, as well as a novel application of the margin formula that accounts for the patient-specific motion trajectory. Results: The van Herk margin formula ensured full clinical target volume coverage for all 3DCRT and IMRT plans of all conformities, with the exception of small lesions in soft tissue. No dosimetric trends with respect to plan technique or lesion size were observed for the systematic and random error simulations. However, accumulated plans showed that plan conformity decreased with increasing tumor motion amplitude. When comparing dose profiles assumed in the margin formula model to the treatment plans, discrepancies in the low dose regions were observed for the random and systematic error simulations. However, the margin formula respected, in all experiments, the 95% dose coverage required for planning target volume (PTV) margin derivation, as defined by the ICRU; thus, suitable PTV margins were estimated. The penumbra widths calculated in lung tissue for each plan were found to be very similar to the 6.4 mm value assumed by the margin formula model. The plan conformity correction yielded inconsistent results which were largely affected by image and dose grid resolution, while the trajectory-modified PTV plans yielded a dosimetric benefit over the standard internal target volume approach, with up to a 5% decrease in the V20 value. Conclusions: The margin formula was shown to be robust against variations in tumor size and motion, treatment technique, plan conformity, and low tissue density. This was validated by maintaining coverage of all of the derived PTVs by the 95% dose level, as required by the formal definition of the PTV. However, the assumption of perfect plan conformity in the margin formula derivation yields conservative margin estimation. Future modifications to the margin formula will require a correction for plan conformity. Plan conformity can also be improved by using the proposed trajectory-modified PTV planning approach. This proves especially beneficial for tumors with a large anterior-posterior component of respiratory motion.
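For reference, the widely quoted form of the van Herk recipe under validation combines systematic and random geometric uncertainties. A common statement of it, assuming the population-based coverage criterion (90% of patients receiving at least 95% of the prescribed dose) and a penumbra comparable to the 6.4 mm width quoted above, is:

    % CTV-to-PTV margin (van Herk et al. 2000): Sigma is the SD of
    % systematic (preparation) errors, sigma the SD of random
    % (execution) errors; the 0.7 factor depends on the penumbra width.
    \[ M_{\mathrm{PTV}} \approx 2.5\,\Sigma + 0.7\,\sigma \]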
Sample Size Determination for Regression Models Using Monte Carlo Methods in R
ERIC Educational Resources Information Center
Beaujean, A. Alexander
2014-01-01
A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
Development of a Searchable Database of Cryoablation Simulations for Use in Treatment Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boas, F. Edward, E-mail: boasf@mskcc.org; Srimathveeravalli, Govindarajan, E-mail: srimaths@mskcc.org; Durack, Jeremy C., E-mail: durackj@mskcc.org
Purpose: To create and validate a planning tool for multiple-probe cryoablation, using simulations of ice ball size and shape for various ablation probe configurations, ablation times, and types of tissue ablated. Materials and Methods: Ice ball size and shape were simulated using the Pennes bioheat equation. Five thousand six hundred and seventy different cryoablation procedures were simulated, using 1-6 cryoablation probes and 1-2 cm spacing between probes. The resulting ice ball was measured along three perpendicular axes and recorded in a database. Simulated ice ball sizes were compared to gel experiments (26 measurements) and clinical cryoablation cases (42 measurements). The clinical cryoablation measurements were obtained from a HIPAA-compliant retrospective review of kidney and liver cryoablation procedures between January 2015 and February 2016. Finally, we created a web-based cryoablation planning tool, which uses the cryoablation simulation database to look up the probe spacing and ablation time that produce the desired ice ball shape and dimensions. Results: Average absolute error between the simulated and experimentally measured ice balls was 1 mm in gel experiments and 4 mm in clinical cryoablation cases. The simulations accurately predicted the degree of synergy in multiple-probe ablations. The cryoablation simulation database covers a wide range of ice ball sizes and shapes up to 9.8 cm. Conclusion: Cryoablation simulations accurately predict the ice ball size in multiple-probe ablations. The cryoablation database can be used to plan ablation procedures: given the desired ice ball size and shape, it will find the number and type of probes, probe configuration and spacing, and ablation time required.
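The simulations rest on the Pennes bioheat equation; its standard form (the symbols below are the conventional ones, not notation taken from this paper) is:

    % rho, c: tissue density and specific heat; k: thermal conductivity;
    % omega_b: blood perfusion rate; rho_b, c_b: blood density and
    % specific heat; T_a: arterial blood temperature; Q_m: metabolic heat.
    \[ \rho c \frac{\partial T}{\partial t}
        = \nabla \cdot \left( k \nabla T \right)
        + \rho_b c_b \omega_b \left( T_a - T \right) + Q_m \]

The perfusion term acts as a heat source opposing the cryoprobe's heat sink, which is why ice ball growth saturates and why multi-probe synergy matters.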
Automatic CT simulation optimization for radiation therapy: A general strategy.
Li, Hua; Yu, Lifeng; Anastasio, Mark A; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M; Low, Daniel A; Mutic, Sasa
2014-03-01
In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements in lieu of duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. The optimal tube potentials for patient sizes of 38, 43, 48, 53, and 58 cm were 120, 140, 140, 140, and 140 kVp, respectively, and the corresponding minimum CTDIvol for achieving the optimal image quality index 4.4 were 9.8, 32.2, 100.9, 241.4, and 274.1 mGy, respectively. For patients with lateral sizes of 43-58 cm, 120-kVp scan protocols yielded up to 165% greater radiation dose relative to 140-kVp protocols, and 140-kVp protocols always yielded a greater image quality index compared to the same dose-level 120-kVp protocols. The trace of target and organ dosimetry coverage and the γ passing rates of seven IMRT dose distribution pairs indicated the feasibility of the proposed image quality index for the prediction strategy. A general strategy to predict the optimal CT simulation protocols in a flexible and quantitative way was developed that takes into account patient size, treatment planning task, and radiation dose. The experimental study indicated that the optimal CT simulation protocol and the corresponding radiation dose varied significantly for different patient sizes, contouring accuracy, and radiation treatment planning tasks.
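At its simplest, the selection step can be pictured as a lookup from patient size to scan protocol. The sketch below uses the optimal values reported in the phantom study (lateral size in cm mapped to tube potential and minimum CTDIvol for image quality index 4.4); the code structure is ours and only illustrative, as the actual clinical workflow is richer.

    # Hedged sketch: protocol selection as a nearest-size lookup, using
    # the values reported in the study. Illustrative, not the authors' code.
    OPTIMAL_PROTOCOLS = {  # lateral size (cm): (kVp, minimum CTDIvol in mGy)
        38: (120, 9.8),
        43: (140, 32.2),
        48: (140, 100.9),
        53: (140, 241.4),
        58: (140, 274.1),
    }

    def select_protocol(lateral_size_cm):
        """Pick the protocol tabulated for the nearest patient size."""
        nearest = min(OPTIMAL_PROTOCOLS, key=lambda s: abs(s - lateral_size_cm))
        return OPTIMAL_PROTOCOLS[nearest]

    print(select_protocol(45))  # -> (140, 32.2) for a 43 cm-class patient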
NASA Technical Reports Server (NTRS)
Wu, Gilbert; Santiago, Confesor
2017-01-01
RTCA Special Committee (SC) 228 has initiated a second phase for the development of minimum operational performance standards (MOPS) for UAS detect and avoid (DAA) systems. Technologies to enable UAS with less available Size, Weight, and Power (SWaP) will be considered. RTCA SC-228 has established sub-working groups, one of which is focused on aligning modeling and simulation activities across all participating committee members. This briefing will describe NASA's modeling and simulation plans for the development of performance standards for low cost, size, weight, and power (C-SWaP) surveillance systems that detect and track non-cooperative aircraft. The briefing will also describe the simulation platform NASA intends to use to support end-to-end verification and validation for these DAA systems. Lastly, the briefing will highlight the experiment plan for our first simulation study and provide a high-level description of our future flight test plans. This briefing does not contain any results or data.
Brito, José C.; Martínez-Freiría, Fernando; Sierra, Pablo; Sillero, Neftalí; Tarroso, Pedro
2011-01-01
Background: Relict populations of Crocodylus niloticus persist in Chad, Egypt, and Mauritania. Although crocodiles were widespread throughout the Sahara until the early 20th century, increased aridity combined with human persecution led to local extinction. Knowledge of distribution, occupied habitats, population size, and prey availability is scarce for most populations. This study evaluates the status of Saharan crocodiles and provides new data for Mauritania to assist conservation planning. Methodology/Principal Findings: A series of surveys in Mauritania detected crocodile presence in 78 localities dispersed across 10 river basins, and most tended to be isolated within river basins. Permanent gueltas and seasonal tâmoûrts were the most common occupied habitats. Crocodile encounters ranged from one to more than 20 individuals, but in most localities fewer than five crocodiles were observed. Larger numbers were observed after the rainy season and during night sampling. Crocodiles were found dead between water points along dry river-beds, suggesting the occurrence of dispersal. Conclusion/Significance: Research priorities in Chad and Egypt should focus on quantifying population size and the pressures exerted on habitats. The present study increased by 35% the number of known crocodile localities in Mauritania. Gueltas are crucial for the persistence of mountain populations. Oscillations in water availability throughout the year and the small dimensions of gueltas affect biological traits, including activity and body size. Studies are needed to understand adaptation traits of desert populations. Molecular analyses are needed to quantify genetic variability, population sub-structuring, and effective population size, and to detect the occurrence of gene flow. Monitoring is needed to detect demographic and genetic trends in completely isolated populations. Crocodiles are apparently vulnerable during dispersal events. Awareness campaigns focusing on the vulnerability and relict value of crocodiles should be implemented. Classification of Mauritanian mountains as protected areas should be prioritised.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willett, A; Gilmore, M; Rowbottom, C
2016-06-15
Purpose: The purpose of this work was to see if the EPID is a viable alternative to other QA devices for routine FFF QA and plan dose measurements. Methods: Sensitivity measurements were made to assess response to small changes in field size and beam steering. QA plans were created where field size was varied from baseline values (5-5.5 cm, 20-20.5 cm). Beam steering was adjusted by altering values in service mode (symmetry 0-3%). Plans were measured using the Varian portal imager (aS1200 DMI panel), QA3 (Sun Nuclear), and Starcheck Maxi (PTW). FFF beam parameters as stated in Fogliata et al were calculated. Constancy measurements were taken using all 3 QC devices to measure an MLC-defined 20×20 cm field. Two clinical SABR patient plans were measured on a Varian Edge linac, using the Portal Dosimetry module in ARIA, and results compared with analysis made using Delta4 (ScandiDos). Results: The EPID and the Starcheck performed better at detecting clinically relevant changes in field size, with the QA3 performing better when detecting similar changes in beam symmetry. Constancy measurements with the EPID and Starcheck were equivalent, with comparable standard deviations. Clinical plan measurements on the EPID compared well with Delta4 results at 3%/1mm. Conclusion: Our results show that for FFF QA measurements such as field size and symmetry, using the EPID is a viable alternative to other QA devices. The EPID could potentially be used for QC measurements with a focus on geometric accuracy, such as MLC positional QA, due to its high resolution compared to other QA devices (EPID 0.34 mm, Starcheck 3 mm, QA3 5 mm). Good agreement between Delta4 and portal dosimetry also indicated the EPID may be a suitable alternative for measurement of clinical plans.
SU-E-T-538: Evaluation of IMRT Dose Calculation Based on Pencil-Beam and AAA Algorithms.
Yuan, Y; Duan, J; Popple, R; Brezovich, I
2012-06-01
To evaluate the accuracy of dose calculation for intensity modulated radiation therapy (IMRT) based on the Pencil Beam (PB) and Analytical Anisotropic Algorithm (AAA) computation algorithms. IMRT plans of twelve patients with different treatment sites, including head/neck, lung, and pelvis, were investigated. For each patient, dose calculations with the PB and AAA algorithms using dose grid sizes of 0.5 mm, 0.25 mm, and 0.125 mm were compared with composite-beam ion chamber and film measurements in patient-specific QA. Discrepancies between calculation and measurement were evaluated by percentage error for ion chamber dose and by the γ > 1 failure rate in gamma analysis (3%/3mm) for film dosimetry. For 9 patients, ion chamber dose calculated with the AAA algorithm was closer to the ion chamber measurement than that calculated with the PB algorithm at a grid size of 2.5 mm, though all calculated ion chamber doses were within 3% of the measurements. For head/neck patients and other patients with large treatment volumes, the γ > 1 failure rate was significantly reduced (to within 5%) with AAA-based treatment planning, compared to generally more than 10% with PB-based treatment planning (grid size = 2.5 mm). For lung and brain cancer patients with medium and small treatment volumes, γ > 1 failure rates were typically within 5% for both AAA- and PB-based treatment planning (grid size = 2.5 mm). For both PB- and AAA-based treatment planning, improvements in dose calculation accuracy with finer dose grids were observed in film dosimetry for 11 patients and in ion chamber measurements for 3 patients. AAA-based treatment planning provides more accurate dose calculation for head/neck patients and other patients with large treatment volumes. Compared with film dosimetry, a γ > 1 failure rate within 5% can be achieved with AAA-based treatment planning.
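The gamma analysis referenced above scores each measured point by its closest match in a combined dose-distance space. A minimal 1-D sketch under the 3%/3 mm global criterion follows; function and variable names are ours, it assumes both profiles share an aligned, equally spaced grid, and clinical implementations work on 2-D film planes or 3-D grids.

    # Hedged 1-D sketch of gamma analysis (after Low et al.), 3%/3 mm,
    # global normalization. Brute force for clarity.
    import numpy as np

    def gamma_pass_rate(dose_ref, dose_eval, spacing_mm,
                        dose_tol=0.03, dist_tol_mm=3.0):
        """Fraction of reference points with gamma <= 1."""
        dose_ref = np.asarray(dose_ref, dtype=float)
        dose_eval = np.asarray(dose_eval, dtype=float)
        x = np.arange(len(dose_eval)) * spacing_mm
        d_crit = dose_tol * dose_ref.max()       # global 3% dose criterion
        gammas = np.empty(len(dose_ref))
        for i, d_r in enumerate(dose_ref):
            dist_term = ((x - x[i]) / dist_tol_mm) ** 2
            dose_term = ((dose_eval - d_r) / d_crit) ** 2
            gammas[i] = np.sqrt((dist_term + dose_term).min())
        return float(np.mean(gammas <= 1.0))

The γ > 1 failure rate reported in the abstract is simply one minus this pass rate.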
Sample size determination in group-sequential clinical trials with two co-primary endpoints
Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi
2014-01-01
We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention's benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behavior of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate.
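Power for co-primary endpoints depends on the correlation between the two test statistics: requiring both to be significant can sharply reduce power when the correlation is low. A hedged simulation sketch of the fixed-design (single-look) case, with illustrative parameters rather than the paper's, is:

    # Hedged sketch: simulated probability that BOTH endpoint z-statistics
    # clear the critical value, assuming bivariate-normal statistics with
    # correlation rho. Illustrative only; the paper handles interim looks.
    import numpy as np
    from scipy.stats import norm

    def joint_power(n_per_arm, effects, rho, alpha=0.025, sims=200_000):
        rng = np.random.default_rng(1)
        z_crit = norm.ppf(1 - alpha)
        mean = [e * np.sqrt(n_per_arm / 2.0) for e in effects]  # E[Z] per endpoint
        cov = [[1.0, rho], [rho, 1.0]]
        z = rng.multivariate_normal(mean, cov, size=sims)
        return float(np.mean((z[:, 0] > z_crit) & (z[:, 1] > z_crit)))

    # Example: standardized effects of 0.3 on both endpoints, 100 per arm.
    print(joint_power(100, effects=(0.3, 0.3), rho=0.5))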
40 CFR 230.94 - Planning and documentation.
Code of Federal Regulations, 2010 CFR
2010-07-01
...-sections), watershed size, design discharge, and riparian area plantings. (8) Maintenance plan. A... sustainability of the resource, including long-term financing mechanisms and the party responsible for long-term...
Menu Plans: Maximum Nutrition for Minimum Cost.
ERIC Educational Resources Information Center
Texas Child Care, 1995
1995-01-01
Suggests that menu planning is the key to getting maximum nutrition in day care meals and snacks for minimum cost. Explores United States Department of Agriculture food pyramid guidelines for children and tips for planning menus and grocery shopping. Includes suggested meal patterns and portion sizes. (HTH)
Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.
Luh, Wei-Ming; Guo, Jiin-Huarng
2007-05-01
Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required by the test. The formulas are applicable in cases of unequal variances, non-normality, and unequal sample sizes. Given the specified alpha and power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test achieves statistical power generally superior to that of the approximate t test. A numerical example is provided.
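For readers unfamiliar with it, Yuen's statistic replaces the sample means with trimmed means and the usual variances with winsorized variances. A compact sketch following Yuen (1974), with 20% trimming and function names of our own choosing, is:

    # Hedged sketch of Yuen's two-sample trimmed-mean test statistic.
    import numpy as np

    def yuen_statistic(x, y, trim=0.2):
        """Yuen's t statistic and Welch-type df for trimmed means."""
        def trimmed_parts(a):
            a = np.sort(np.asarray(a, dtype=float))
            n = len(a)
            g = int(trim * n)                    # points trimmed per tail
            h = n - 2 * g                        # effective sample size
            w = a.copy()
            w[:g] = a[g]                         # winsorize lower tail
            w[n - g:] = a[n - g - 1]             # winsorize upper tail
            d = (n - 1) * np.var(w, ddof=1) / (h * (h - 1))
            return a[g:n - g].mean(), d, h
        m1, d1, h1 = trimmed_parts(x)
        m2, d2, h2 = trimmed_parts(y)
        t = (m1 - m2) / np.sqrt(d1 + d2)
        df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
        return t, df

The paper's contribution is the sample size formulas for this test, which the sketch does not reproduce.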
Pan, Wenjing; Peña, Jorge
2017-10-01
This study examined how exposure to pictures of women with different body sizes (thin, obese) and physical attractiveness levels (attractive, unattractive), along with exposure to weight-related messages (pro-anorexia, anti-anorexia) embedded in a fashion website, affected female participants' planned behavior toward weight loss. Participants exposed to attractive model pictures showed higher intentions, attitudes, and subjective norms to lose weight compared with those exposed to unattractive models. Additionally, participants exposed to thin and attractive model pictures indicated the highest attitudes and self-efficacy to lose weight, whereas those exposed to thin and unattractive model pictures indicated the lowest. Furthermore, weight-related messages moderated the effect of model appearance (body size and attractiveness) on controllability of weight-loss activities. However, body size differences in the website pictures had no main effects on planned behavior toward weight loss. These effects are discussed in the light of social comparison mechanisms.
Stakeholder-focused evaluation of an online course for health care providers.
Dunet, Diane O; Reyes, Michele
2006-01-01
Different people who have a stake or interest in a training course (stakeholders) may have markedly different definitions of what constitutes "training success" and how they will use evaluation results. Stakeholders at multiple levels within and outside of the organization guided the development of an evaluation plan for a Web-based training course on hemochromatosis. Stakeholder interests and values were reflected in the type, level, and rigor of evaluation methods selected. Our mixed-method evaluation design emphasized small sample sizes and repeated measures. Limited resources for evaluation were leveraged by focusing on the data needs of key stakeholders, understanding how they wanted to use evaluation results, and collecting data needed for stakeholder decision making. Regular feedback to key stakeholders provided opportunities for updating the course evaluation plan to meet emerging needs for new or different information. Early and repeated involvement of stakeholders in the evaluation process also helped build support for the final product. Involving patient advocacy groups, managers, and representative course participants improved the course and enhanced product dissemination. For training courses, evaluation planning is an opportunity to tailor methods and data collection to meet the information needs of particular stakeholders. Rigorous evaluation research of every training course may be infeasible or unwarranted; however, course evaluations can be improved by good planning. A stakeholder-focused approach can build a picture of the results and impact of training while fostering the practical use of evaluation data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir
Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N₂ adsorption analysis, and BJH and BET tests. The overall results showed that: (1) the mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1-2.5 nm) compared to conventionally synthesized ZIF-8 samples; (2) an exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm; (3) applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size, and smaller particle size for ZIF-8 samples; (4) both an increase in temperature and a decrease in the MeIM/Zn²⁺ molar ratio increased ZIF-8 particle size, pore size, pore volume, crystallinity, and BET surface area for all investigated samples. Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • An increase in temperature had an increasing effect on textural properties of ZIF-8 samples. • A decrease in MeIM/Zn²⁺ had an increasing effect on textural properties of ZIF-8 samples.
NASA Astrophysics Data System (ADS)
Selva, Jacopo; Sandri, Laura; Costa, Antonio; Tonini, Roberto; Folch, Arnau; Macedonio, Giovanni
2014-05-01
The intrinsic uncertainty and variability associated with the size of the next eruption strongly affect short- to long-term tephra hazard assessment. Often, emergency plans are established accounting for the effects of one or a few representative scenarios (meant as a specific combination of eruptive size and vent position), selected with subjective criteria. On the other hand, probabilistic hazard assessments (PHA) consistently explore the natural variability of such scenarios. PHA for tephra dispersal needs the definition of eruptive scenarios (usually by grouping possible eruption sizes and vent positions in classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA results from combining simulations considering different volcanological and meteorological conditions through a weight given by their specific probability of occurrence. However, volcanological parameters, such as erupted mass, eruption column height and duration, bulk granulometry, and fraction of aggregates, typically encompass a wide range of values. Because of such variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. Here we propose a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with specific Probability Density Functions, and meteorological and volcanological inputs are chosen by using a stratified sampling method. This procedure avoids the bias introduced by selecting single representative scenarios and thus neglecting most of the intrinsic eruptive variability. When considering within-size-class variability, attention must be paid to appropriately weight events falling within the same size class. While a uniform weight for all the events belonging to a size class is the most straightforward idea, this implies a strong dependence on the thresholds dividing classes: under this choice, the largest event of a size class has a much larger weight than the smallest event of the subsequent size class. In order to overcome this problem, in this study we propose an innovative solution able to smoothly link the weight variability within each size class to the variability among the size classes through a common power law, and, simultaneously, respect the probability of different size classes conditional to the occurrence of an eruption. Embedding this procedure into the Bayesian Event Tree scheme enables tephra fall PHA, quantified through hazard curves and maps representing readable results applicable in planning risk mitigation actions, and the quantification of its epistemic uncertainties. As examples, we analyze long-term tephra fall PHA at Vesuvius and Campi Flegrei. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used for exploring different meteorological conditions. The results obtained clearly show that PHA accounting for the whole natural variability significantly differs from that based on representative scenarios, as is common practice in volcanic hazard assessment.
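One way to picture the proposed weighting is below: sampled events receive a power-law weight in eruption size, with a common exponent across classes, renormalized inside each class so that class probabilities conditional on an eruption are respected. This sketch is our illustrative reading of the scheme, not the paper's implementation; the function names, exponent, and inputs are all assumptions.

    # Hedged sketch: within-class event weights that decay as a common
    # power law in erupted mass while preserving each class's probability.
    import numpy as np

    def event_weights(masses, class_ids, class_probs, b=1.0):
        """Per-event weights; class_probs maps class id -> probability."""
        masses = np.asarray(masses, dtype=float)
        class_ids = np.asarray(class_ids)
        w = masses ** (-b)                          # common power-law decay
        out = np.empty_like(w)
        for k, p_k in class_probs.items():
            sel = class_ids == k
            out[sel] = p_k * w[sel] / w[sel].sum()  # respects class probability
        return out

Because the same exponent spans all classes, the weight assigned to the largest event of one class connects smoothly to that of the smallest event of the next, removing the threshold artifact described above.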
ERIC Educational Resources Information Center
Sahin, Alper; Weiss, David J.
2015-01-01
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
Effects of Pore Distributions on Ductility of Thin-Walled High Pressure Die-Cast Magnesium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Kyoo Sil; Li, Dongsheng; Sun, Xin
2013-06-01
In this paper, a microstructure-based three-dimensional (3D) finite element modeling method is adopted to investigate the effects of porosity on the ductility of thin-walled high pressure die-cast (HPDC) magnesium alloys. For this purpose, the cross-sections of AM60 casting samples are first examined using optical microscopy and X-ray tomography to obtain general information on the pore distribution features. The experimentally observed pore distribution features are then used to generate a series of synthetic microstructure-based 3D finite element models with different pore volume fractions and pore distribution features. Shear and ductile damage models are adopted in the finite element analyses to induce fracture by element removal, leading to the prediction of ductility. The results of this study show that ductility monotonically decreases as the pore volume fraction increases, and that the effect of the 'skin region' on ductility is noticeable under the condition of the same local pore volume fraction in the center region of the sample; its existence can be beneficial for the improvement of ductility. Further synthetic microstructure-based 3D finite element analyses are planned to investigate the effects of pore size and pore size distribution.
Overcoming the winner's curse: estimating penetrance parameters from case-control data.
Zollner, Sebastian; Pritchard, Jonathan K
2007-04-01
Genomewide association studies are now a widely used approach in the search for loci that affect complex traits. After detection of significant association, estimates of penetrance and allele-frequency parameters for the associated variant indicate the importance of that variant and facilitate the planning of replication studies. However, when these estimates are based on the original data used to detect the variant, the results are affected by an ascertainment bias known as the "winner's curse." The actual genetic effect is typically smaller than its estimate. This overestimation of the genetic effect may cause replication studies to fail because the necessary sample size is underestimated. Here, we present an approach that corrects for the ascertainment bias and generates an estimate of the frequency of a variant and its penetrance parameters. The method produces a point estimate and confidence region for the parameter estimates. We study the performance of this method using simulated data sets and show that it is possible to greatly reduce the bias in the parameter estimates, even when the original association study had low power. The uncertainty of the estimate decreases with increasing sample size, independent of the power of the original test for association. Finally, we show that application of the method to case-control data can improve the design of replication studies considerably.
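The bias is easy to reproduce: if an effect estimate is only reported when it crosses a stringent significance threshold, the reported estimates are drawn from the upper tail of the sampling distribution. A toy simulation (all numbers illustrative; this is not the authors' correction method) makes the point:

    # Toy illustration of the winner's curse: conditioning on genome-wide
    # significance inflates the estimated effect in an underpowered study.
    import numpy as np
    from scipy.stats import norm

    def winners_curse_demo(true_beta=0.15, se=0.10, alpha=5e-8, sims=2_000_000):
        rng = np.random.default_rng(7)
        z_crit = norm.ppf(1 - alpha / 2)            # ~5.45 for alpha = 5e-8
        est = rng.normal(true_beta, se, size=sims)  # sampling distribution
        sig = np.abs(est) / se > z_crit             # studies that "detect" it
        return est[sig].mean()

    # Prints roughly 0.57: almost four times the true effect of 0.15.
    print(winners_curse_demo())

A replication study sized to the inflated estimate would be badly underpowered, which is exactly the failure mode the paper's ascertainment-corrected estimator addresses.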
Shirotani, Mari; Kurokawa, Tatsuo; Chiba, Koji
2014-07-01
The number of worldwide and Asian multiregional clinical trials (MRCTs) submitted for Japanese New Drug Applications increased markedly between 2009 and 2013, with an increasing number performed for simultaneous submission in the USA, EU, and Japan. Asian studies accounted for 32% of MRCTs (14/44 studies) and had comparatively small sample sizes (<500 subjects). Moreover, the number of Japanese subjects in Asian studies was 2.1- to 13.4-fold larger than the sample size estimated using the method described in the Japanese MRCT guidelines, whereas the ratio for worldwide studies was 0.05- to 4.9-fold. Before the introduction of these guidelines, bridging or domestic clinical development strategies were used as the regional development strategy in accordance with ICH E5 guidelines. The results presented herein suggest that Asian studies were conducted when the drug had already been approved in the USA/EU, when phase 3 clinical trials were not planned in the USA/EU, when there was insufficient knowledge of ethnic differences in drug efficacy and safety, or when Caucasian data could not be extrapolated to the Japanese population. New strategies involving Asian studies that include the Japanese population could be adopted instead of a Japanese domestic development strategy.
Koneff, M.D.; Royle, J. Andrew; Forsell, D.J.; Wortham, J.S.; Boomer, G.S.; Perry, M.C.
2005-01-01
Survey design for wintering scoters (Melanitta sp.) and other sea ducks that occur in offshore waters is challenging because these species have large ranges, are subject to distributional shifts among years and within a season, and can occur in aggregations. Interest in winter sea duck population abundance surveys has grown in recent years. This interest stems from concern over the population status of some sea ducks, limitations of extant breeding waterfowl survey programs in North America and the logistical challenges and costs of conducting surveys in northern breeding regions, high winter area philopatry in some species and its potential conservation implications, and increasing concern over offshore development and other threats to sea duck wintering habitats. The efficiency and practicality of statistically rigorous monitoring strategies for mobile, aggregated wintering sea duck populations have not been sufficiently investigated. This study evaluated a 2-phase adaptive stratified strip transect sampling plan to estimate wintering population size of scoters, long-tailed ducks (Clangula hyemalis), and other sea ducks and to provide information on distribution. The sampling plan results in an optimal allocation of a fixed sampling effort among offshore strata in the U.S. mid-Atlantic coast region. Phase 1 transect selection probabilities were based on historic distribution and abundance data, while Phase 2 selection probabilities were based on observations made during Phase 1 flights. Distance sampling methods were used to estimate detection rates. Environmental variables thought to affect detection rates were recorded during the survey, and post-stratification and covariate modeling were investigated to reduce the effect of heterogeneity on detection estimation. We assessed cost-precision tradeoffs under a number of fixed-cost sampling scenarios using Monte Carlo simulation. We discuss advantages and limitations of this sampling design for estimating wintering sea duck abundance and mapping distribution, and suggest improvements for future surveys.
Sample size calculations for case-control studies
This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as a binary, ordinal, or continuous exposure. Sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
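For orientation, the simplest special case (no confounders, equal numbers of cases and controls, a binary exposure) reduces to the classical two-proportion formula. The sketch below shows that textbook calculation in Python; it is not the package's R interface, which we do not reproduce here.

    # Hedged sketch: classical two-proportion formula for the number of
    # cases (with an equal number of controls) needed to detect a
    # difference in exposure prevalence between cases and controls.
    import math
    from scipy.stats import norm

    def cases_needed(p_exp_cases, p_exp_controls, alpha=0.05, power=0.80):
        p1, p0 = p_exp_cases, p_exp_controls
        p_bar = (p1 + p0) / 2
        za = norm.ppf(1 - alpha / 2)
        zb = norm.ppf(power)
        num = (za * math.sqrt(2 * p_bar * (1 - p_bar))
               + zb * math.sqrt(p1 * (1 - p1) + p0 * (1 - p0))) ** 2
        return math.ceil(num / (p1 - p0) ** 2)

    # Example: exposure prevalence 30% in cases vs 20% in controls.
    print(cases_needed(0.30, 0.20))  # -> 294 cases (and 294 controls)

Adjusting for confounders in a multivariate logistic model, as the package does, inflates this number according to how strongly the exposure is correlated with the confounders.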
On the Impact Origin of Phobos and Deimos. I. Thermodynamic and Physical Aspects
NASA Astrophysics Data System (ADS)
Hyodo, Ryuki; Genda, Hidenori; Charnoz, Sébastien; Rosenblatt, Pascal
2017-08-01
Phobos and Deimos are the two small moons of Mars. Recent works have shown that they can accrete within an impact-generated disk. However, the detailed structure and initial thermodynamic properties of the disk are poorly understood. In this paper, we perform high-resolution SPH simulations of the Martian moon-forming giant impact that can also form the Borealis basin. This giant impact heats up the disk material (around ˜2000 K in temperature) with an entropy increase of ˜1500 J K-1 kg-1. Thus, the disk material should be mostly molten, though a tiny fraction of disk material (< 5 % ) would even experience vaporization. Typically, a piece of molten disk material is estimated to be meter sized owing to the fragmentation regulated by their shear velocity and surface tension during the impact process. The disk materials initially have highly eccentric orbits (e ˜ 0.6-0.9), and successive collisions between meter-sized fragments at high impact velocity (˜1-5 km s-1) can grind them down to ˜100 μm sized particles. On the other hand, a tiny amount of vaporized disk material condenses into ˜0.1 μm sized grains. Thus, the building blocks of the Martian moons are expected to be a mixture of these different sized particles from meter-sized down to ˜100 μm sized particles and ˜0.1 μm sized grains. Our simulations also suggest that the building blocks of Phobos and Deimos contain both impactor and Martian materials (at least 35%), most of which come from the Martian mantle (50-150 km in depth; at least 50%). Our results will give useful information for planning a future sample return mission to Martian moons, such as JAXA’s MMX (Martian Moons eXploration) mission.
Comet nucleus and asteroid sample return missions
NASA Technical Reports Server (NTRS)
Melton, Robert G.; Thompson, Roger C.; Starchville, Thomas F., Jr.; Adams, C.; Aldo, A.; Dobson, K.; Flotta, C.; Gagliardino, J.; Lear, M.; Mcmillan, C.
1992-01-01
During the 1991-92 academic year, the Pennsylvania State University developed three sample return missions: one to the nucleus of comet Wild 2, one to the asteroid Eros, and one to three asteroids located in the Main Belt. The primary objective of the comet nucleus sample return mission is to rendezvous with a short period comet and acquire a 10 kg sample for return to Earth. Upon rendezvous with the comet, a tethered coring and sampler drill will contact the surface and extract a two-meter core sample from the target site. Before the spacecraft returns to Earth, a monitoring penetrator containing scientific instruments will be deployed for gathering long-term data about the comet. A single-asteroid sample return mission to the asteroid 433 Eros (chosen for proximity and launch opportunities) will extract a sample from the asteroid surface for return to Earth. To limit overall mission cost, most of the mission design uses current technologies, except the sampler drill design. The multiple-asteroid sample return mission could best be characterized through its use of future technology, including an optical communications system, a nuclear power reactor, and a low-thrust propulsion system. A low-thrust trajectory optimization code (QuickTop 2) obtained from the NASA LeRC helped in planning the size of major subsystem components, as well as the trajectory between targets.
26 CFR 1.36B-0 - Table of contents.
Code of Federal Regulations, 2013 CFR
2013-04-01
... health plan. (d) Family and family size. (e) Household income. (1) In general. (2) Modified adjusted... credit payment. (k) Exchange. (l) Self-only coverage. (m) Family coverage. (n) Rating area. (o) Effective...) Applicable benchmark plan. (1) In general. (2) Family coverage. (3) Silver level plan not covering a taxpayer...
26 CFR 1.36B-0 - Table of contents.
Code of Federal Regulations, 2014 CFR
2014-04-01
... health plan. (d) Family and family size. (e) Household income. (1) In general. (2) Modified adjusted... credit payment. (k) Exchange. (l) Self-only coverage. (m) Family coverage. (n) Rating area. (o) Effective...) Applicable benchmark plan. (1) In general. (2) Family coverage. (3) Silver level plan not covering a taxpayer...
Practical Laboratory Planning.
ERIC Educational Resources Information Center
Ferguson, W. R.
This book is intended as a guide for people who are planning chemistry and physics research laboratories. It deals with the importance of effective communication between client and architect, the value of preliminary planning, and the role of the project officer. It also discusses the size and layout of individual laboratories, the design of…
Space Guidelines for Planning Educational Facilities. Planning for Education.
ERIC Educational Resources Information Center
Oklahoma State Dept. of Education, Oklahoma City.
In 1983 the Oklahoma Legislature adopted facility guidelines for the purpose of defining, organizing, and encouraging the planning of adequate environments for education. The guidelines contained in this booklet have been designed to allow for the requirements of all Oklahoma school districts regardless of size or educational program. The…
Steps in the open space planning process
Stephanie B. Kelly; Melissa M. Ryan
1995-01-01
This paper presents the steps involved in developing an open space plan. The steps are generic in that the methods may be applied to communities of various sizes. The intent is to provide a framework for developing an open space plan that meets Massachusetts requirements for funding of open space acquisition.
Sequential sampling: a novel method in farm animal welfare assessment.
Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J
2016-02-01
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
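The 'basic' scheme reduces to a simple two-stage rule. A hedged sketch follows; the pass/fail threshold and the stop-early margin are illustrative placeholders, since the paper calibrates its actual decision rules by simulation against the Welfare Quality scheme.

    # Hedged sketch of the 'basic' two-stage sequential scheme: score half
    # the Welfare Quality sample, stop early if the interim lameness
    # proportion is decisive, otherwise score the second half.
    import numpy as np

    def two_stage_classify(herd, wq_sample_size, threshold=0.20,
                           margin=0.05, seed=0):
        """Return ('pass' or 'fail', number of animals scored)."""
        rng = np.random.default_rng(seed)
        herd = np.asarray(herd)                  # 0/1 lameness per cow
        half = wq_sample_size // 2
        first = rng.choice(herd, size=half, replace=False)
        p1 = first.mean()
        if p1 <= threshold - margin:             # clearly below: stop early
            return "pass", half
        if p1 >= threshold + margin:             # clearly above: stop early
            return "fail", half
        second = rng.choice(herd, size=half, replace=False)
        p = np.concatenate([first, second]).mean()
        return ("pass" if p <= threshold else "fail"), 2 * half

Running such a rule over simulated herds of known prevalence, as the authors do 100 000 times per scheme, quantifies the accuracy lost and the animals saved relative to the fixed-size scheme.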
Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat
2018-03-01
To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method) or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^_L, ES^_U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ES^_U), n_U(ES^_L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test, as the patient numbers needed to provide 80% power at α = 0.05 to reject the null hypothesis H_0: ES = 0 versus the alternative hypotheses H_1: ES = ES^, ES = ES^_L, and ES = ES^_U. We aimed to provide point and interval estimates of projected sample sizes for future studies, reflecting the uncertainty in our study's ES^ values. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using the ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample sizes for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations.
Zhang, Hualin; Gopalakrishnan, Mahesh; Lee, Plato; Kang, Zhuang; Sathiaseelan, Vythialingam
2016-09-08
The purpose of this study was to evaluate the dosimetric impact of cylinder size in high-dose-rate (HDR) vaginal cuff brachytherapy (VCBT). Sample HDR VCBT plans for cylinders ranging from 2.5 to 4 cm in diameter, at 0.5 cm increments, were created and analyzed. The doses were prescribed either at 0.5 cm depth with 5.5 Gy for 4 fractions or at the cylinder surface with 8.8 Gy for 4 fractions, for various treatment lengths. A 0.5 cm shell volume called PTV_Eval was contoured for each plan and served as the target volume for dosimetric evaluation. The cumulative and differential dose volume histograms (c-DVH and d-DVH), mean doses (D-mean), and the doses covering 90% (D90), 10% (D10), and 5% (D5) of PTV_Eval were calculated. In the 0.5 cm depth regimen, the DVH curves were found to shift toward the lower dose zone when a larger cylinder was used, but in the surface regimen the DVH curves shifted toward the higher dose zone as the cylinder size increased. The D-means of both regimens were between 6.9 and 7.8 Gy and depended on the cylinder size but not on the treatment length. A 0.5 cm variation in diameter could result in a 4% change in D-mean. Average D90s were 5.7 Gy (ranging from 5.6 to 5.8 Gy) and 6.1 Gy (from 5.7 to 6.4 Gy), respectively, for the 0.5 cm depth and surface regimens. Average D10 and D5 were 9.2 and 11 Gy, respectively, for the 0.5 cm depth regimen, and 8.9 and 9.7 Gy, respectively, for the surface regimen. D-mean, D90, D10, and D5 for other prescription doses can be calculated from the lookup tables of this study. Results indicated that cylinder size has a moderate dosimetric impact and that both regimens are comparable in dosimetric quality.
Aronoff, Justin M; Yoon, Yang-soo; Soli, Sigfrid D
2010-06-01
Stratified sampling plans can increase the accuracy and facilitate the interpretation of a dataset characterizing a large population. However, such sampling plans have found minimal use in hearing aid (HA) research, in part because of a paucity of quantitative data on the characteristics of HA users. The goal of this study was to devise a quantitatively derived stratified sampling plan for HA research, so that such studies will be more representative and generalizable, and the results obtained using this method are more easily reinterpreted as the population changes. Pure-tone average (PTA) and age information were collected for 84,200 HAs acquired in 2006 and 2007. The distribution of PTA and age was quantified for each HA type and for a composite of all HA users. Based on their respective distributions, PTA and age were each divided into three groups, the combination of which defined the stratification plan. The most populous PTA and age group was also subdivided, allowing greater homogeneity within strata. Finally, the percentage of users in each stratum was calculated. This article provides a stratified sampling plan for HA research, based on a quantitative analysis of the distribution of PTA and age for HA users. Adopting such a sampling plan will make HA research results more representative and generalizable. In addition, data acquired using such plans can be reinterpreted as the HA population changes.
Effects of spot parameters in pencil beam scanning treatment planning.
Kraan, Aafke Christine; Depauw, Nicolas; Clasie, Ben; Giunta, Marina; Madden, Tom; Kooy, Hanne M
2018-01-01
Spot size σ (in air at isocenter), interspot spacing d, and spot charge q influence dose delivery efficiency and plan quality in Intensity Modulated Proton Therapy (IMPT) treatment planning. The choice and range of these parameters vary among manufacturers. The goal of this work is to demonstrate the influence of the spot parameters on dose quality and delivery in IMPT treatment plans, to show their interdependence, and to make practitioners aware of the spot parameter values at a given facility. Our study could serve as a guideline for the trade-off between treatment quality and time in existing PBS centers and in future systems. We created plans for seven patients and a phantom, with different tumor sites and volumes, and compared the effect of small, medium, and large spot widths (σ = 2.5, 5, and 10 mm) and interspot distances (1σ, 1.5σ, and 1.75σ) on dose, spot charge, and treatment time. Moreover, we quantified how postplanning charge threshold cuts affect plan quality and the total number of spots to deliver, for different spot widths and interspot distances. We show the effect of a minimum charge (or MU) cutoff value for a given proton delivery system. Spot size had a strong influence on dose: larger spots resulted in more protons delivered outside the target region. We observed dose differences of 2-13 Gy (RBE) between 2.5 mm and 10 mm spots, with the extra dose arising from the dose penumbra around the target region. Interspot distance had little influence on dose quality for our patient group. Both parameters strongly influence spot charge in the plans and thus the possible impact of postplanning charge threshold cuts. If such charge thresholds are not included in the treatment planning system (TPS), it is important that the practitioner validate that a given combination of lower charge threshold, interspot spacing, and spot size does not result in plan degradation. Low average spot charge occurs for small spots, small interspot distances, many beam directions, and low fractional dose values. The choice of spot parameter values is a trade-off between accelerator and beam line design, plan quality, and treatment efficiency. We recommend the use of small spot sizes for better organ-at-risk sparing and lateral interspot distances of 1.5σ to avoid long treatment times. We note that plan quality is influenced by the charge cutoff. Our results show that the charge cutoff can be sufficiently large (i.e., 10^6 protons) to accommodate limitations on beam delivery systems. It is therefore not necessary per se to include the charge cutoff in the treatment planning optimization; Pareto navigation (e.g., as practiced at our institution) is thus not excluded, and optimal plans can be obtained without bias from the charge cutoff. We recommend that the impact of a minimum charge cut be carefully verified for the spot sizes and spot distances applied, or that it be accommodated in the TPS. © 2017 American Association of Physicists in Medicine.
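A rough way to see why small spots and tight spacing depress average spot charge is to count the spots needed to tile one energy layer. The sketch below assumes a square grid and toy numbers, not the paper's TPS output; in a real optimized plan the charge distribution is broad, so a lower mean pushes more of the low-weight tail under a minimum-charge cutoff such as 10^6 protons.

```python
def mean_protons_per_spot(total_protons, field_area_mm2, sigma_mm, k):
    """Average charge per spot on a square grid with lateral spacing d = k * sigma."""
    d = k * sigma_mm
    n_spots = field_area_mm2 / d**2   # spots needed to tile the layer
    return total_protons / n_spots

total, area = 1e11, 100.0 * 100.0     # toy layer: 1e11 protons over a 10 x 10 cm field
for sigma in (2.5, 5.0, 10.0):
    for k in (1.0, 1.5, 1.75):
        q = mean_protons_per_spot(total, area, sigma, k)
        print(f"sigma={sigma:4.1f} mm, d={k:.2f}*sigma: {q:.2e} protons/spot")
```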
ERIC Educational Resources Information Center
California State Dept. of Education, Sacramento. Office of Curriculum Services.
The natural science curriculum guide for gifted primary students includes a sample teaching-learning plan for an ecology unit and eight sample lesson plans. Chapter One provides an overview of the unit, a review of behavioral objectives, and a list of concepts and generalizations. The second chapter cites a teaching-learning plan dealing with such…
Code of Federal Regulations, 2014 CFR
2014-07-01
40 CFR Part 86 (Control of Emissions from New and In-Use Highway Vehicles and Engines), Appendix X: Sampling Plans for Selective Enforcement Auditing of Heavy-Duty Engines and Light-Duty Trucks.
12 CFR 563e.27 - Strategic plan.
Code of Federal Regulations, 2011 CFR
2011-01-01
... area covered by the plan, particularly the needs of low- and moderate-income geographies and low- and moderate-income individuals ... different geographies, businesses and farms of different sizes, and individuals of different income levels ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tyrrell, Evan; Denny, Angelita
Fifty-two groundwater samples and one surface water sample were collected at the Monument Valley, Arizona, Processing Site to monitor groundwater contaminants for evaluating the effectiveness of the proposed compliance strategy as specified in the 1999 Final Site Observational Work Plan for the UMTRA Project Site at Monument Valley, Arizona. Sampling and analyses were conducted as specified in the Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated, http://energy.gov/lm/downloads/sampling-and-analysis-plan-us-department-energy-office-legacy-management-sites). Samples were collected for metals, anions, nitrate + nitrite as N, and ammonia as N analyses at all locations.
Helium Ion Beam Microscopy for Copper Grain Identification in BEOL Structures
NASA Astrophysics Data System (ADS)
van den Boom, Ruud J. J.; Parvaneh, Hamed; Voci, Dave; Huynh, Chuong; Stern, Lewis; Dunn, Kathleen A.; Lifshin, Eric
2009-09-01
Grain size determination in advanced metallization structures requires a technique with resolution of ~2 nm, a high signal-to-noise ratio, and high orientation-dependent contrast for unambiguous identification of grain boundaries. Ideally, such a technique would also be capable of high throughput and rapid time-to-knowledge. The Helium Ion Microscope (HIM) offers one possibility for achieving these aims in a single platform. This article compares the performance of the HIM with Focused Ion Beam, Scanning Electron, and Transmission Electron Microscopes, in terms of achievable image resolution and contrast, using plan-view and cross-sectional imaging of electroplated samples. Although the HIM is capable of a sub-nanometer beam diameter, the low signal-to-noise ratio in the images necessitates signal averaging, which degrades the measured image resolution to 6-8 nm. Strategies for improving S/N are discussed in light of the trade-offs among beam current and probe size, accelerating voltage, and dwell time.
Hwang, Won Ju; Park, Yunhee
2015-12-01
The purpose of this study was to investigate individual- and organizational-level cardiovascular disease (CVD) risk factors associated with CVD risk in Korean blue-collar workers employed in small-sized companies. Self-report questionnaires were administered and blood samples for lipid and glucose assays were collected from 492 workers in 31 small-sized companies in Korea. Multilevel modeling was conducted to estimate the effects of related factors at the individual and organizational levels. Multilevel regression analysis showed that workers in workplaces with a cafeteria had 1.81 times higher CVD risk after adjusting for factors at the individual level (p = .022). The explanatory power of variables related to organizational-level variance in CVD risk was 17.1%. The results of this study indicate that differences in CVD risk were related to organizational factors. It is necessary to consider not only individual factors but also organizational factors when planning a CVD risk reduction program. The risk associated with having a cafeteria in the workplace can be reduced by improving the CVD-related risk environment; therefore, an organizational-level intervention approach should be available to reduce the CVD risk of workers in small-sized companies in Korea.
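The two-level structure (workers nested in companies) maps onto a mixed-effects model. The sketch below uses synthetic data and a continuous risk score with statsmodels' MixedLM to keep it simple; the study reports odds ratios, so a mixed-effects logistic model would be the closer analogue, and every variable name and coefficient here is an assumption.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_comp, n_per = 31, 16                                     # companies, workers each
company = np.repeat(np.arange(n_comp), n_per)
cafeteria = np.repeat(rng.integers(0, 2, n_comp), n_per)   # company-level factor
age = rng.normal(45, 8, n_comp * n_per)                    # individual-level factor
u = np.repeat(rng.normal(0, 1, n_comp), n_per)             # company random effect
risk = 0.05 * age + 0.8 * cafeteria + u + rng.normal(0, 1, n_comp * n_per)

df = pd.DataFrame({"risk": risk, "age": age,
                   "cafeteria": cafeteria, "company": company})
model = smf.mixedlm("risk ~ age + cafeteria", df, groups=df["company"])
print(model.fit().summary())   # fixed effects plus company-level variance
```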
NASA Astrophysics Data System (ADS)
Royalty, T. M.; Phillips, B.; Dawson, K. W.; Reed, R. E.; Meskhidze, N.
2016-12-01
We report aerosol number size distribution and hygroscopicity data collected over the Pacific Ocean near the Hawaii Ocean Time-series (HOT) Station ALOHA (centered near 22°N, 158°W). From June 25 to July 3, 2016, our hygroscopicity tandem differential mobility analyzer (HTDMA)/scanning mobility particle sizer (SMPS) system was deployed aboard the NOAA Ship Hi'ialakai, which participated in mooring operations associated with the Woods Hole Oceanographic Institution WHOTS project. The ambient aerosol data were collected during the ship's planned operations. The inlet was located at the bow of the ship, and the air samples were drawn (using 3/8 inch stainless steel tubing) into a dry, air-conditioned lab. The region north of Oahu was very clean, with total particle numbers of approximately 200 cm⁻³, occasionally dropping below 100 cm⁻³. We compare our particle number size distribution and hygroscopicity data with previously reported estimates. Our measurements contribute to process-level understanding of the role of sea spray aerosol in the marine boundary layer cloud condensation nuclei (CCN) budget and provide crucial information to the community interested in studying and projecting climate change using Earth System Models.
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys, using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations to the underlying power curves for detecting a reduction in SCR relative to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by whether the change point is assumed known or unknown. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but these invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
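The simulation idea can be sketched under assumed model forms: a reverse catalytic model with seroreversion rate ρ, a change point t treated as known, and a likelihood ratio test of a stable SCR against an SCR drop. All parameter values below are illustrative; the paper's calculator additionally fits logistic approximations to the resulting power curves.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

RHO = 0.05  # assumed seroreversion rate (per year)

def prev_stable(age, lam):
    return lam / (lam + RHO) * (1 - np.exp(-(lam + RHO) * age))

def prev_change(age, lam1, lam2, t):
    """Seroprevalence when the SCR dropped from lam1 to lam2, t years before sampling."""
    p_at_change = prev_stable(np.maximum(age - t, 0), lam1)
    decay = np.exp(-(lam2 + RHO) * t)
    old = p_at_change * decay + lam2 / (lam2 + RHO) * (1 - decay)
    return np.where(age <= t, prev_stable(age, lam2), old)

def nll(p, y):
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def power(n, lam1=0.1, lam2=0.02, t=10, sims=200, seed=3):
    rng, hits = np.random.default_rng(seed), 0
    for _ in range(sims):
        age = rng.uniform(1, 60, n)
        y = rng.random(n) < prev_change(age, lam1, lam2, t)
        f0 = minimize(lambda th: nll(prev_stable(age, th[0]), y),
                      [0.05], bounds=[(1e-6, 5)])
        f1 = minimize(lambda th: nll(prev_change(age, th[0], th[1], t), y),
                      [0.1, 0.02], bounds=[(1e-6, 5)] * 2)
        lrt = 2 * (f0.fun - f1.fun)       # one extra parameter under H1
        hits += chi2.sf(lrt, df=1) < 0.05
    return hits / sims

print(power(250))  # empirical power for a cross-sectional survey of 250 people
```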
Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology
Vavrek, Matthew J.
2015-01-01
Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies, and any resulting effects on allometric analyses, have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationships, and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often meaning that allometric relationships go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared with living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather, results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
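The subsampling logic translates directly into a small simulation. The sketch below uses synthetic log-log data rather than the Alligator measurements: allometry is modelled as a regression slope different from 1 (isometry), and the detection rate across random subsamples shows Type II error inflating as n shrinks.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_full = 200
log_size = rng.uniform(1.0, 3.0, n_full)                          # log body size
log_trait = 0.85 * log_size + 0.3 + rng.normal(0, 0.08, n_full)   # negative allometry

def detects_allometry(x, y, alpha=0.05):
    res = stats.linregress(x, y)
    t = (res.slope - 1.0) / res.stderr       # test H0: slope = 1 (isometry)
    p = 2 * stats.t.sf(abs(t), df=len(x) - 2)
    return p < alpha

for n in (10, 20, 40, 80, 160):
    subsamples = [rng.choice(n_full, n, replace=False) for _ in range(500)]
    rate = np.mean([detects_allometry(log_size[i], log_trait[i]) for i in subsamples])
    print(f"n={n:3d}: allometry detected in {rate:.0%} of subsamples")
```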
DOT National Transportation Integrated Search
2004-05-01
For estimating the system total unlinked passenger trips and passenger miles of a fixed-route bus system for the National Transit Database (NTD), the FTA-approved sampling plans may either over-sample or fail to yield the FTA's required confidence and p...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, William H.
2017-09-15
The objectives of this presentation are to describe the method that the IAEA uses to determine a sampling plan for nuclear material measurements; describe the terms detection probability and significant quantity; list the three nuclear material measurement types; describe the sampling method applied to an item facility; and describe multiple-method sampling.
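For context, a common attribute-sampling formula in the safeguards literature (stated here as an assumption about the kind of method summarized, not a quote from the presentation) gives the sample size needed to detect at least one falsified item with a target detection probability, when at least M of N items would have to be falsified to divert a significant quantity.

```python
import math

def safeguards_sample_size(N, M, detection_probability=0.95):
    """Items to sample so that P(>= 1 of the M defects is seen) >= the target."""
    return math.ceil(N * (1 - (1 - detection_probability) ** (1 / M)))

# Example: a stratum of 300 items, of which at least 20 would need to be
# falsified to divert a significant quantity:
print(safeguards_sample_size(N=300, M=20, detection_probability=0.95))  # 42
```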
One portion size of foods frequently consumed by Korean adults
Choi, Mi-Kyeong; Hyun, Wha-Jin; Lee, Sim-Yeol; Park, Hong-Ju; Kim, Se-Na
2010-01-01
This study aimed to define one portion size for food items frequently consumed, for convenient use by Koreans in food selection, diet planning, and nutritional evaluation. We analyzed the original data on 5,436 persons (60.87%) aged 20-64 years among the 8,930 persons in the 2005 NHANES, and selected food items consumed with an intake frequency of 30 or higher among the 500 most frequently consumed food items. A total of 374 varieties of food items of regular use were selected, and the portion size of each item was set on the basis of the median (50th percentile) of the amount consumed at a single intake by a single person. In cereals, the portion size of well polished rice was 80 g. In meats, the portion size of Korean beef cattle was 25 g. Among vegetable items, the portion size of Baechukimchi was 40 g. The portion sizes of the food items of regular use set in this study will be conveniently and effectively used by general consumers in selecting food items for a nutritionally balanced diet. In addition, they will serve as basic data for setting serving sizes in meal planning. PMID:20198213
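The median rule is simple to apply to intake records. A tiny sketch, with synthetic per-occasion amounts for two of the foods named above, chosen so the medians match the quoted portion sizes:

```python
import pandas as pd

intakes = pd.DataFrame({
    "food":  ["well polished rice"] * 5 + ["Baechukimchi"] * 4,
    "grams": [70, 80, 80, 90, 120, 30, 40, 40, 55],  # amount per eating occasion
})

# One portion size = median (50th percentile) of per-occasion intake per food.
portion = intakes.groupby("food")["grams"].median()
print(portion)  # well polished rice -> 80 g, Baechukimchi -> 40 g
```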
The design, physical properties and clinical utility of an iris collimator for robotic radiosurgery
NASA Astrophysics Data System (ADS)
Echner, G. G.; Kilby, W.; Lee, M.; Earnst, E.; Sayeh, S.; Schlaefer, A.; Rhein, B.; Dooley, J. R.; Lang, C.; Blanck, O.; Lessard, E.; Maurer, C. R., Jr.; Schlegel, W.
2009-09-01
Robotic radiosurgery using more than one circular collimator can improve treatment plan quality and reduce total monitor units (MU). The rationale for an iris collimator that allows the field size to be varied during treatment delivery is to enable the benefits of multiple-field-size treatments to be realized with no increase in treatment time due to collimator exchange or multiple traversals of the robotic manipulator, by allowing each beam to be delivered with any desired field size during a single traversal. This paper describes the Iris™ variable aperture collimator (Accuray Incorporated, Sunnyvale, CA, USA), which incorporates 12 tungsten-copper alloy segments in two banks of six. The banks are rotated by 30° with respect to each other, which limits the radiation leakage between the collimator segments and produces a 12-sided polygonal treatment beam. The beam is approximately circular, with a root-mean-square (rms) deviation in the 50% dose radius of <0.8% (corresponding to <0.25 mm at the 60 mm field size) and an rms variation in the 20-80% penumbra width of about 0.1 mm at the 5 mm field size, increasing to about 0.5 mm at 60 mm. The maximum measured collimator leakage dose rate was 0.07%. A commissioning method is described by which the average dose profile can be obtained from four profile measurements at each depth, based on the periodicity of the isodose line variations with azimuthal angle. The penumbra of averaged profiles increased with field size and was typically 0.2-0.6 mm larger than that of an equivalent fixed circular collimator. The aperture reproducibility is ≤0.1 mm at the lower bank, diverging to ≤0.2 mm at a nominal treatment distance of 800 mm from the beam focus. Output factors (OFs) and tissue-phantom-ratio data are identical to those used for fixed collimators, except that the OFs for the two smallest field sizes (5 and 7.5 mm) are considerably lower for the Iris Collimator. If average collimator profiles are used, the assumption of circular symmetry results in dose calculation errors that are <1 mm or <1% for single beams across the full range of field sizes; errors for multiple non-coplanar beam treatment plans are expected to be smaller. Treatment plans were generated for 19 cases using the Iris Collimator (12 field sizes) and also using one and three fixed collimators. The results of the treatment planning study demonstrate that the use of multiple field sizes achieves several plan quality improvements for a large proportion of the cases studied, including reduced total MU, increased target volume coverage, and improved conformality and homogeneity compared with using a single field size. The Iris Collimator offers the potential to greatly increase the clinical application of multiple field sizes for robotic radiosurgery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Surovchak, Scott; Miller, Michele
The 2008 Long-Term Surveillance Plan [LTSP] for the Decommissioned Hallam Nuclear Power Facility, Hallam, Nebraska (http://www.lm.doe.gov/Hallam/Documents.aspx) requires groundwater monitoring once every 2 years. Seventeen monitoring wells at the Hallam site were sampled during this event as specified in the plan. Planned monitoring locations are shown in Attachment 1, Sampling and Analysis Work Order. Water levels were measured at all sampled wells and at two additional wells (6A and 6B) prior to the start of sampling. Additionally, water levels of each sampled well were measured at the beginning of sampling. See Attachment 2, Trip Report, for additional details. Sampling and analysis were conducted as specified in Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated, http://energy.gov/lm/downloads/sampling-and-analysis-plan-us-department-energy-office-legacy-management-sites). Gross alpha and gross beta are the only parameters that were detected at statistically significant concentrations. Time/concentration graphs of the gross alpha and gross beta data are included in Attachment 3, Data Presentation. The gross alpha and gross beta activity concentrations observed are consistent with values previously observed and are attributed to naturally occurring radionuclides (e.g., uranium and uranium decay chain products) in the groundwater.
Schach Von Wittenau, Alexis E.
2003-01-01
A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.
Pandya, Samta P
2018-05-05
This article is based on a longitudinal study of Indian Americans devoted to a guru tradition, aiming to explore how faith contributes to their mental well-being. The respondent sample size at phase 1 (2003-2004) was 1,872 and at phase 2 (2013-2014) was 1,764. Two scales were used to measure faith maturity and well-being. Results showed that phase 2 well-being scores of the devotees were higher, influenced by faith maturity and engagement regularity, thereby corroborating the faith-religiosity-well-being link, which was further reinforced by the structural equation model. Faith emerges as a critical variable in working with this cohort and in planning interventions towards promoting their well-being.
Improving the accuracy of livestock distribution estimates through spatial interpolation.
Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy
2012-11-01
Animal distribution maps serve many purposes, such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps are highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design, and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g., a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g., a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared with one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because under- and over-estimates average out (e.g., when aggregating cattle number estimates from subcounty to district level, P < 0.009, based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy improves remarkably. This applies especially to low sample sizes and spatially evenly distributed samples (e.g., P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations apply at a lower spatial scale should be further investigated.
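The interpolation step can be illustrated with inverse-distance weighting, one plausible choice; the implementation below is an assumption for illustration, with random coordinates and Poisson counts standing in for parish survey data.

```python
import numpy as np

def idw(xy_known, values, xy_missing, power=2.0):
    """Fill unsampled locations from sampled ones by inverse-distance weights."""
    d = np.linalg.norm(xy_missing[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(5)
xy_sampled = rng.uniform(0, 100, (170, 2))        # sampled parish centroids
cattle = rng.poisson(800, 170).astype(float)      # observed cattle counts
xy_unsampled = rng.uniform(0, 100, (30, 2))       # parishes with no survey

filled = idw(xy_sampled, cattle, xy_unsampled)
print(filled[:5].round(1))
# Area-weighted aggregation to district level would then combine observed
# counts where available with these interpolated estimates elsewhere.
```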
Biostatistics Series Module 5: Determining Sample Size
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever be its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample, and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist and is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of the power of the study, or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. A smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles have long been known, historically, sample size determination has been difficult because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many programs can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
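A worked example of these determinants using statsmodels (illustrative numbers): a 10-unit clinically important difference with SD 20 gives a standardized effect size of 0.5, and the conventional α = 0.05 and 80% power then fix the per-group n for a two-sample t-test.

```python
from math import ceil
from statsmodels.stats.power import TTestIndPower

d = 10 / 20   # effect size: difference sought / SD = 0.5
n = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.80)
print(ceil(n))  # per-group n; smaller alpha or higher power would raise it
```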
Sample size and power for cost-effectiveness analysis (part 1).
Glick, Henry A
2011-03-01
Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed, and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables, such as changes in blood pressure or weight, are described. The types of sample size and power tables commonly calculated for cost-effectiveness analysis are also described, and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the way in which the data for these calculations may be derived is discussed.
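One standard formulation in this literature works on the net monetary benefit scale. The sketch below computes a per-arm sample size from assumed (purely illustrative) incremental effects and costs, their standard deviations and correlation, and a willingness-to-pay threshold λ.

```python
from math import ceil
from scipy.stats import norm

def cea_n_per_arm(dE, dC, lam, sd_e, sd_c, rho, alpha=0.05, power=0.80):
    """n per arm to detect a positive incremental net monetary benefit."""
    nmb = lam * dE - dC                                     # incremental net benefit
    var = lam**2 * sd_e**2 + sd_c**2 - 2 * lam * rho * sd_e * sd_c
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * z**2 * var / nmb**2)

# 0.1 QALY gain, $2,000 extra cost, $50,000/QALY threshold (all illustrative):
print(cea_n_per_arm(dE=0.1, dC=2000, lam=50_000, sd_e=0.4, sd_c=6000, rho=0.1))
```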
Estimation of sample size and testing power (Part 4).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-01-01
Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests under a one-factor, two-level design, including sample size estimation formulas and their realization through the formulas and the POWER procedure of SAS software, for both quantitative and qualitative data. In addition, this article presents examples for analysis, which should help researchers implement the repetition principle during the research design phase.
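As a flavour of the qualitative-data case, the sketch below implements the usual normal-approximation formula for comparing two independent proportions, the kind of calculation the SAS POWER procedure automates; the proportions are illustrative.

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """n per group for comparing two independent proportions (normal approximation)."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p1 + p2) / 2
    num = za * sqrt(2 * pbar * (1 - pbar)) + zb * sqrt(p1*(1-p1) + p2*(1-p2))
    return ceil((num / (p1 - p2)) ** 2)

print(n_two_proportions(0.60, 0.75))  # e.g. response rates of 60% vs 75%
```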
Hogg, Oliver T; Huvenne, Veerle A I; Griffiths, Huw J; Linse, Katrin
2018-06-01
In recent years, very large marine protected areas (VLMPAs) have become the dominant form of spatial protection in the marine environment. Whilst seen as a holistic and geopolitically achievable approach to conservation, there is currently a mismatch between the size of VLMPAs and the data available to underpin their establishment and inform their management. Habitat mapping has increasingly been adopted as a means of addressing the paucity of biological data, through the use of environmental proxies to estimate species and community distributions. Small-scale studies have demonstrated environmental-biological links in marine systems. Such links, however, are rarely demonstrated across larger spatial scales in the benthic environment. As such, the utility of habitat mapping as an effective approach to the ecosystem-based management of VLMPAs remains, thus far, largely undetermined. The aim of this study was to assess the ecological relevance of broad-scale landscape mapping. Specifically, we tested the relationship between broad-scale marine landscapes and the structure of their benthic faunal communities. We focussed our work on the sub-Antarctic island of South Georgia, site of one of the largest MPAs in the world. We demonstrate a statistically significant relationship between environmentally derived landscape mapping clusters and the composition of presence-only species data from the region. Demonstrating this relationship required specific re-sampling of historical species occurrence data to balance biological rarity, biological cosmopolitanism, range-restricted sampling, and fine-scale heterogeneity between sampling stations. The relationship reveals a distinct biological signature in the faunal composition of individual landscapes, attributing ecological relevance to South Georgia's environmentally derived marine landscape map. We argue, therefore, that landscape mapping represents an effective framework for ensuring representative protection of habitats in management plans. Such scientific underpinning of marine spatial planning is critical in balancing the needs of multiple stakeholders whilst maximising conservation payoff. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.