Sample records for sample design results

  1. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
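
    The comparison described in this abstract can be sketched in a few lines. A minimal Monte Carlo sketch follows; the hourly angler counts, day length, and sample size are invented for illustration (the study's actual angling populations came from direct visual observations of Idaho fisheries).

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical hourly angler counts for a 12-hour fishing day with a
    # midday peak (all numbers invented for illustration).
    hours = np.arange(12)
    effort = 20 + 15 * np.sin(np.pi * hours / 11) + rng.normal(0, 3, 12)
    total = effort.sum()

    def srs_estimate(y, n):
        idx = rng.choice(len(y), size=n, replace=False)  # simple random sample of hours
        return len(y) * y[idx].mean()

    def sys_estimate(y, n):
        k = len(y) // n                                  # sampling interval
        start = rng.integers(k)                          # random start
        return len(y) * y[start::k][:n].mean()

    reps = 10_000
    for name, est in [("SRS", srs_estimate), ("SYS", sys_estimate)]:
        draws = np.array([est(effort, 4) for _ in range(reps)])
        print(f"{name}: bias={draws.mean() - total:+6.2f}  "
              f"MSE={np.mean((draws - total) ** 2):8.1f}")
    ```

    Both estimators are design-unbiased here; the systematic design gains precision because its samples track the smooth diurnal pattern, which is the mechanism behind the lower MSE the abstract reports.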

  2. Biological effect of low-head sea lamprey barriers: Designs for extensive surveys and the value of incorporating intensive process-oriented research

    USGS Publications Warehouse

    Hayes, D.B.; Baylis, J.R.; Carl, L.M.; Dodd, H.R.; Goldstein, J.D.; McLaughlin, R.L.; Noakes, D.L.G.; Porto, L.M.

    2003-01-01

    Four sampling designs for quantifying the effect of low-head sea lamprey (Petromyzon marinus) barriers on fish communities were evaluated, and the contribution of process-oriented research to the overall confidence of results obtained was discussed. The designs include: (1) sample barrier streams post-construction; (2) sample barrier and reference streams post-construction; (3) sample barrier streams pre- and post-construction; and (4) sample barrier and reference streams pre- and post-construction. In the statistical literature, the principal basis for comparison of sampling designs is generally the precision achieved by each design. In addition to precision, designs should be compared based on the interpretability of results and on the scale to which the results apply. Using data collected in a broad survey of streams with and without sea lamprey barriers, some of the tradeoffs that occur among precision, scale, and interpretability are illustrated. Although circumstances such as funding and availability of pre-construction data may limit which design can be implemented, a pre/post-construction design including barrier and reference streams provides the most meaningful information for use in barrier management decisions. Where it is not feasible to obtain pre-construction data, a design including reference streams is important to maintain the interpretability of results. Regardless of the design used, process-oriented research provides a framework for interpreting results obtained in broad surveys. As such, information from both extensive surveys and intensive process-oriented research provides the best basis for fishery management actions, and gives researchers and managers the most confidence in the conclusions reached regarding the effects of sea lamprey barriers.
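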

  3. Designing testing service at baristand industri Medan’s liquid waste laboratory

    NASA Astrophysics Data System (ADS)

    Kusumawaty, Dewi; Napitupulu, Humala L.; Sembiring, Meilita T.

    2018-03-01

Baristand Industri Medan is a technical implementation unit under the Industrial Research and Development Agency of the Ministry of Industry. One of its most frequently used services is liquid waste testing. The company's service standard for testing is nine working days. In 2015, 89.66% of liquid waste testing jobs failed to meet this standard because of a backlog of accumulated samples. The purpose of this research is to design an online service for scheduling the arrival of liquid waste samples. The method used is information system design, comprising model design, output design, input design, database design, and technology design. The resulting information system for online liquid waste testing consists of three pages: one for the customer, one for the sample recipient, and one for the laboratory. Simulation results with scheduled samples indicate that the nine-working-day service standard can be met.

  4. Estimation of AUC or Partial AUC under Test-Result-Dependent Sampling.

    PubMed

    Wang, Xiaofei; Ma, Junling; George, Stephen; Zhou, Haibo

    2012-01-01

    The area under the ROC curve (AUC) and partial area under the ROC curve (pAUC) are summary measures used to assess the accuracy of a biomarker in discriminating true disease status. The standard sampling approach used in biomarker validation studies is often inefficient and costly, especially when ascertaining the true disease status is costly and invasive. To improve efficiency and reduce the cost of biomarker validation studies, we consider a test-result-dependent sampling (TDS) scheme, in which subject selection for determining the disease state is dependent on the result of a biomarker assay. We first estimate the test-result distribution using data arising from the TDS design. With the estimated empirical test-result distribution, we propose consistent nonparametric estimators for AUC and pAUC and establish the asymptotic properties of the proposed estimators. Simulation studies show that the proposed estimators have good finite sample properties and that the TDS design yields more efficient AUC and pAUC estimates than a simple random sampling (SRS) design. A data example based on an ongoing cancer clinical trial is provided to illustrate the TDS design and the proposed estimators. This work can find broad applications in design and analysis of biomarker validation studies.
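
    As background to the estimators this abstract discusses, the standard empirical (Mann-Whitney) AUC and a threshold-based partial AUC can be sketched as below. The marker distributions are invented, and the paper's TDS estimators would additionally reweight observations by the estimated test-result distribution; this is only the SRS baseline.

    ```python
    import numpy as np

    def empirical_auc(cases, controls):
        """Mann-Whitney estimate of AUC = P(case marker > control marker)."""
        cases, controls = np.asarray(cases, float), np.asarray(controls, float)
        greater = (cases[:, None] > controls[None, :]).mean()
        ties = (cases[:, None] == controls[None, :]).mean()
        return greater + 0.5 * ties

    def empirical_pauc(cases, controls, fpr_max):
        """Partial AUC over false-positive rates in [0, fpr_max], built from
        the empirical ROC curve with a trapezoid rule."""
        thresholds = np.sort(np.concatenate([cases, controls]))[::-1]
        fpr = np.array([(controls >= t).mean() for t in thresholds])
        tpr = np.array([(cases >= t).mean() for t in thresholds])
        keep = fpr <= fpr_max                     # prefix of the ROC curve
        f, t = fpr[keep], tpr[keep]
        return float(np.sum(np.diff(f) * (t[1:] + t[:-1]) / 2))

    rng = np.random.default_rng(0)
    controls = rng.normal(0.0, 1.0, 200)     # hypothetical non-diseased markers
    cases = rng.normal(1.0, 1.0, 100)        # hypothetical diseased markers
    print(empirical_auc(cases, controls))    # ~0.76 for a one-SD mean shift
    print(empirical_pauc(cases, controls, fpr_max=0.2))
    ```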

  5. A BASIS FOR MODIFYING THE TANK 12 COMPOSITE SAMPLING DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, G.

The SRR sampling campaign to obtain residual solids material from the Savannah River Site (SRS) Tank Farm Tank 12 primary vessel resulted in obtaining appreciable material in all 6 planned source samples from the mound strata but only in 5 of the 6 planned source samples from the floor stratum. Consequently, the design of the compositing scheme presented in the Tank 12 Sampling and Analysis Plan, Pavletich (2014a), must be revised. Analytical Development of SRNL statistically evaluated the sampling uncertainty associated with using various compositing arrays and splitting one or more samples for compositing. The variance of the simple mean of composite sample concentrations is a reasonable standard to investigate the impact of the following sampling options. Composite Sample Design Option (a): Assign only 1 source sample from the floor stratum and 1 source sample from each of the mound strata to each of the composite samples. Each source sample contributes material to only 1 composite sample. Two source samples from the floor stratum would not be used. Composite Sample Design Option (b): Assign 2 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This implies that one source sample from the floor must be used twice, with 2 composite samples sharing material from this particular source sample. All five source samples from the floor would be used. Composite Sample Design Option (c): Assign 3 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This implies that several of the source samples from the floor stratum must be assigned to more than one composite sample. All 5 source samples from the floor would be used. Using fewer than 12 source samples will increase the sampling variability over that of the Basic Composite Sample Design, Pavletich (2013). Considering the impact to the variance of the simple mean of the composite sample concentrations, the recommendation is to construct each sample composite using four or five source samples. Although the variance using 5 source samples per composite sample (Composite Sample Design Option (c)) was slightly less than the variance using 4 source samples per composite sample (Composite Sample Design Option (b)), there is no practical difference between those variances. This does not consider that the measurement error variance, which is the same for all composite sample design options considered in this report, will further dilute any differences. Composite Sample Design Option (a) had the largest variance for the mean concentration in the three composite samples and should be avoided. These results are consistent with Pavletich (2014b), which utilizes a low elevation and a high elevation mound source sample and two floor source samples for each composite sample. Utilizing the four source samples per composite design, Pavletich (2014b) uses aliquots of Floor Sample 4 for two composite samples.
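
    The variance comparison behind these options can be reproduced under the simplest possible model: independent, equal-variance source samples and equally weighted composites. The assignment of sources to composites below is hypothetical (the report's actual variance model and assignments are richer), but it reproduces the qualitative finding that Option (a), which discards floor samples, has the largest variance of the mean; the (b)-versus-(c) ordering depends on the assignment and variance model.

    ```python
    import numpy as np

    def var_of_mean(composites, n_sources, sigma2=1.0):
        """Variance of the mean of the composite concentrations when each
        composite is the average of its assigned source samples. The mean is
        a linear combination of independent sources, so Var = sigma2*sum(w^2);
        reusing a source in two composites shows up as a larger weight."""
        w = np.zeros(n_sources)
        for members in composites:
            for i in members:
                w[i] += 1.0 / (len(members) * len(composites))
        return sigma2 * np.sum(w**2)

    # Hypothetical indexing: floor sources are 0-4, mound sources are 5-10.
    opts = {
        "a": [[0, 5, 8], [1, 6, 9], [2, 7, 10]],                    # F3, F4 unused
        "b": [[0, 1, 5, 8], [2, 3, 6, 9], [4, 0, 7, 10]],           # F0 used twice
        "c": [[0, 1, 2, 5, 8], [1, 2, 3, 6, 9], [3, 4, 0, 7, 10]],  # floor reuse
    }
    for name, assignment in opts.items():
        print(f"Option ({name}): Var(mean) = {var_of_mean(assignment, 11):.4f}")
    ```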

  6. Analysis of phosphorus trends and evaluation of sampling designs in the Quinebaug River Basin, Connecticut

    USGS Publications Warehouse

    Todd Trench, Elaine C.

    2004-01-01

A time-series analysis approach developed by the U.S. Geological Survey was used to analyze trends in total phosphorus and evaluate optimal sampling designs for future trend detection, using long-term data for two water-quality monitoring stations on the Quinebaug River in eastern Connecticut. Trend-analysis results for selected periods of record during 1971–2001 indicate that concentrations of total phosphorus in the Quinebaug River have varied over time, but have decreased significantly since the 1970s and 1980s. Total phosphorus concentrations at both stations increased in the late 1990s and early 2000s, but were still substantially lower than historical levels. Drainage areas for both stations are primarily forested, but water quality at both stations is affected by point discharges from municipal wastewater-treatment facilities. Various designs with sampling frequencies ranging from 4 to 11 samples per year were compared to the trend-detection power of the monthly (12-sample) design to determine the most efficient configuration of months to sample for a given annual sampling frequency. Results from this evaluation indicate that the current (2004) 8-sample schedule for the two Quinebaug stations, with monthly sampling from May to September and bimonthly sampling for the remainder of the year, is not the most efficient 8-sample design for future detection of trends in total phosphorus. Optimal sampling schedules for the two stations differ, but in both cases, trend-detection power generally is greater among 8-sample designs that include monthly sampling in fall and winter. Sampling designs with fewer than 8 samples per year generally provide a low level of probability for detection of trends in total phosphorus. Managers may determine an acceptable level of probability for trend detection within the context of the multiple objectives of the state's water-quality management program and the scientific understanding of the watersheds in question. Managers may identify a threshold of probability for trend detection that is high enough to justify the agency's investment in the water-quality sampling program. Results from an analysis of optimal sampling designs can provide an important component of information for the decision-making process in which sampling schedules are periodically reviewed and revised. Results from the study described in this report and previous studies indicate that optimal sampling schedules for trend detection may differ substantially for different stations and constituents. A more comprehensive statewide evaluation of sampling schedules for key stations and constituents could provide useful information for any redesign of the schedule for water-quality monitoring in the Quinebaug River Basin and elsewhere in the state.

  7. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
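
    A minimal version of such a simulation, assuming beta-distributed cluster prevalences to induce intracluster correlation and an illustrative decision rule (neither is necessarily the paper's exact mechanism), might look like this:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def classify_high(p, icc, n_clusters, m, threshold, reps=5000):
        """Fraction of simulated surveys whose total GAM count reaches the
        decision threshold. Cluster prevalences are drawn from a beta
        distribution with Var = icc * p * (1 - p), a common way to induce
        intracluster correlation."""
        a = p * (1 - icc) / icc
        b = (1 - p) * (1 - icc) / icc
        pj = rng.beta(a, b, size=(reps, n_clusters))   # cluster prevalences
        cases = rng.binomial(m, pj).sum(axis=1)        # GAM cases per survey
        return (cases >= threshold).mean()

    # 67 clusters of 3 children; the rule "classify high if >= 25 cases"
    # is illustrative, not the paper's.
    for p in (0.05, 0.10, 0.15):
        print(p, classify_high(p, icc=0.05, n_clusters=67, m=3, threshold=25))
    ```

    Sweeping the prevalence p traces out the operating characteristic of the rule; raising the intracluster correlation flattens that curve, which is the classification-error degradation the abstract warns about.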

  8. Type I error probabilities based on design-stage strategies with applications to noninferiority trials.

    PubMed

    Rothmann, Mark

    2005-01-01

    When testing the equality of means from two different populations, a t-test or large sample normal test tend to be performed. For these tests, when the sample size or design for the second sample is dependent on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We will examine the impact on the type I error probabilities for two confidence interval procedures and procedures using test statistics when the design for the second sample or experiment is dependent on the results from the first sample or experiment (or series of experiments). Ways for controlling a desired maximum type I error probability or a desired type I error rate will be discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials where the use of a placebo is unethical.

  9. Design-based and model-based inference in surveys of freshwater mollusks

    USGS Publications Warehouse

    Dorazio, R.M.

    1999-01-01

    Well-known concepts in statistical inference and sampling theory are used to develop recommendations for planning and analyzing the results of quantitative surveys of freshwater mollusks. Two methods of inference commonly used in survey sampling (design-based and model-based) are described and illustrated using examples relevant in surveys of freshwater mollusks. The particular objectives of a survey and the type of information observed in each unit of sampling can be used to help select the sampling design and the method of inference. For example, the mean density of a sparsely distributed population of mollusks can be estimated with higher precision by using model-based inference or by using design-based inference with adaptive cluster sampling than by using design-based inference with conventional sampling. More experience with quantitative surveys of natural assemblages of freshwater mollusks is needed to determine the actual benefits of different sampling designs and inferential procedures.

  10. Designing occupancy studies: general advice and allocating survey effort

    USGS Publications Warehouse

    MacKenzie, D.I.; Royle, J. Andrew

    2005-01-01

1. The fraction of sampling units in a landscape where a target species is present (occupancy) is an extensively used concept in ecology. Yet in many applications the species will not always be detected in a sampling unit even when present, resulting in biased estimates of occupancy. Given that sampling units are surveyed repeatedly within a relatively short timeframe, a number of similar methods have now been developed to provide unbiased occupancy estimates. However, practical guidance on the efficient design of occupancy studies has been lacking. 2. In this paper we comment on a number of general issues related to designing occupancy studies, including the need for clear objectives that are explicitly linked to science or management, selection of sampling units, timing of repeat surveys and allocation of survey effort. Advice on the number of repeat surveys per sampling unit is considered in terms of the variance of the occupancy estimator, for three possible study designs. 3. We recommend that sampling units should be surveyed a minimum of three times when detection probability is high (>0.5 per survey), unless a removal design is used. 4. We found that an optimal removal design will generally be the most efficient, but we suggest it may be less robust to assumption violations than a standard design. 5. Our results suggest that for a rare species it is more efficient to survey more sampling units less intensively, while for a common species fewer sampling units should be surveyed more intensively. 6. Synthesis and applications. Reliable inferences can only result from quality data. To make the best use of logistical resources, study objectives must be clearly defined; sampling units must be selected, and repeated surveys timed appropriately; and a sufficient number of repeated surveys must be conducted. Failure to do so may compromise the integrity of the study. The guidance given here on study design issues is particularly applicable to studies of species occurrence and distribution, habitat selection and modelling, metapopulation studies and monitoring programmes.
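
    The arithmetic behind the minimum-of-three-surveys recommendation is the cumulative detection probability p* = 1 − (1 − p)^K, sketched below.

    ```python
    # Cumulative detection probability: chance a species present at a unit is
    # found at least once in K surveys with per-survey detectability p.
    def p_star(p, K):
        return 1 - (1 - p) ** K

    for p in (0.3, 0.5, 0.7):
        print(p, [round(p_star(p, K), 3) for K in (1, 2, 3, 4, 5)])
    # At p = 0.5, three visits already give p* = 0.875, in line with the
    # minimum-of-three-surveys advice for highly detectable species.
    ```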

  11. Implications of sampling design and sample size for national carbon accounting systems.

    PubMed

    Köhl, Michael; Lister, Andrew; Scott, Charles T; Baldauf, Thomas; Plugge, Daniel

    2011-11-08

Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests, the information is generally obtained by sample-based surveys. Most operational sampling approaches utilize a combination of earth-observation data and in-situ field assessments as data sources. We compared the cost-efficiency of four different sampling design alternatives (simple random sampling, regression estimators, stratified sampling, 2-phase sampling with regression estimators) that have been proposed in the scope of REDD. Three of the design alternatives provide for a combination of in-situ and earth-observation data. Under different settings of remote sensing coverage, cost per field plot, cost of remote sensing imagery, correlation between attributes quantified in remote sensing and field data, and population variability, the percent standard error over total survey cost was calculated. The cost-efficiency of forest carbon stock assessments is driven by the sampling design chosen. Our results indicate that the cost of remote sensing imagery is decisive for the cost-efficiency of a sampling design. The variability of the sample population impairs cost-efficiency, but does not reverse the pattern of cost-efficiency of the individual design alternatives. Our results clearly indicate that it is important to consider cost-efficiency in the development of forest carbon stock assessments and the selection of remote sensing techniques. The development of MRV systems for REDD needs to be based on a sound optimization process that compares different data sources and sampling designs with respect to their cost-efficiency. This helps to reduce the uncertainties related to the quantification of carbon stocks and to increase the financial benefits from adopting a REDD regime.
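
    The qualitative result that imagery cost drives cost-efficiency can be illustrated with a back-of-the-envelope comparison of simple random sampling against a regression estimator with wall-to-wall imagery. All cost figures and the simplified variance model below are invented for illustration, not taken from the paper.

    ```python
    import math

    def se_srs(S, budget, c_field):
        """SE of the mean under SRS, spending the whole budget on field plots."""
        return S / math.sqrt(budget / c_field)

    def se_regression(S, rho, budget, c_field, c_imagery):
        """Regression estimator with wall-to-wall imagery: pay a fixed imagery
        cost, spend the remainder on field plots; residual variance is scaled
        by (1 - rho^2), rho being the field/remote-sensing correlation."""
        n = (budget - c_imagery) / c_field
        return float("inf") if n <= 0 else S * math.sqrt((1 - rho**2) / n)

    S, budget, c_field = 50.0, 100_000.0, 500.0        # invented figures
    for c_imagery in (10_000.0, 60_000.0):
        for rho in (0.5, 0.8):
            print(c_imagery, rho,
                  round(se_srs(S, budget, c_field), 2),
                  round(se_regression(S, rho, budget, c_field, c_imagery), 2))
    ```

    With cheap imagery the regression design wins at any reasonable correlation; with expensive imagery it can lose to plain SRS, which mirrors the paper's conclusion that imagery cost is decisive.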

  12. An investigation of the effects of relevant samples and a comparison of verification versus discovery based lab design

    NASA Astrophysics Data System (ADS)

    Rieben, James C., Jr.

    This study focuses on the effects of relevance and lab design on student learning within the chemistry laboratory environment. A general chemistry conductivity of solutions experiment and an upper level organic chemistry cellulose regeneration experiment were employed. In the conductivity experiment, the two main variables studied were the effect of relevant (or "real world") samples on student learning and a verification-based lab design versus a discovery-based lab design. With the cellulose regeneration experiment, the effect of a discovery-based lab design vs. a verification-based lab design was the sole focus. Evaluation surveys consisting of six questions were used at three different times to assess student knowledge of experimental concepts. In the general chemistry laboratory portion of this study, four experimental variants were employed to investigate the effect of relevance and lab design on student learning. These variants consisted of a traditional (or verification) lab design, a traditional lab design using "real world" samples, a new lab design employing real world samples/situations using unknown samples, and the new lab design using real world samples/situations that were known to the student. Data used in this analysis were collected during the Fall 08, Winter 09, and Fall 09 terms. For the second part of this study a cellulose regeneration experiment was employed to investigate the effects of lab design. A demonstration creating regenerated cellulose "rayon" was modified and converted to an efficient and low-waste experiment. In the first variant students tested their products and verified a list of physical properties. In the second variant, students filled in a blank physical property chart with their own experimental results for the physical properties. Results from the conductivity experiment show significant student learning of the effects of concentration on conductivity and how to use conductivity to differentiate solution types with the use of real world samples. In the organic chemistry experiment, results suggest that the discovery-based design improved student retention of the chain length differentiation by physical properties relative to the verification-based design.

  13. Optimal two-phase sampling design for comparing accuracies of two binary classification rules.

    PubMed

    Xu, Huiping; Hui, Siu L; Grannis, Shaun

    2014-02-10

    In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.

  14. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this relative improvement decreases with increasing number of sample points and input parameter dimensions. Since the computational time and efforts for generating the sample designs in the two approaches are identical, the use of midpoint LHS as the initial design in OLHS is thus recommended.
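
    The two initial designs being compared are easy to state in code. A minimal sketch, assuming the usual stratified construction of a Latin hypercube and using a maximin-distance criterion as the space-filling measure (the paper's OLHS algorithms would then optimize these initial designs further):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def lhs(n, d, midpoint=False):
        """Latin hypercube sample of n points in [0, 1]^d: one point per
        stratum in each dimension, placed either at a random position in the
        stratum (random LHS) or at its centre (midpoint LHS)."""
        u = 0.5 if midpoint else rng.random((n, d))
        ranks = np.column_stack([rng.permutation(n) for _ in range(d)])
        return (ranks + u) / n

    def min_pairwise_dist(x):
        dm = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        return dm[np.triu_indices(len(x), k=1)].min()

    # Compare the space-filling quality (maximin criterion) of the two
    # initial designs before any optimization is applied.
    for label, mid in (("random LHS", False), ("midpoint LHS", True)):
        d = [min_pairwise_dist(lhs(20, 2, midpoint=mid)) for _ in range(200)]
        print(label, round(float(np.mean(d)), 4))
    ```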

  16. Using variance components to estimate power in a hierarchically nested sampling design: improving monitoring of larval Devils Hole pupfish

    USGS Publications Warehouse

    Dzul, Maria C.; Dixon, Philip M.; Quist, Michael C.; Dinsmore, Stephen J.; Bower, Michael R.; Wilson, Kevin P.; Gaines, D. Bailey

    2013-01-01

    We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). Sampling design for larval DHP included surveys (5 days each spring 2007–2009), events, and plots. Each survey was comprised of three counting events, where DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, fixed plot designs had greater power than random plot designs, and the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.

  17. 76 FR 13436 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-11

    ... target population to which generalizations will be made, the sampling frame, the sample design (including... for submission for other generic mechanisms that are designed to yield quantitative results. The MSPB... insights on perceptions and opinions, but are not statistical surveys that yield quantitative results that...

  18. Validation of Statistical Sampling Algorithms in Visual Sample Plan (VSP): Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuffer, Lisa L; Sego, Landon H.; Wilson, John E.

    2009-02-18

The U.S. Department of Homeland Security, Office of Technology Development (OTD) contracted with a set of U.S. Department of Energy national laboratories, including the Pacific Northwest National Laboratory (PNNL), to write a Remediation Guidance for Major Airports After a Chemical Attack. The report identifies key activities and issues that should be considered by a typical major airport following an incident involving release of a toxic chemical agent. Four experimental tasks were identified that would require further research in order to supplement the Remediation Guidance. One of the tasks, Task 4, OTD Chemical Remediation Statistical Sampling Design Validation, dealt with statistical sampling algorithm validation. This report documents the results of the sampling design validation conducted for Task 4. In 2005, the Government Accountability Office (GAO) performed a review of the past U.S. responses to Anthrax terrorist cases. Part of the motivation for this PNNL report was a major GAO finding that there was a lack of validated sampling strategies in the U.S. response to Anthrax cases. The report (GAO 2005) recommended that probability-based methods be used for sampling design in order to address confidence in the results, particularly when all sample results showed no remaining contamination. The GAO also expressed a desire that the methods be validated, which is the main purpose of this PNNL report. The objective of this study was to validate probability-based statistical sampling designs and the algorithms pertinent to within-building sampling that allow the user to prescribe or evaluate confidence levels of conclusions based on data collected as guided by the statistical sampling designs. Specifically, the designs found in the Visual Sample Plan (VSP) software were evaluated. VSP was used to calculate the number of samples and the sample location for a variety of sampling plans applied to an actual release site. Most of the sampling designs validated are probability based, meaning samples are located randomly (or on a randomly placed grid) so no bias enters into the placement of samples, and the number of samples is calculated such that IF the amount and spatial extent of contamination exceeds levels of concern, at least one of the samples would be taken from a contaminated area, at least X% of the time. Hence, "validation" of the statistical sampling algorithms is defined herein to mean ensuring that the "X%" (confidence) is actually met.
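
    For the simple-random-placement case described here, the core sample-size calculation follows from requiring 1 − (1 − f)^n ≥ C, where f is the contaminated fraction and C the desired confidence. A minimal sketch (not VSP's actual implementation, which also covers grid designs and other options):

    ```python
    import math

    def n_required(f, confidence):
        """Smallest number of randomly placed samples such that, if a
        fraction f of the surface is contaminated, at least one sample lands
        in the contaminated area with the stated confidence:
        1 - (1 - f)^n >= confidence."""
        return math.ceil(math.log(1 - confidence) / math.log(1 - f))

    for f in (0.01, 0.05, 0.10):
        print(f, n_required(f, 0.95), n_required(f, 0.99))
    # e.g. a 5% contaminated fraction needs 59 random samples for 95% confidence
    ```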

  19. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation

    PubMed Central

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-01-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis. PMID:20011037

  20. Efficiently estimating salmon escapement uncertainty using systematically sampled data

    USGS Publications Warehouse

    Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.

    2007-01-01

    Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by, on average, from 12% to 98%. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
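
    One standard variance estimator for nonreplicated systematic samples, the successive-difference estimator, discounts the smooth trends that a naive SRS-style estimator mistakes for sampling variability. The sketch below uses invented passage counts, and the paper compares five estimators rather than this one pair, so this shows only the general idea.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def var_srs(y, N):
        """Naive SRS-style variance estimator for the expansion estimator N*ybar."""
        n = len(y)
        return N**2 * (1 - n / N) * np.var(y, ddof=1) / n

    def var_succ_diff(y, N):
        """Successive-difference estimator: replaces the sample variance with
        half the mean squared difference of neighbouring (time-ordered)
        observations, which removes most of a smooth diurnal/seasonal trend."""
        n = len(y)
        s2 = np.sum(np.diff(y) ** 2) / (2 * (n - 1))
        return N**2 * (1 - n / N) * s2 / n

    # Hypothetical hourly passage counts rising through the day (invented).
    y = 50 + 10 * np.arange(24) + rng.normal(0, 5, 24)
    sample = y[::4]                           # systematic sample, interval 4
    print(var_srs(sample, 24), var_succ_diff(sample, 24))
    ```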

  1. A robust variable sampling time BLDC motor control design based upon μ-synthesis.

    PubMed

    Hung, Chung-Wen; Yen, Jia-Yush

    2013-01-01

The variable sampling rate system is encountered in many applications. When the speed information is derived from position marks along the trajectory, one obtains a speed-dependent sampling rate system. Conventional fixed- or multi-sampling-rate system theory may not apply in these cases because the system dynamics include uncertainties resulting from the variable sampling rate. This paper derives a convenient expression for the speed-dependent sampling rate system. The varying sampling rate effect is then translated into multiplicative uncertainties on the system. The design then uses the popular μ-synthesis process to achieve a robust-performance controller design. Implementation on a BLDC motor demonstrates the effectiveness of the design approach.

  2. A Robust Variable Sampling Time BLDC Motor Control Design Based upon μ-Synthesis

    PubMed Central

    Yen, Jia-Yush

    2013-01-01

The variable sampling rate system is encountered in many applications. When the speed information is derived from position marks along the trajectory, one obtains a speed-dependent sampling rate system. Conventional fixed- or multi-sampling-rate system theory may not apply in these cases because the system dynamics include uncertainties resulting from the variable sampling rate. This paper derives a convenient expression for the speed-dependent sampling rate system. The varying sampling rate effect is then translated into multiplicative uncertainties on the system. The design then uses the popular μ-synthesis process to achieve a robust-performance controller design. Implementation on a BLDC motor demonstrates the effectiveness of the design approach. PMID:24327804

  3. Evaluation of Cooling Conditions for a High Heat Flux Testing Facility Based on Plasma-Arc Lamps

    DOE PAGES

    Charry, Carlos H.; Abdel-khalik, Said I.; Yoda, Minami; ...

    2015-07-31

The new Irradiated Material Target Station (IMTS) facility for fusion materials at Oak Ridge National Laboratory (ORNL) uses an infrared plasma-arc lamp (PAL) to deliver incident heat fluxes as high as 27 MW/m². The facility is being used to test irradiated plasma-facing component materials as part of the joint US-Japan PHENIX program. The irradiated samples are to be mounted on molybdenum sample holders attached to a water-cooled copper rod. Depending on the size and geometry of samples, several sample holders and copper rod configurations have been fabricated and tested. As a part of the effort to design sample holders compatible with the high heat flux (HHF) testing to be conducted at the IMTS facility, numerical simulations have been performed for two different water-cooled sample holder designs using the ANSYS FLUENT 14.0 commercial computational fluid dynamics (CFD) software package. The primary objective of this work is to evaluate the cooling capability of different sample holder designs, i.e. to estimate their maximum allowable incident heat flux values. 2D axisymmetric numerical simulations are performed using the realizable k-ε turbulence model and the RPI nucleate boiling model within ANSYS FLUENT 14.0. The results of the numerical model were compared against the experimental data for two sample holder designs tested in the IMTS facility. The model has been used to parametrically evaluate the effect of various operational parameters on the predicted temperature distributions. The results were used to identify the limiting parameter for safe operation of the two sample holders and the associated peak heat flux limits. The results of this investigation will help guide the development of new sample holder designs.

  4. Extending cluster Lot Quality Assurance Sampling designs for surveillance programs

    PubMed Central

    Hund, Lauren; Pagano, Marcello

    2014-01-01

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible non-parametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. PMID:24633656
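
    The textbook core of the sample-size inflation described here is the design effect DEFF = 1 + (m − 1)ρ for one-stage cluster sampling, where m is the per-cluster take and ρ the intracluster correlation. A minimal sketch with invented numbers (the paper's procedure is more general and additionally accommodates finite numbers of clusters):

    ```python
    import math

    def cluster_lqas_sample_size(n_srs, m, icc):
        """Inflate an LQAS sample size derived under simple random sampling
        by the design effect DEFF = 1 + (m - 1) * icc, and report the number
        of clusters needed at a per-cluster take of m."""
        deff = 1 + (m - 1) * icc
        n = math.ceil(n_srs * deff)
        return n, math.ceil(n / m)

    # Hypothetical: an SRS design needing 192 children, 10 children sampled
    # per village, intracluster correlation 0.1 for malnutrition status.
    print(cluster_lqas_sample_size(192, 10, 0.1))   # (365, 37)
    ```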

  5. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.

  6. Extending cluster lot quality assurance sampling designs for surveillance programs.

    PubMed

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.

  7. A Simple and Robust Method for Partially Matched Samples Using the P-Values Pooling Approach

    PubMed Central

    Kuan, Pei Fen; Huang, Bo

    2013-01-01

    This paper focuses on statistical analyses in scenarios where some samples from the matched pairs design are missing, resulting in partially matched samples. Motivated by the idea of meta-analysis, we recast the partially matched samples as coming from two experimental designs, and propose a simple yet robust approach based on the weighted Z-test to integrate the p-values computed from these two designs. We show that the proposed approach achieves better operating characteristics in simulations and a case study, compared to existing methods for partially matched samples. PMID:23417968
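
    A minimal sketch of the p-value pooling idea follows, assuming square-root-of-sample-size weights and normally distributed toy data; the paper evaluates the weighted Z-test in this setting, but the exact weighting and data below are illustrative, not its procedure.

    ```python
    import numpy as np
    from scipy import stats

    def weighted_z_pool(p1, n1, p2, n2):
        """Liptak/weighted Z-test combination of two one-sided p-values,
        weighted by the square root of each design's sample size."""
        z1, z2 = stats.norm.isf(p1), stats.norm.isf(p2)
        w1, w2 = np.sqrt(n1), np.sqrt(n2)
        z = (w1 * z1 + w2 * z2) / np.sqrt(w1**2 + w2**2)
        return stats.norm.sf(z)

    rng = np.random.default_rng(3)
    # Hypothetical partially matched data: 30 complete pairs, plus 20 and 15
    # unmatched observations in the two arms.
    pre = rng.normal(0.0, 1.0, 30); post = pre + rng.normal(0.4, 1.0, 30)
    x_only = rng.normal(0.0, 1.0, 20); y_only = rng.normal(0.4, 1.0, 15)

    p_paired = stats.ttest_rel(post, pre, alternative="greater").pvalue
    p_unpaired = stats.ttest_ind(y_only, x_only, alternative="greater").pvalue
    print(weighted_z_pool(p_paired, 30, p_unpaired, 20 + 15))
    ```

    The paired subset and the unmatched remainder are analyzed with the test appropriate to each design, and only their p-values are combined, which is what makes the approach robust to the missingness pattern.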

  8. Optimization of the intravenous glucose tolerance test in T2DM patients using optimal experimental design.

    PubMed

    Silber, Hanna E; Nyberg, Joakim; Hooker, Andrew C; Karlsson, Mats O

    2009-06-01

    Intravenous glucose tolerance test (IVGTT) provocations are informative, but complex and laborious, for studying the glucose-insulin system. The objective of this study was to evaluate, through optimal design methodology, the possibilities of more informative and/or less laborious study design of the insulin modified IVGTT in type 2 diabetic patients. A previously developed model for glucose and insulin regulation was implemented in the optimal design software PopED 2.0. The following aspects of the study design of the insulin modified IVGTT were evaluated; (1) glucose dose, (2) insulin infusion, (3) combination of (1) and (2), (4) sampling times, (5) exclusion of labeled glucose. Constraints were incorporated to avoid prolonged hyper- and/or hypoglycemia and a reduced design was used to decrease run times. Design efficiency was calculated as a measure of the improvement with an optimal design compared to the basic design. The results showed that the design of the insulin modified IVGTT could be substantially improved by the use of an optimized design compared to the standard design and that it was possible to use a reduced number of samples. Optimization of sample times gave the largest improvement followed by insulin dose. The results further showed that it was possible to reduce the total sample time with only a minor loss in efficiency. Simulations confirmed the predictions from PopED. The predicted uncertainty of parameter estimates (CV) was low in all tested cases, despite the reduction in the number of samples/subject. The best design had a predicted average CV of parameter estimates of 19.5%. We conclude that improvement can be made to the design of the insulin modified IVGTT and that the most important design factor was the placement of sample times followed by the use of an optimal insulin dose. This paper illustrates how complex provocation experiments can be improved by sequential modeling and optimal design.

  9. Multi-Mission System Analysis for Planetary Entry (M-SAPE) Version 1

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid; Glaab, Louis; Winski, Richard G.; Maddock, Robert W.; Emmett, Anjie L.; Munk, Michelle M.; Agrawal, Parul; Sepka, Steve; Aliaga, Jose; Zarchi, Kerry

    2014-01-01

This report describes an integrated system for Multi-mission System Analysis for Planetary Entry (M-SAPE). The system in its current form is capable of performing system analysis and design for an Earth entry vehicle suitable for sample return missions. The system includes geometry, mass sizing, impact analysis, structural analysis, flight mechanics, TPS, and a web portal for user access. The report includes details of M-SAPE modules and provides sample results. The current M-SAPE vehicle design concept is based on the Mars sample return (MSR) Earth entry vehicle design, which is driven by minimizing the risk associated with sample containment (no parachute, and passive aerodynamic stability). Because M-SAPE exploits a common design concept, any sample return mission, particularly MSR, will benefit from significant risk and development cost reductions. The design provides a platform by which technologies and design elements can be evaluated rapidly prior to any costly investment commitment.

  10. A new x-ray interface and surface scattering environmental cell design for in situ studies of radioactive and atmosphere-sensitive samples.

    PubMed

    Schmidt, M; Eng, P J; Stubbs, J E; Fenter, P; Soderholm, L

    2011-07-01

    We present a novel design of a purpose-built, portable sample cell for in situ x-ray scattering experiments of radioactive or atmosphere sensitive samples. The cell has a modular design that includes two independent layers of containment that are used simultaneously to isolate the sensitive samples. Both layers of containment can be flushed with an inert gas, thus serving a double purpose as containment of radiological material (either as a solid sample or as a liquid phase) and in separating reactive samples from the ambient atmosphere. A remote controlled solution flow system is integrated into the containment system that allows sorption experiments to be performed on the diffractometer. The cell's design is discussed in detail and we demonstrate the cell's performance by presenting first results of crystal truncation rod measurements. The results were obtained from muscovite mica single crystals reacted with 1 mM solutions of Th(IV) with 0.1 M NaCl background electrolyte. Data were obtained in specular as well as off-specular geometry.

  11. Sampling design for groundwater solute transport: Tests of methods and analysis of Cape Cod tracer test data

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.; Garabedian, Stephen P.

    1991-01-01

    Tests of a one-dimensional sampling design methodology on measurements of bromide concentration collected during the natural gradient tracer test conducted by the U.S. Geological Survey on Cape Cod, Massachusetts, demonstrate its efficacy for field studies of solute transport in groundwater and the utility of one-dimensional analysis. The methodology was applied to design of sparse two-dimensional networks of fully screened wells typical of those often used in engineering practice. In one-dimensional analysis, designs consist of the downstream distances to rows of wells oriented perpendicular to the groundwater flow direction and the timing of sampling to be carried out on each row. The power of a sampling design is measured by its effectiveness in simultaneously meeting objectives of model discrimination, parameter estimation, and cost minimization. One-dimensional models of solute transport, differing in processes affecting the solute and assumptions about the structure of the flow field, were considered for description of tracer cloud migration. When fitting each model using nonlinear regression, additive and multiplicative error forms were allowed for the residuals which consist of both random and model errors. The one-dimensional single-layer model of a nonreactive solute with multiplicative error was judged to be the best of those tested. Results show the efficacy of the methodology in designing sparse but powerful sampling networks. Designs that sample five rows of wells at five or fewer times in any given row performed as well for model discrimination as the full set of samples taken up to eight times in a given row from as many as 89 rows. Also, designs for parameter estimation judged to be good by the methodology were as effective in reducing the variance of parameter estimates as arbitrary designs with many more samples. Results further showed that estimates of velocity and longitudinal dispersivity in one-dimensional models based on data from only five rows of fully screened wells each sampled five or fewer times were practically equivalent to values determined from moments analysis of the complete three-dimensional set of 29,285 samples taken during 16 sampling times.

  12. Design and Field Procedures in the US National Comorbidity Survey Replication Adolescent Supplement (NCS-A)

    PubMed Central

    Kessler, Ronald C.; Avenevoli, Shelli; Costello, E. Jane; Green, Jennifer Greif; Gruber, Michael J.; Heeringa, Steven; Merikangas, Kathleen R.; Pennell, Beth-Ellen; Sampson, Nancy A.; Zaslavsky, Alan M.

    2009-01-01

    An overview is presented of the design and field procedures of the US National Comorbidity Survey Replication Adolescent Supplement (NCS-A), a US face-to-face household survey of the prevalence and correlates of DSM-IV mental disorders. The survey was based on a dual-frame design that included 904 adolescent residents of the households that participated in the US National Comorbidity Survey Replication (85.9% response rate) and 9,244 adolescent students selected from a nationally representative sample of 320 schools (74.7% response rate). After expositing the logic of dual-frame designs, comparisons are presented of sample and population distributions on Census socio-demographic variables and, in the school sample, school characteristics. These document only minor differences between the samples and the population. The results of statistical analysis of the bias-efficiency trade-off in weight trimming are then presented. These show that modest trimming meaningfully reduces mean squared error. Analysis of comparative sample efficiency shows that the household sample is more efficient than the school sample, leading to the household sample getting a higher weight relative to its size in the consolidated sample relative to the school sample. Taken together, these results show that the NCS-A is an efficient sample of the target population with good representativeness on a range of socio-demographic and geographic variables. PMID:19507169

  13. Impact of Design Effects in Large-Scale District and State Assessments

    ERIC Educational Resources Information Center

    Phillips, Gary W.

    2015-01-01

    This article proposes that sampling design effects have potentially huge unrecognized impacts on the results reported by large-scale district and state assessments in the United States. When design effects are unrecognized and unaccounted for they lead to underestimating the sampling error in item and test statistics. Underestimating the sampling…

  14. Occupancy Modeling Species-Environment Relationships with Non-ignorable Survey Designs.

    PubMed

    Irvine, Kathryn M; Rodhouse, Thomas J; Wright, Wilson J; Olsen, Anthony R

    2018-05-26

Statistical models supporting inferences about species occurrence patterns in relation to environmental gradients are fundamental to ecology and conservation biology. A common implicit assumption is that the sampling design is ignorable and does not need to be formally accounted for in analyses. The analyst assumes data are representative of the desired population and statistical modeling proceeds. However, if datasets from probability and non-probability surveys are combined or unequal selection probabilities are used, the design may be non-ignorable. We outline the use of pseudo-maximum likelihood estimation for site-occupancy models to account for such non-ignorable survey designs. This estimation method accounts for the survey design by properly weighting the pseudo-likelihood equation. In our empirical example, legacy and newer randomly selected locations were surveyed for bats to bridge a historic statewide effort with an ongoing nationwide program. We provide a worked example using bat acoustic detection/non-detection data and show how analysts can diagnose whether their design is ignorable. Using simulations, we assessed whether our approach is viable for modeling datasets composed of sites contributed outside of a probability design. Pseudo-maximum likelihood estimates differed from the usual maximum likelihood occupancy estimates for some bat species. Using simulations, we show the maximum likelihood estimator of species-environment relationships with non-ignorable sampling designs was biased, whereas the pseudo-likelihood estimator was design-unbiased. However, in our simulation study the designs composed of a large proportion of legacy or non-probability sites resulted in estimation issues for standard errors. These issues were likely a result of highly variable weights confounded by small sample sizes (5% or 10% sampling intensity and 4 revisits). Aggregating datasets from multiple sources logically supports larger sample sizes and potentially increases spatial extents for statistical inferences. Our results suggest that ignoring the mechanism for how locations were selected for data collection (e.g., the sampling design) could result in erroneous model-based conclusions. Therefore, in order to ensure robust and defensible recommendations for evidence-based conservation decision-making, the survey design information in addition to the data themselves must be available for analysts. Details for constructing the weights used in estimation and code for implementation are provided. This article is protected by copyright. All rights reserved.
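
    A toy version of the weighted pseudo-likelihood idea for a constant-psi, constant-p occupancy model is sketched below; the survey weights here are arbitrary stand-ins for inverse selection probabilities, and this is not the authors' implementation (which handles covariates and diagnostics).

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_pseudo_loglik(theta, d, K, w):
        """Weighted (pseudo-)log-likelihood for a constant-psi, constant-p
        occupancy model: sites with detections contribute
        psi * p^d * (1-p)^(K-d); all-zero histories contribute
        psi * (1-p)^K + (1 - psi). Weights w enter multiplicatively."""
        psi, p = 1 / (1 + np.exp(-theta))       # logit scale keeps (0, 1)
        ll = np.where(
            d > 0,
            np.log(psi) + d * np.log(p) + (K - d) * np.log(1 - p),
            np.log(psi * (1 - p) ** K + (1 - psi)),
        )
        return -np.sum(w * ll)

    rng = np.random.default_rng(8)
    K, n = 4, 400
    z = rng.random(n) < 0.6                     # true occupancy, psi = 0.6
    d = rng.binomial(K, 0.3, n) * z             # detections at occupied sites
    w = rng.choice([0.5, 2.0], size=n)          # toy unequal survey weights
    fit = minimize(neg_pseudo_loglik, x0=np.zeros(2), args=(d, K, w))
    psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
    print(round(psi_hat, 3), round(p_hat, 3))
    ```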

  15. The Perils of Ignoring Design Effects in Experimental Studies: Lessons from a Mammography Screening Trial

    PubMed Central

    Glenn, Beth A.; Bastani, Roshan; Maxwell, Annette E.

    2013-01-01

    Objective Threats to external validity, including pretest sensitization and the interaction of selection and an intervention, are frequently overlooked by researchers despite their potential to significantly influence study outcomes. The purpose of this investigation was to conduct secondary data analyses to assess the presence of external validity threats in the setting of a randomized trial designed to promote mammography use in a high-risk sample of women. Design During the trial, recruitment and intervention implementation took place in three cohorts (with different ethnic composition), utilizing two different designs (pretest-posttest control group design; posttest-only control group design). Results Results reveal that the intervention produced different outcomes across cohorts, dependent upon the research design used and the characteristics of the sample. Conclusion These results illustrate the importance of weighing the pros and cons of potential research designs before making a selection and attending more closely to issues of external validity. PMID:23289517

  16. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys

    PubMed Central

    Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis. PMID:26125967

  17. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    PubMed

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  18. A comparison of two sampling designs for fish assemblage assessment in a large river

    USGS Publications Warehouse

    Kiraly, Ian A.; Coghlan, Stephen M.; Zydlewski, Joseph D.; Hayes, Daniel

    2014-01-01

    We compared the efficiency of stratified random and fixed-station sampling designs to characterize fish assemblages in anticipation of dam removal on the Penobscot River, the largest river in Maine. We used boat electrofishing methods in both sampling designs. Multiple 500-m transects were selected randomly and electrofished in each of nine strata within the stratified random sampling design. Within the fixed-station design, up to 11 transects (1,000 m) were electrofished, all of which had been sampled previously. In total, 88 km of shoreline were electrofished during summer and fall in 2010 and 2011, and 45,874 individuals of 34 fish species were captured. Species-accumulation and dissimilarity curve analyses indicated that all sampling effort, other than fall 2011 under the fixed-station design, provided repeatable estimates of total species richness and proportional abundances. Overall, our sampling designs were similar in precision and efficiency for sampling fish assemblages. The fixed-station design was negatively biased for estimating the abundance of species such as Common Shiner Luxilus cornutus and Fallfish Semotilus corporalis and was positively biased for estimating biomass for species such as White Sucker Catostomus commersonii and Atlantic Salmon Salmo salar. However, we found no significant differences between the designs for proportional catch and biomass per unit effort, except in fall 2011. The difference observed in fall 2011 was due to limitations on the number and location of fixed sites that could be sampled, rather than an inherent bias within the design. Given the results from sampling in the Penobscot River, application of the stratified random design is preferable to the fixed-station design because it is less prone to bias from varying sampling effort (as occurred in the fall 2011 fixed-station sample) or from purposeful site selection.

  19. Multi-saline sample distillation apparatus for hydrogen isotope analyses : design and accuracy

    USGS Publications Warehouse

    Hassan, Afifa Afifi

    1981-01-01

    A distillation apparatus for saline water samples was designed and tested. Six samples may be distilled simultaneously. The temperature was maintained at 400 °C to ensure complete dehydration of the precipitating salts. Consequently, the error in the measured ratio of stable hydrogen isotopes resulting from incomplete dehydration of hydrated salts during distillation was eliminated. (USGS)

  20. A STRINGENT COMPARISON OF SAMPLING AND ANALYSIS METHODS FOR VOCS IN AMBIENT AIR

    EPA Science Inventory

    A carefully designed study was conducted during the summer of 1998 to simultaneously collect samples of ambient air by canisters and compare the analysis results to direct sorbent preconcentration results taken at the time of sample collection. A total of 32 1-h sample sets we...

  1. Statistical power and optimal design in experiments in which samples of participants respond to samples of stimuli.

    PubMed

    Westfall, Jacob; Kenny, David A; Judd, Charles M

    2014-10-01

    Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.
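
    A toy Monte Carlo consistent with the article's central claim: when stimuli are a random sample, adding participants cannot push power to 1. All variance components and effect sizes below are invented, and the test is a simple t-test on stimulus means rather than a full mixed model:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)

        def power(n_subj, n_stim, d=0.4, sd_stim=0.4, sd_subj=0.4, sd_err=2.0, reps=500):
            # Power for a between-stimulus condition effect, with crossed random
            # subject and stimulus intercepts, tested on stimulus means.
            hits = 0
            cond = np.repeat([0.0, 1.0], n_stim // 2)          # stimuli nested in condition
            for _ in range(reps):
                stim_fx = rng.normal(0, sd_stim, n_stim) + d * cond
                subj_fx = rng.normal(0, sd_subj, n_subj)
                y = stim_fx[None, :] + subj_fx[:, None] + rng.normal(0, sd_err, (n_subj, n_stim))
                m = y.mean(axis=0)                              # stimulus means
                t, p = stats.ttest_ind(m[cond == 1], m[cond == 0])
                hits += p < 0.05
            return hits / reps

        # power rises with participants but plateaus: stimulus variance never averages out
        for n_subj in (20, 100, 1000):
            print(n_subj, power(n_subj, n_stim=16))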

  2. Generic particulate-monitoring system for retrofit to Hanford exhaust stacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camman, J.W.; Carbaugh, E.H.

    1982-11-01

    Evaluations of 72 sampling and monitoring systems were performed at Hanford as the initial phase of a program to upgrade such systems. Each evaluation included determination of theoretical sampling efficiencies for particle sizes ranging from 0.5 to 10 micrometers aerodynamic equivalent diameter, addressing anisokinetic bias, sample transport line losses, and collector device efficiency. Upgrades needed to meet current Department of Energy guidance for effluent sampling and monitoring were identified, and a cost for each upgrade was estimated. A relative priority for each system's upgrade was then established based on evaluation results, current operational status, and future plans for the facility being exhausted. Common system upgrade requirements led to the development of a generic design for common components of an exhaust stack sampling and monitoring system for airborne radioactive particulates. The generic design consists of commercially available off-the-shelf components to the extent practical and will simplify future stack sampling and monitoring system design, fabrication, and installation efforts. Evaluation results and their significance to system upgrades are emphasized. A brief discussion of the analytical models used and experience to date with the upgrade program is included. Development of the generic stack sampling and monitoring system design is outlined. Generic system design features and limitations are presented. Requirements for generic system retrofitting to existing exhaust stacks are defined and benefits derived from generic system application are discussed.

  3. An Alternative View of Some FIA Sample Design and Analysis Issues

    Treesearch

    Paul C. Van Deusen

    2005-01-01

    Sample design and analysis decisions are the result of compromises and inputs from many sources. The end result would likely change if different individuals or groups were involved in the planning process. Discussed here are some alternatives to the procedures that are currently being used for the annual inventory. The purpose is to indicate that alternatives exist and...

  4. Mars Science Laboratory CHIMRA: A Device for Processing Powdered Martian Samples

    NASA Technical Reports Server (NTRS)

    Sunshine, Daniel

    2010-01-01

    The CHIMRA is an extraterrestrial sample acquisition and processing device for the Mars Science Laboratory that emphasizes robustness and adaptability through design configuration. This work reviews the guidelines utilized to invent the initial CHIMRA and the strategy employed in advancing the design; these principles will be discussed in relation to both the final CHIMRA design and similar future devices. The computational synthesis necessary to mature a boxed-in impact-generating mechanism will be presented alongside a detailed mechanism description. Results from the development testing required to advance the design for a highly-loaded, long-life and high-speed bearing application will be presented. Lessons learned during the assembly and testing of this subsystem as well as results and lessons from the sample-handling development test program will be reviewed.

  5. Precision, time, and cost: a comparison of three sampling designs in an emergency setting.

    PubMed

    Deitchler, Megan; Deconinck, Hedwig; Bergeron, Gilles

    2008-05-02

    The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 x 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 x 30 cluster survey with two alternative sampling designs: a 33 x 6 cluster design (33 clusters, 6 observations per cluster) and a 67 x 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 x 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 x 6 and 67 x 3 design for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage which show a high intra-cluster correlation. Although the 33 x 6 and 67 x 3 designs provide wider confidence intervals than the 30 x 30 design for child anthropometric indicators, the 33 x 6 and 67 x 3 designs provide the opportunity to conduct a LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 x 30 design does not. For the household-level indicators tested in this study, the 67 x 3 design provides the most precise results. However, our results show that neither the 33 x 6 nor the 67 x 3 design are appropriate for assessing indicators of mortality. In this field application, data collection for the 33 x 6 and 67 x 3 designs required substantially less time and cost than that required for the 30 x 30 design. The findings of this study suggest the 33 x 6 and 67 x 3 designs can provide useful time- and resource-saving alternatives to the 30 x 30 method of data collection in emergency settings.
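
    A quick sketch of why the ranking among such designs depends on the intra-cluster correlation: for a fixed assumed prevalence, the 95% CI half-width follows from the design-effect-adjusted effective sample size (prevalence and ICC values here are illustrative):

        import math

        def half_width(clusters, per_cluster, icc, prev=0.20):
            # 95% CI half-width for a prevalence, shrinking n by the design effect
            n_eff = clusters * per_cluster / (1 + (per_cluster - 1) * icc)
            return 1.96 * math.sqrt(prev * (1 - prev) / n_eff)

        # low-ICC indicators favor 30 x 30; high-ICC indicators erode its advantage
        for c, m in [(30, 30), (33, 6), (67, 3)]:
            widths = {icc: round(half_width(c, m, icc), 3) for icc in (0.0, 0.05, 0.3)}
            print(f"{c} x {m}: {widths}")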

  6. Precision, time, and cost: a comparison of three sampling designs in an emergency setting

    PubMed Central

    Deitchler, Megan; Deconinck, Hedwig; Bergeron, Gilles

    2008-01-01

    The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 × 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 × 30 cluster survey with two alternative sampling designs: a 33 × 6 cluster design (33 clusters, 6 observations per cluster) and a 67 × 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 × 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 × 6 and 67 × 3 design for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage which show a high intra-cluster correlation. Although the 33 × 6 and 67 × 3 designs provide wider confidence intervals than the 30 × 30 design for child anthropometric indicators, the 33 × 6 and 67 × 3 designs provide the opportunity to conduct a LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 × 30 design does not. For the household-level indicators tested in this study, the 67 × 3 design provides the most precise results. However, our results show that neither the 33 × 6 nor the 67 × 3 design are appropriate for assessing indicators of mortality. In this field application, data collection for the 33 × 6 and 67 × 3 designs required substantially less time and cost than that required for the 30 × 30 design. The findings of this study suggest the 33 × 6 and 67 × 3 designs can provide useful time- and resource-saving alternatives to the 30 × 30 method of data collection in emergency settings. PMID:18454866

  7. Selection within households in health surveys

    PubMed Central

    Alves, Maria Cecilia Goi Porto; Escuder, Maria Mercedes Loureiro; Claro, Rafael Moreira; da Silva, Nilza Nunes

    2014-01-01

    OBJECTIVE To compare the efficiency and accuracy of sampling designs including and excluding the sampling of individuals within sampled households in health surveys. METHODS From a population survey conducted in the Baixada Santista Metropolitan Area, São Paulo, Southeastern Brazil, between 2006 and 2007, 1,000 samples were drawn for each design and estimates for people aged 18 to 59 and 18 and over were calculated for each sample. In the first design, 40 census tracts, 12 households per tract, and one person per household were sampled. In the second, no sampling within the household was performed; 40 census tracts were sampled, with 6 households per tract for the 18 to 59-year-old group and 5 or 6 per tract for the 18-and-over group. Precision and bias of proportion estimates for 11 indicators were assessed in the two final sets of 1,000 selected samples with the two types of design. They were compared by means of relative measurements: coefficient of variation, bias/mean ratio, bias/standard error ratio, and relative mean square error. Comparison of costs contrasted basic cost per person, household cost, number of people, and households. RESULTS Bias was found to be negligible for both designs. A lower precision was found in the design including individual sampling within households, and the costs were higher. CONCLUSIONS The design excluding individual sampling achieved higher levels of efficiency and accuracy and, accordingly, should be the first choice for investigators. Sampling of household dwellers should be adopted when there are reasons related to the study subject that may lead to bias in individual responses if multiple dwellers answer the proposed questionnaire. PMID:24789641

  8. An analysis of adaptive design variations on the sequential parallel comparison design for clinical trials

    PubMed Central

    Mi, Michael Y.; Betensky, Rebecca A.

    2013-01-01

    Background Currently, a growing placebo response rate has been observed in clinical trials for antidepressant drugs, a phenomenon that has made it increasingly difficult to demonstrate efficacy. The sequential parallel comparison design (SPCD) is a clinical trial design that was proposed to address this issue. The SPCD theoretically has the potential to reduce the sample size requirement for a clinical trial and to simultaneously enrich the study population to be less responsive to the placebo. Purpose Because the basic SPCD design already reduces the placebo response by removing placebo responders between the first and second phases of a trial, the purpose of this study was to examine whether we can further improve the efficiency of the basic SPCD and if we can do so when the projected underlying drug and placebo response rates differ considerably from the actual ones. Methods Three adaptive designs that used interim analyses to readjust the length of study duration for individual patients were tested to reduce the sample size requirement or increase the statistical power of the SPCD. Various simulations of clinical trials using the SPCD with interim analyses were conducted to test these designs through calculations of empirical power. Results From the simulations, we found that the adaptive designs can recover unnecessary resources spent in the traditional SPCD trial format with overestimated initial sample sizes and provide moderate gains in power. Under the first design, results showed up to a 25% reduction in person-days, with most power losses below 5%. In the second design, results showed up to a 8% reduction in person-days with negligible loss of power. In the third design using sample size re-estimation, up to 25% power was recovered from underestimated sample size scenarios. Limitations Given the numerous possible test parameters that could have been chosen for the simulations, the study’s results are limited to situations described by the parameters that were used, and may not generalize to all possible scenarios. Furthermore, drop-out of patients is not considered in this study. Conclusions It is possible to make an already complex design such as the SPCD adaptive, and thus more efficient, potentially overcoming the problem of placebo response at lower cost. Ultimately, such a design may expedite the approval of future effective treatments. PMID:23283576

  9. Sampling designs matching species biology produce accurate and affordable abundance indices

    PubMed Central

    Farley, Sean; Russell, Gareth J.; Butler, Matthew J.; Selinger, Jeff

    2013-01-01

    Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application, the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption of all individuals having an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps sampling only where resources attract animals (i.e., targeted sampling) would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource-limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on geographic positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams. This provided a scenario for testing the targeted approach. Grid and targeted sampling varied by trap amount, location (traps placed randomly, systematically or by expert opinion), and traps stationary or moved between capture sessions. We began by identifying when to sample, and if bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, plus encounter rates, and probabilities of capture and recapture. One grid (49 km² cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture sessions, which raised capture probabilities. The grid design was least biased (−10.5%), but imprecise (CV 21.2%), and used most effort (16,100 trap-nights). The targeted configuration was more biased (−17.3%), but most precise (CV 12.3%), with least effort (7,000 trap-nights). Targeted sampling generated encounter rates four times higher, and capture and recapture probabilities 11% and 60% higher than grid sampling, in a sampling frame 88% smaller. Bears had unequal probability of capture with both sampling designs, partly because some bears never had traps available to sample them. Hence, grid and targeted sampling generated abundance indices, not estimates. Overall, targeted sampling provided the most accurate and affordable design to index abundance. Targeted sampling may offer an alternative method to index the abundance of other species inhabiting expansive and inaccessible landscapes elsewhere, provided they are attracted to resource concentrations. PMID:24392290

  10. 76 FR 12960 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ... population to which generalizations will be made, the sampling frame, the sample design (including... for submission for other generic mechanisms that are designed to yield quantitative results. The... generic clearance for qualitative information will not be used for quantitative information collections...

  11. Sampling Lesbian, Gay, and Bisexual Populations

    ERIC Educational Resources Information Center

    Meyer, Ilan H.; Wilson, Patrick A.

    2009-01-01

    Sampling has been the single most influential component of conducting research with lesbian, gay, and bisexual (LGB) populations. Poor sampling designs can result in biased results that will mislead other researchers, policymakers, and practitioners. Investigators wishing to study LGB populations must therefore devote significant energy and…

  12. Real time flight simulation methodology

    NASA Technical Reports Server (NTRS)

    Parrish, E. A.; Cook, G.; Mcvey, E. S.

    1976-01-01

    An example sensitivity study is presented to demonstrate how a digital autopilot designer could make a decision on minimum sampling rate for computer specification. It consists of comparing the simulated step response of an existing analog autopilot and its associated aircraft dynamics to the digital version operating at various sampling frequencies, and specifying a sampling frequency that results in an acceptable change in relative stability. In general, the zero-order hold introduces phase lag, which will increase overshoot and settling time. It should be noted that this solution is for substituting a digital autopilot for a continuous autopilot. A complete redesign could yield results that more closely resemble the continuous case or that conform better to the original design goals.
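
    A back-of-the-envelope sketch of the phase-lag effect described here: a zero-order hold acts approximately as a T/2 delay, so the phase it subtracts at the loop's crossover frequency indicates how much relative stability each candidate sampling rate gives up (the crossover frequency is assumed for illustration):

        import math

        def zoh_phase_lag_deg(freq_hz, fs_hz):
            # ZOH ~ delay of T/2, so phase lag = omega * T / 2 = pi * f / fs radians
            return math.degrees(math.pi * freq_hz / fs_hz)

        crossover_hz = 2.0                  # assumed open-loop crossover frequency
        for fs in (10, 20, 50, 100):        # candidate sampling rates, Hz
            lag = zoh_phase_lag_deg(crossover_hz, fs)
            print(f"fs={fs:4d} Hz: extra phase lag at crossover = {lag:5.1f} deg")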

  13. Methods for sample size determination in cluster randomized trials

    PubMed Central

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-01-01

    Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
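
    A minimal sketch of the simplest approach the review describes, including the common adjustment for variable cluster sizes, DEFF = 1 + ((cv² + 1)m − 1)ρ (the unadjusted sample size, cluster size, ICC, and cv below are hypothetical):

        import math

        def crt_sample_size(n_indiv, mean_cluster, icc, cv=0.0):
            # Inflate an individually randomized sample size for cluster randomization;
            # cv is the coefficient of variation of cluster sizes (cv=0 -> equal clusters).
            deff = 1 + ((cv**2 + 1) * mean_cluster - 1) * icc
            n = math.ceil(n_indiv * deff)
            clusters_per_arm = math.ceil(n / (2 * mean_cluster))
            return n, clusters_per_arm

        print(crt_sample_size(n_indiv=400, mean_cluster=20, icc=0.05))          # equal clusters
        print(crt_sample_size(n_indiv=400, mean_cluster=20, icc=0.05, cv=0.6))  # variable sizes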

  14. Blinded sample size re-estimation in three-arm trials with 'gold standard' design.

    PubMed

    Mütze, Tobias; Friede, Tim

    2017-10-15

    In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.

  15. LDEF materials results for spacecraft applications: Executive summary

    NASA Astrophysics Data System (ADS)

    Whitaker, A. F.; Dooling, D.

    1995-03-01

    To address the challenges of space environmental effects, NASA designed the Long Duration Exposure Facility (LDEF) for an 18-month mission to expose thousands of samples of candidate materials that might be used on a space station or other orbital spacecraft. LDEF was launched in April 1984 and was to have been returned to Earth in 1985. Changes in mission schedules postponed retrieval until January 1990, after 69 months in orbit. Analyses of the samples recovered from LDEF have provided spacecraft designers and managers with the most extensive database on space materials phenomena. Many LDEF samples were greatly changed by extended space exposure. Among even the most radically altered samples, NASA and its science teams are finding a wealth of surprising conclusions and tantalizing clues about the effects of space on materials. Many were discussed at the first two LDEF results conferences and subsequent professional papers. The LDEF Materials Results for Spacecraft Applications Conference was convened in Huntsville to discuss implications for spacecraft design. Already, paint and thermal blanket selections for space station and other spacecraft have been affected by LDEF data. This volume synopsizes those results.

  16. LDEF materials results for spacecraft applications: Executive summary

    NASA Technical Reports Server (NTRS)

    Whitaker, A. F. (Compiler); Dooling, D. (Compiler)

    1995-01-01

    To address the challenges of space environmental effects, NASA designed the Long Duration Exposure Facility (LDEF) for an 18-month mission to expose thousands of samples of candidate materials that might be used on a space station or other orbital spacecraft. LDEF was launched in April 1984 and was to have been returned to Earth in 1985. Changes in mission schedules postponed retrieval until January 1990, after 69 months in orbit. Analyses of the samples recovered from LDEF have provided spacecraft designers and managers with the most extensive database on space materials phenomena. Many LDEF samples were greatly changed by extended space exposure. Among even the most radically altered samples, NASA and its science teams are finding a wealth of surprising conclusions and tantalizing clues about the effects of space on materials. Many were discussed at the first two LDEF results conferences and subsequent professional papers. The LDEF Materials Results for Spacecraft Applications Conference was convened in Huntsville to discuss implications for spacecraft design. Already, paint and thermal blanket selections for space station and other spacecraft have been affected by LDEF data. This volume synopsizes those results.

  17. The MISSE 7 Flexural Stress Effects Experiment After 1.5 Years of Wake Space Exposure

    NASA Technical Reports Server (NTRS)

    Snow, Kate E.; De Groh, Kim K.; Banks, Bruce A.

    2017-01-01

    Low Earth orbit space environment conditions, including ultraviolet radiation, thermal cycling, and atomic oxygen exposure, can cause degradation of exterior spacecraft materials over time. Radiation and thermal exposure often result in bond-breaking and embrittlement of polymers, reducing mechanical strength and structural integrity. An experiment called the Flexural Stress Effects Experiment (FSEE) was flown with the objective of determining the role of space environmental exposure in the degradation of polymers under flexural stress. The FSEE samples were flown in the wake orientation on the exterior of the International Space Station for 1.5 years. Twenty-four samples were flown: 12 were bent over a 0.375 in. mandrel and 12 over a 0.25 in. mandrel. This was designed to simulate flight configurations of insulation blankets on spacecraft. The samples consisted of assorted polyimide and fluorinated polymers with various coatings. Half the samples were designated for bend testing and the other half will be tensile tested. A non-standard bend-test procedure was designed to determine the surface strain at which embrittled polymers crack. All ten samples designated for bend testing have been tested. None of the control samples' polymers cracked, even under surface strains up to 19.7%, although one coating cracked. Of the ten flight samples tested, seven show increased embrittlement through bend-test induced cracking at surface strains from 0.70% to 11.73%. These results show that most of the tested polymers are embrittled due to space exposure when compared to their control samples. Determination of the extent of space-induced embrittlement of polymers is important for designing durable spacecraft.
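
    For reference, the surface strain from bending a thin sample over a mandrel is approximately ε = t/(D + t), with the neutral axis at mid-thickness; a sketch treating the quoted mandrel sizes as diameters and assuming a hypothetical 5-mil film:

        def surface_strain(thickness_mm, mandrel_diameter_mm):
            # outer-fiber strain of a film bent over a mandrel: eps = t / (D + t)
            return thickness_mm / (mandrel_diameter_mm + thickness_mm)

        t = 0.127  # 5-mil film thickness in mm (hypothetical)
        for d_in in (0.25, 0.375):
            d_mm = d_in * 25.4
            print(f'{d_in} in. mandrel: surface strain = {surface_strain(t, d_mm):.2%}')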

  18. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    PubMed

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently allocate newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
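
    The variance-minimizing randomization rate has a simple closed form in the fixed-sample, known-variance case: allocate in proportion to the arm standard deviations (Neyman allocation). A sketch under that simplification (the SDs and total n are invented; the paper's procedure is Bayesian and sequential):

        def optimal_allocation(sd_a, sd_b):
            # Neyman allocation: share of patients in arm A that minimizes
            # Var(xbar_A - xbar_B) for a fixed total sample size
            return sd_a / (sd_a + sd_b)

        def var_diff(n, r, sd_a, sd_b):
            # variance of the mean difference with allocation fraction r to arm A
            return sd_a**2 / (n * r) + sd_b**2 / (n * (1 - r))

        sd_a, sd_b, n = 2.0, 1.0, 120
        for r in (0.5, optimal_allocation(sd_a, sd_b)):
            print(f"r={r:.3f}: Var(diff)={var_diff(n, r, sd_a, sd_b):.4f}")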

  19. Cost-efficient designs for three-arm trials with treatment delivered by health professionals: Sample sizes for a combination of nested and crossed designs

    PubMed Central

    Moerbeek, Mirjam

    2018-01-01

    Background This article studies the design of trials that compare three treatment conditions that are delivered by two types of health professionals. One type of health professional delivers one treatment and the other type delivers two treatments; hence, this design is a combination of a nested and a crossed design. As each health professional treats multiple patients, the data have a nested structure. This nested structure has thus far been ignored in the design of such trials, which may result in an underestimate of the required sample size. In the design stage, the sample sizes should be determined such that a desired power is achieved for each of the three pairwise comparisons, while keeping costs or sample size at a minimum. Methods The statistical model that relates outcome to treatment condition and explicitly takes the nested data structure into account is presented. Mathematical expressions that relate sample size to power are derived for each of the three pairwise comparisons on the basis of this model. The cost-efficient design achieves sufficient power for each pairwise comparison at lowest cost. Alternatively, one may minimize the total number of patients. The sample sizes are found numerically, and an Internet application is available for this purpose. The design is also compared to a nested design in which each health professional delivers just one treatment. Results Mathematical expressions show that this design is more efficient than the nested design. For each pairwise comparison, power increases with the number of health professionals and the number of patients per health professional. The methodology of finding a cost-efficient design is illustrated using a trial that compares treatments for social phobia. The optimal sample sizes reflect the costs for training and supervising psychologists and psychiatrists, and the patient-level costs in the three treatment conditions. Conclusion This article provides the methodology for designing trials that compare three treatment conditions while taking the nesting of patients within health professionals into account. As such, it helps to avoid underpowered trials. To use the methodology, a priori estimates of the total outcome variances and intraclass correlation coefficients must be obtained from experts’ opinions or findings in the literature. PMID:29316807

  20. Comparison of sampling designs for estimating deforestation from landsat TM and MODIS imagery: a case study in Mato Grosso, Brazil.

    PubMed

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
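
    A schematic of the regression-estimator step described here: fit the TM-versus-MODIS relationship on the sampled blocks, then apply it to the MODIS values of every block to estimate the total (block counts and the data-generating model are invented):

        import numpy as np

        rng = np.random.default_rng(4)

        # Hypothetical block-level data: MODIS-derived deforestation known for all
        # blocks, TM-derived deforestation observed only on the sampled blocks.
        modis_all = rng.gamma(2.0, 5.0, 500)                   # all 500 blocks
        tm_all = 0.8 * modis_all + rng.normal(0, 2.0, 500)     # "truth" (unobserved in practice)
        sample = rng.choice(500, 60, replace=False)

        slope, intercept = np.polyfit(modis_all[sample], tm_all[sample], 1)
        total_reg = (intercept + slope * modis_all).sum()      # regression estimator of the total
        total_exp = tm_all[sample].mean() * 500                # simple expansion estimator
        print(round(total_reg, 1), round(total_exp, 1), round(tm_all.sum(), 1))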

  1. Comparison of Sampling Designs for Estimating Deforestation from Landsat TM and MODIS Imagery: A Case Study in Mato Grosso, Brazil

    PubMed Central

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block. PMID:25258742

  2. Report: Independent Environmental Sampling Shows Some Properties Designated by EPA as Available for Use Had Some Contamination

    EPA Pesticide Factsheets

    Report #15-P-0221, July 21, 2015. Some OIG sampling results showed contamination was still present at sites designated by the EPA as ready for reuse. This was unexpected and could signal a need to implement changes to ensure human health protection.

  3. 77 FR 26292 - Risk Evaluation and Mitigation Strategy Assessments: Social Science Methodologies to Assess Goals...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-03

    ... determine endpoints; questionnaire design and analyses; and presentation of survey results. To date, FDA has..., the workshop will invest considerable time in identifying best methodological practices for conducting... sample, sample size, question design, process, and endpoints. Panel 2 will focus on alternatives to...

  4. A multi-stage drop-the-losers design for multi-arm clinical trials.

    PubMed

    Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher

    2017-02-01

    Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.

  5. Effect of Study Design on Sample Size in Studies Intended to Evaluate Bioequivalence of Inhaled Short‐Acting β‐Agonist Formulations

    PubMed Central

    Zeng, Yaohui; Singh, Sachinkumar; Wang, Kai

    2017-01-01

    Abstract Pharmacodynamic studies that use methacholine challenge to assess bioequivalence of generic and innovator albuterol formulations are generally designed per published Food and Drug Administration guidance, with 3 reference doses and 1 test dose (3‐by‐1 design). These studies are challenging and expensive to conduct, typically requiring large sample sizes. We proposed 14 modified study designs as alternatives to the Food and Drug Administration–recommended 3‐by‐1 design, hypothesizing that adding reference and/or test doses would reduce sample size and cost. We used Monte Carlo simulation to estimate sample size. Simulation inputs were selected based on published studies and our own experience with this type of trial. We also estimated effects of these modified study designs on study cost. Most of these altered designs reduced sample size and cost relative to the 3‐by‐1 design, some decreasing cost by more than 40%. The most effective single study dose to add was 180 μg of test formulation, which resulted in an estimated 30% relative cost reduction. Adding a single test dose of 90 μg was less effective, producing only a 13% cost reduction. Adding a lone reference dose of either 180, 270, or 360 μg yielded little benefit (less than 10% cost reduction), whereas adding 720 μg resulted in a 19% cost reduction. Of the 14 study design modifications we evaluated, the most effective was addition of both a 90‐μg test dose and a 720‐μg reference dose (42% cost reduction). Combining a 180‐μg test dose and a 720‐μg reference dose produced an estimated 36% cost reduction. PMID:29281130

  6. Optimized Design and Analysis of Sparse-Sampling fMRI Experiments

    PubMed Central

    Perrachione, Tyler K.; Ghosh, Satrajit S.

    2013-01-01

    Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase the number of samples and improve statistical power. PMID:23616742
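
    A minimal sketch of recommendation (1): build the regressor by convolving the stimulus train with a hemodynamic response function and then sample it at the sparse acquisition times (the stimulation schedule, TR delay, and double-gamma approximation below are assumptions, not the paper's exact model):

        import numpy as np
        from scipy.stats import gamma

        def hrf(t):
            # crude double-gamma hemodynamic response approximation
            return gamma.pdf(t, 6.0) - gamma.pdf(t, 16.0) / 6.0

        dt = 0.1
        t = np.arange(0.0, 400.0, dt)
        stim = ((t % 10.0) < 2.0).astype(float)         # 2 s of stimulation every 10 s (assumed)
        bold = np.convolve(stim, hrf(np.arange(0.0, 32.0, dt)))[: t.size] * dt

        tr_delay = 8.0                                  # one sparse acquisition every 8 s
        acq_times = np.arange(0.0, 400.0, tr_delay)
        regressor = np.interp(acq_times, t, bold)       # model column sampled at acquisitions
        print(regressor[:5].round(4))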

  7. Optimized design and analysis of sparse-sampling FMRI experiments.

    PubMed

    Perrachione, Tyler K; Ghosh, Satrajit S

    2013-01-01

    Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase the number of samples and improve statistical power.

  8. Putting the 1991 census sample of anonymised records on your Unix workstation.

    PubMed

    Turton, I; Openshaw, S

    1995-03-01

    "The authors describe the development of a customised computer software package for easing the analysis of the U.K. 1991 Sample of Anonymised Records. The resulting USAR [Unix Sample of Anonymised Records] package is designed to be portable within the Unix environment. It offers a number of features such as interactive table design, intelligent data interpretation, and fuzzy query. An example of SAR analysis is provided." excerpt

  9. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments

    PubMed Central

    2013-01-01

    Background Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing cluster LQAS (C-LQAS) systems and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. Results To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. Conclusions We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs. PMID:24160725
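
    A condensed sketch of the risk calculation behind such designs, collapsing the cluster structure into a single beta-binomial with mean p and intraclass correlation ρ (thresholds and inputs are illustrative; the paper's framework models the two sampling stages explicitly):

        from scipy import stats

        def risks(n, d, p_hi, p_lo, rho):
            # Misclassification risks for the rule "accept the lot if successes >= d",
            # with clustering absorbed into a beta-binomial on the n sampled units.
            def beta_binom_cdf(k, p):
                a = p * (1 - rho) / rho            # beta-binomial with mean p
                b = (1 - p) * (1 - rho) / rho      # and intraclass correlation rho
                return stats.betabinom.cdf(k, n, a, b)
            alpha = beta_binom_cdf(d - 1, p_hi)    # reject a lot truly at the good level
            beta = 1 - beta_binom_cdf(d - 1, p_lo) # accept a lot truly at the poor level
            return alpha, beta

        print(risks(n=60, d=45, p_hi=0.85, p_lo=0.65, rho=0.1))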

  10. Optimal Digital Controller Design for a Servo Motor Taking Account of Intersample Behavior

    NASA Astrophysics Data System (ADS)

    Akiyoshi, Tatsuro; Imai, Jun; Funabiki, Shigeyuki

    A continuous-time plant with a discretized continuous-time controller does not remain stable if the sampling rate is lower than a certain level. Thus far, high-functioning electronic control has relied on high-cost hardware needed to implement discretized continuous-time controllers, while low-cost hardware generally does not offer a sufficiently high sampling rate. This technical note presents results comparing performance indices with and without intersample behavior, and offers an answer to the question of how a low-specification device can control a plant effectively. We consider a machine simulating wafer-handling robots at semiconductor factories, which is an electromechanical system driven by a direct-drive motor. We illustrate controller design for the robot with and without intersample behavior, and present simulation and experimental results using these controllers. Taking intersample behavior into account proves effective in improving control performance and enables a relatively long sampling period to be chosen. By controller design via a performance index with intersample behavior, we can cope with situations where a sufficiently short sampling period cannot be employed, and the freedom of controller design may be widened, especially in the choice of sampling period.

  11. Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BILISOLY, ROGER L.; MCKENNA, SEAN A.

    2003-01-01

    Previous work on sample design has been focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared by using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented. The first minimizes the sum of the eigenvalues of the predictive error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no-further-action decisions is site specific and cannot be calculated prior to the sampling. It may be advantageous to use the reduction in MPEV as a stopping rule for systematic sampling across the site that can then be followed by focused sampling in areas identified as having UXO during the systematic sampling. The techniques presented here provide answers to the questions of "Where to sample?" and "When to stop?" and are capable of running in near real time to support iterative site characterization campaigns.
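
    A toy version of the comparison machinery described above: treat a transect as a sequence of point samples, compute simple-kriging prediction-error variances on a fixed reference grid, and sum them (the trace of the error-covariance matrix, i.e., the sum of its eigenvalues). The covariance model and transect shapes are invented:

        import numpy as np

        def mpev(grid, sampled, length=0.2, sill=1.0, nug=1e-6):
            # Sum of simple-kriging prediction-error variances on the grid,
            # using an exponential covariance model.
            def cov(a, b):
                d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
                return sill * np.exp(-d / length)
            K = cov(sampled, sampled) + nug * np.eye(len(sampled))
            k = cov(sampled, grid)
            err_var = sill - np.einsum("ij,ij->j", k, np.linalg.solve(K, k))
            return err_var.sum()

        grid = np.array([[x, y] for x in np.linspace(0, 1, 20) for y in np.linspace(0, 1, 20)])
        s = np.linspace(0, 1, 25)
        transect_a = np.column_stack([np.full(25, 0.5), s])                  # straight transect
        transect_b = np.column_stack([s, 0.5 + 0.3 * np.sin(6 * s)])         # meandering transect
        print("straight :", mpev(grid, transect_a))
        print("meander  :", mpev(grid, transect_b))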

  12. Exploring effective sampling design for monitoring soil organic carbon in degraded Tibetan grasslands.

    PubMed

    Chang, Xiaofeng; Bao, Xiaoying; Wang, Shiping; Zhu, Xiaoxue; Luo, Caiyun; Zhang, Zhenhua; Wilkes, Andreas

    2016-05-15

    The effects of climate change and human activities on grassland degradation and soil carbon stocks have become a focus of both research and policy. However, lack of research on appropriate sampling design prevents accurate assessment of soil carbon stocks and stock changes at community and regional scales. Here, we conducted an intensive survey with 1196 sampling sites over an area of 190 km² of degraded alpine meadow. Compared to lightly degraded meadow, soil organic carbon (SOC) stocks in moderately, heavily and extremely degraded meadow were reduced by 11.0%, 13.5% and 17.9%, respectively. Our field survey sampling design was overly intensive for estimating SOC status with a tolerable uncertainty of 10%. Power analysis showed that the optimal sampling density to achieve the desired accuracy would be 2, 3, 5 and 7 sites per 10 km² for lightly, moderately, heavily and extremely degraded meadows, respectively. If a subsequent paired sampling design with the optimum sample size were performed, assuming stock change rates predicted by experimental and modeling results, we estimate that about 5-10 years would be necessary to detect expected trends in SOC in the top 20 cm soil layer. Our results highlight the utility of conducting preliminary surveys to estimate the appropriate sampling density, avoiding wasted resources due to over-sampling, and to estimate the sampling interval required to detect an expected sequestration rate. Future studies will be needed to evaluate spatial and temporal patterns of SOC variability.
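
    The optimal-density result above follows from the standard relative-error sample size formula n = (z·CV/e)². A minimal sketch, with hypothetical coefficients of variation per degradation class (back-calculated here so the output reproduces the reported 2, 3, 5 and 7 sites per 10 km²):

    ```python
    import math

    # Sample size to estimate mean SOC within relative error e at 95%
    # confidence: n = (z * CV / e)^2. The CVs per class are hypothetical,
    # chosen only so the printed densities match those reported above.
    z, e = 1.96, 0.10
    cv_by_class = {"light": 0.070, "moderate": 0.088,
                   "heavy": 0.110, "extreme": 0.130}

    for cls, cv in cv_by_class.items():
        n = math.ceil((z * cv / e) ** 2)
        print(f"{cls:>8}: {n} sites per 10 km^2")
    ```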

  13. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required by grant applications and peer-reviewed publications but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample sizes required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by their relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation); a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are also provided for convenient use. Extensive simulations showed that the distribution of the product method and the bootstrapping method outperform Sobel's method, and the product method is recommended in practice because it requires less computation time than bootstrapping. An R package has been developed for sample size determination by the product method in longitudinal mediation study designs.
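
    The simulation-based approach can be illustrated in its simplest form. The sketch below estimates the empirical power of Sobel's test for a single-level mediation model X -> M -> Y; the paper's multilevel longitudinal model adds repeated measures and an ICC on top of this, and the effect sizes and sample sizes here are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ols_slope_se(x, y):
        """Slope and its standard error for simple OLS with intercept."""
        xc, yc = x - x.mean(), y - y.mean()
        beta = (xc @ yc) / (xc @ xc)
        resid = yc - beta * xc
        se = np.sqrt(resid @ resid / (len(x) - 2) / (xc @ xc))
        return beta, se

    def sobel_power(n, a=0.3, b=0.3, reps=2000):
        """Empirical power of Sobel's z-test over simulated datasets."""
        hits = 0
        for _ in range(reps):
            x = rng.normal(size=n)
            m = a * x + rng.normal(size=n)        # mediator
            y = b * m + rng.normal(size=n)        # outcome
            ah, se_a = ols_slope_se(x, m)
            bh, se_b = ols_slope_se(m, y)
            z = ah * bh / np.sqrt(ah**2 * se_b**2 + bh**2 * se_a**2)
            hits += abs(z) > 1.96
        return hits / reps

    for n in (50, 100, 200):
        print(f"n = {n:3d}: empirical power = {sobel_power(n):.2f}")
    ```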

  14. White blood cell counting system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The design, fabrication, and tests of a prototype white blood cell counting system for use in the Skylab IMSS are presented. The counting system consists of a sample collection subsystem, sample dilution and fluid containment subsystem, and a cell counter. Preliminary test results show the sample collection and the dilution subsystems are functional and fulfill design goals. Results for the fluid containment subsystem show the handling bags cause counting errors due to: (1) adsorption of cells to the walls of the container, and (2) inadequate cleaning of the plastic bag material before fabrication. It was recommended that another bag material be selected.

  15. Impact of spatial variability and sampling design on model performance

    NASA Astrophysics Data System (ADS)

    Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes

    2017-04-01

    Many environmental physical and chemical parameters, as well as species distributions, display spatial variability at different scales. When measurements are very costly in labour time or money, a choice has to be made between high sampling resolution at small scales with low spatial cover of the study area, and lower small-scale resolution, which brings local data uncertainties but better spatial cover of the whole area. This dilemma is often faced in the design of field sampling campaigns for large-scale studies. When the gathered field data are subsequently used for modelling purposes, the choice of sampling design and the resulting data quality influence the model performance criteria. We studied this influence with a virtual model study based on a large dataset of field information on the spatial variation of earthworms at different scales. To this end we built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg, Germany). First, the field-scale abundance of earthworms was estimated using a catchment-scale model based on 65 field measurements. Subsequently, the high small-scale variability was added using semi-variograms, based on five fields with a total of 430 measurements in a spatially nested sampling design, to estimate the nugget, range and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one up to 50 random points per field. We then used these data to rebuild the catchment-scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The results show clearly that a large part of the unexplained deviance of the models is due to the very high small-scale variability in earthworm abundance: models based on single virtual sampling points on average obtain an explained deviance of 0.20 and a correlation coefficient of 0.64. With increasing sampling points per field, we averaged the measured abundance within each field to obtain a more representative field value. Doubling the samplings per field strongly improved the model performance criteria (explained deviance 0.38 and correlation coefficient 0.73). With 50 sampling points per field the performance criteria were 0.91 and 0.97, respectively, for explained deviance and correlation coefficient. The relationship between number of samplings and performance criteria can be described by a saturation curve; beyond five samples per field the model improvement becomes rather small. With this contribution we wish to discuss the impact of data variability at the sampling scale on model performance and the implications for sampling design, assessment of model results and ecological inferences.
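
    The saturation effect reported above is easy to reproduce in miniature: draw true field means, add large small-scale (nugget) noise, average n virtual samples per field, and watch the correlation with the true means rise and flatten. All numbers in this sketch are illustrative, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n_fields = 65
    true_mean = rng.gamma(2.0, 5.0, n_fields)   # field-scale abundance
    nugget_sd = 8.0                             # small-scale variability

    for n in (1, 2, 5, 10, 50):
        # n virtual point samples per field, averaged to a field value
        pts = true_mean[:, None] + rng.normal(0, nugget_sd, (n_fields, n))
        field_est = pts.mean(axis=1)
        r = np.corrcoef(true_mean, field_est)[0, 1]
        print(f"n = {n:2d} samples/field: r = {r:.2f}")
    ```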

  16. Methodology for back-contamination risk assessment for a Mars sample return mission

    NASA Technical Reports Server (NTRS)

    Merkhofer, M. W.; Quinn, D. J.

    1977-01-01

    The risk of back-contamination from Mars Surface Sample Return (MSSR) missions is assessed. The methodology is designed to provide an assessment of the probability that a given mission design and strategy will result in accidental release of Martian organisms acquired as a result of MSSR. This is accomplished through the construction of risk models describing the mission risk elements and their impact on back-contamination probability. A conceptual framework is presented for using the risk model to evaluate mission design decisions that require a trade-off between science and planetary protection considerations.
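
    The kind of risk model described can be sketched as a simple event tree in which the release probability is accumulated over mission failure scenarios. Every probability below is a made-up placeholder, not a value from the assessment.

    ```python
    # Minimal event-tree sketch of a back-contamination risk model:
    # P(release) = P(viable organisms in sample) *
    #              sum over scenarios of P(breach | scenario) * P(scenario).
    # All numbers are illustrative placeholders.
    p_organism_in_sample = 1e-3
    scenarios = {                      # (P(breach | scenario), P(scenario))
        "reentry breakup":     (1e-4, 1e-2),
        "landing impact":      (1e-3, 1e-3),
        "lab handling mishap": (1e-2, 1e-4),
    }
    p_release = p_organism_in_sample * sum(
        p_breach * p_event for p_breach, p_event in scenarios.values()
    )
    print(f"P(release) ~ {p_release:.2e}")
    ```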

  17. Network Sampling with Memory: A proposal for more efficient sampling from social networks.

    PubMed

    Mouw, Ted; Verdery, Ashton M

    2012-08-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)-the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a "List" mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a "Search" mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS.
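
    The design effect at the heart of this comparison can be demonstrated on a toy network: a random-walk sampler that mixes slowly between two communities produces a far noisier mean estimate than SRS. The graph model and parameters below are crude illustrations, not the NSM algorithm itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Two communities of 200 nodes; the trait is clustered by community.
    N, n, reps = 400, 50, 1000
    attr = np.r_[np.ones(N // 2), np.zeros(N // 2)]

    def step(i):
        """One random-walk step: rare bridges across communities."""
        if rng.random() < 0.05:
            return (i + N // 2) % N
        base = 0 if i < N // 2 else N // 2
        return base + (i - base + int(rng.integers(1, 20))) % (N // 2)

    srs = [attr[rng.choice(N, n, replace=False)].mean() for _ in range(reps)]
    rds = []
    for _ in range(reps):
        node, vals = int(rng.integers(N)), []
        for _ in range(n):
            vals.append(attr[node])
            node = step(node)
        rds.append(np.mean(vals))

    # DE = sampling variance of the walk estimator relative to SRS
    print(f"design effect DE = {np.var(rds) / np.var(srs):.1f}")
    ```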

  18. Network Sampling with Memory: A proposal for more efficient sampling from social networks

    PubMed Central

    Mouw, Ted; Verdery, Ashton M.

    2013-01-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)—the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a “List” mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a “Search” mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246

  19. [The design and application of domestic mid-IR fiber optics].

    PubMed

    Weng, Shi-fu; Gao, Jian-ping; Xu, Yi-zhuang; Yang, Li-min; Bian, Bei-ya; Xiang, Hai-bo; Wu, Jin-guang

    2004-05-01

    The combination of mid-IR fiber optics and FTIR has made non-invasive determination of samples in situ, over long distances, and in vivo possible. In this paper, domestic mid-IR fiber optics were improved in order to investigate their transmission ability and their application to sample determination. A new design produced a bare fiber with lower energy loss and a higher signal-to-noise ratio. The spectra of H2O/EtOH and tissue samples were measured using the newly designed fiber, and the results show that home-made mid-IR fiber optics can be applied to the determination of general and biological samples.

  20. The perils of ignoring design effects in experimental studies: lessons from a mammography screening trial.

    PubMed

    Glenn, Beth A; Bastani, Roshan; Maxwell, Annette E

    2013-01-01

    Threats to external validity, including pretest sensitisation and the interaction of selection and an intervention, are frequently overlooked by researchers despite their potential to significantly influence study outcomes. The purpose of this investigation was to conduct secondary data analyses to assess the presence of external validity threats in a randomised trial designed to promote mammography use in a high-risk sample of women. During the trial, recruitment and intervention implementation took place in three cohorts (with different ethnic composition), utilising two different designs (pretest-posttest control group design and posttest-only control group design). Results reveal that the intervention produced different outcomes across cohorts, depending on the research design used and the characteristics of the sample. These results illustrate the importance of weighing the pros and cons of potential research designs before making a selection, and of attending more closely to issues of external validity.

  1. Sensitive Metamaterial Sensor for Distinction of Authentic and Inauthentic Fuel Samples

    NASA Astrophysics Data System (ADS)

    Tümkaya, Mehmet Ali; Dinçer, Furkan; Karaaslan, Muharrem; Sabah, Cumali

    2017-08-01

    A metamaterial-based sensor has been realized to distinguish authentic and inauthentic fuel samples in the microwave frequency regime. Unlike many studies in the literature on metamaterial-based sensor applications, this study focuses on a compact metamaterial-based sensor operating in the X-band frequency range. Firstly, the electromagnetic properties of authentic and inauthentic fuel samples were obtained experimentally in a laboratory environment. Secondly, these experimental results were used to design and create a highly efficient metamaterial-based sensor with easy fabrication characteristics and a simple design structure. The experimental results for the sensor were in good agreement with the numerical ones. The proposed sensor offers an efficient design and can be used to detect fuel and multiple other liquids in various application fields, from medical to military, in several frequency regimes.

  2. A general unified framework to assess the sampling variance of heritability estimates using pedigree or marker-based relationships.

    PubMed

    Visscher, Peter M; Goddard, Michael E

    2015-01-01

    Heritability is a population parameter of importance in evolution, plant and animal breeding, and human medical genetics. It can be estimated using pedigree designs and, more recently, using relationships estimated from markers. We derive the sampling variance of the estimate of heritability for a wide range of experimental designs, assuming that estimation is by maximum likelihood and that the resemblance between relatives is solely due to additive genetic variation. We show that well-known results for balanced designs are special cases of a more general unified framework. For pedigree designs, the sampling variance is inversely proportional to the variance of relationship in the pedigree and is proportional to 1/N, whereas for population samples it is approximately proportional to 1/N², where N is the sample size. Variation in relatedness is a key parameter in the quantification of the sampling variance of heritability. Consequently, the sampling variance is high for populations with large recent effective population size (e.g., humans) because this causes low variation in relationship. However, even using human population samples, low sampling variance is possible with high N.
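
    A rough numeric sketch of these scaling results, assuming (this is my reading of the abstract, not the paper's exact formulas) var(h²) ≈ 2/(N·var_r) for pedigree designs and var(h²) ≈ 2/(N²·var_r) for population samples, with illustrative values for the variance of relatedness var_r:

    ```python
    import math

    # Hypothetical variances of relatedness: a pedigree mixing close and
    # distant kin versus conventionally unrelated population samples.
    var_r_pedigree = 0.04
    var_r_population = 2e-5

    for N in (1_000, 10_000, 100_000):
        se_ped = math.sqrt(2 / (N * var_r_pedigree))        # ~ 1/sqrt(N)
        se_pop = math.sqrt(2 / (N**2 * var_r_population))   # ~ 1/N
        print(f"N={N:>6}: SE(pedigree)={se_ped:.3f}  SE(population)={se_pop:.4f}")
    ```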

  3. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    PubMed

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

    A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel-group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion has paid insufficient attention to the true virtues of the delayed-start design and to its implications in terms of required sample size, overall information, and interpretation of the estimate in the context of small populations. Our aim was to evaluate whether the delayed-start design offers real advantages, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel-group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios for the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs whose effects develop over time. In addition, the sample size will always increase as a consequence of the reduced time on placebo, which results in a smaller observable treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel-group RCT in the field of rare disease, without discussing the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements compared to those expected under a standard parallel-group design. This also impacts benefit-risk assessment.
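
    The sample size implication can be made concrete with the standard two-arm formula: if reduced time on placebo dilutes the detectable effect delta, the required n per arm grows with the square of the dilution. The effect size, variance, and dilution fractions below are placeholders.

    ```python
    import math

    # Two-arm sample size per group: n = 2 (z_{1-a/2} + z_{1-b})^2 s^2 / d^2
    z = 1.96 + 0.8416          # alpha = 0.05 two-sided, 80% power
    sigma, full_effect = 1.0, 0.5

    for retained in (1.0, 0.8, 0.6):   # fraction of the full effect observable
        delta = full_effect * retained
        n = math.ceil(2 * z**2 * sigma**2 / delta**2)
        print(f"effect retained {retained:.0%}: n = {n} per arm")
    ```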

  4. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    PubMed

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it was, however, proven that when observations follow a normal distribution and the interim results show promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
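
    The conditional power criterion can be computed directly for normally distributed observations. A minimal sketch, using the usual Brownian-motion formulation under the current-trend assumption (the interim drift estimate is extrapolated to the end of the trial); the interim z-scores and information fraction are illustrative:

    ```python
    from math import sqrt
    from statistics import NormalDist

    Phi = NormalDist().cdf

    def conditional_power(z1, t, z_crit=1.96):
        """Conditional power under the current trend. At information
        fraction t, B(t) = z1*sqrt(t) is Brownian motion with drift
        theta = z1/sqrt(t); reject at the end if B(1) >= z_crit."""
        theta = z1 / sqrt(t)
        return 1 - Phi((z_crit - z1 * sqrt(t) - theta * (1 - t)) / sqrt(1 - t))

    # At t = 0.5, an interim z of about 1.4 sits near the 50% boundary
    # discussed above.
    for z1 in (1.0, 1.4, 1.8):
        print(f"z1 = {z1}: CP = {conditional_power(z1, t=0.5):.2f}")
    ```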

  5. Design and Practices for Use of Automated Drilling and Sample Handling in MARTE While Minimizing Terrestrial and Cross Contamination

    NASA Astrophysics Data System (ADS)

    Miller, David P.; Bonaccorsi, Rosalba; Davis, Kiel

    2008-10-01

    Mars Astrobiology Research and Technology Experiment (MARTE) investigators used an automated drill and sample processing hardware to detect and categorize life-forms found in subsurface rock at Río Tinto, Spain. For the science to be successful, it was necessary for the biomass from other sources -- whether from previously processed samples (cross contamination) or the terrestrial environment (forward contamination) -- to be insignificant. The hardware and practices used in MARTE were designed around this problem. Here, we describe some of the design issues that were faced and classify them into problems that are unique to terrestrial tests versus problems that would also exist for a system that was flown to Mars. Assessment of the biomass at various stages in the sample handling process revealed mixed results; the instrument design seemed to minimize cross contamination, but contamination from the surrounding environment sometimes made its way onto the surface of samples. Techniques used during the MARTE Río Tinto project, such as facing the sample, appear to remove this environmental contamination without introducing significant cross contamination from previous samples.

  6. Design and practices for use of automated drilling and sample handling in MARTE while minimizing terrestrial and cross contamination.

    PubMed

    Miller, David P; Bonaccorsi, Rosalba; Davis, Kiel

    2008-10-01

    Mars Astrobiology Research and Technology Experiment (MARTE) investigators used an automated drill and sample processing hardware to detect and categorize life-forms found in subsurface rock at Río Tinto, Spain. For the science to be successful, it was necessary for the biomass from other sources--whether from previously processed samples (cross contamination) or the terrestrial environment (forward contamination)--to be insignificant. The hardware and practices used in MARTE were designed around this problem. Here, we describe some of the design issues that were faced and classify them into problems that are unique to terrestrial tests versus problems that would also exist for a system that was flown to Mars. Assessment of the biomass at various stages in the sample handling process revealed mixed results; the instrument design seemed to minimize cross contamination, but contamination from the surrounding environment sometimes made its way onto the surface of samples. Techniques used during the MARTE Río Tinto project, such as facing the sample, appear to remove this environmental contamination without introducing significant cross contamination from previous samples.

  7. Outcome-Dependent Sampling with Interval-Censored Failure Time Data

    PubMed Central

    Zhou, Qingning; Cai, Jianwen; Zhou, Haibo

    2017-01-01

    Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computation difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method works well for practical situations and is more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664

  8. Effect of uncertainties on probabilistic-based design capacity of hydrosystems

    NASA Astrophysics Data System (ADS)

    Tung, Yeou-Koung

    2018-02-01

    Hydrosystems engineering designs involve analysis of hydrometric data (e.g., rainfall, floods) and use of hydrologic/hydraulic models, all of which contribute various degrees of uncertainty to the design process. Uncertainties in hydrosystem designs can be generally categorized into aleatory and epistemic types. The former arises from the natural randomness of hydrologic processes whereas the latter are due to knowledge deficiency in model formulation and model parameter specification. This study shows that the presence of epistemic uncertainties induces uncertainty in determining the design capacity. Hence, the designer needs to quantify the uncertainty features of design capacity to determine the capacity with a stipulated performance reliability under the design condition. Using detention basin design as an example, the study illustrates a methodological framework by considering aleatory uncertainty from rainfall and epistemic uncertainties from the runoff coefficient, curve number, and sampling error in design rainfall magnitude. The effects of including different items of uncertainty and performance reliability on the design detention capacity are examined. A numerical example shows that the mean value of the design capacity of the detention basin increases with the design return period and this relation is found to be practically the same regardless of the uncertainty types considered. The standard deviation associated with the design capacity, when subject to epistemic uncertainty, increases with both design frequency and items of epistemic uncertainty involved. It is found that the epistemic uncertainty due to sampling error in rainfall quantiles should not be ignored. Even with a sample size of 80 (relatively large for a hydrologic application) the inclusion of sampling error in rainfall quantiles resulted in a standard deviation about 2.5 times higher than that considering only the uncertainty of the runoff coefficient and curve number. Furthermore, the presence of epistemic uncertainties in the design would result in under-estimation of the annual failure probability of the hydrosystem and has a discounting effect on the anticipated design return period.
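
    The framework described lends itself to a compact Monte Carlo sketch: aleatory rainfall, epistemic uncertainty in the runoff coefficient, and sampling error in the design-rainfall quantile are all sampled, and the design capacity is read off as a quantile of the resulting distribution at the stipulated reliability. The storage relation and every parameter below are illustrative placeholders, not the paper's detention basin model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    n = 100_000
    rain_quantile = 100.0                     # mm, nominal T-year design rainfall
    rain = rng.normal(rain_quantile, 8.0, n)  # epistemic: sampling error in quantile
    runoff_c = rng.uniform(0.55, 0.75, n)     # epistemic: runoff coefficient
    area = 2.0                                # km^2, assumed known

    # Toy storage relation: C * depth * area (mm * km^2 -> 1e3 m^3)
    capacity = runoff_c * rain * area * 1e3

    for rel in (0.5, 0.9, 0.99):
        print(f"capacity at {rel:.0%} reliability: "
              f"{np.quantile(capacity, rel):,.0f} m^3")
    ```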

  9. Testing how voluntary participation requirements in an environmental study affect the planned random sample design outcomes: implications for the predictions of values and their uncertainty.

    NASA Astrophysics Data System (ADS)

    Ander, Louise; Lark, Murray; Smedley, Pauline; Watts, Michael; Hamilton, Elliott; Fletcher, Tony; Crabbe, Helen; Close, Rebecca; Studden, Mike; Leonardi, Giovanni

    2015-04-01

    Random sampling design is optimal in order to be able to assess outcomes, such as the mean of a given variable across an area. However, this optimal sampling design may be compromised to an unknown extent by unavoidable real-world factors: the extent to which the study design can still be considered random, and the influence this may have on the choice of appropriate statistical data analysis is examined in this work. We take a study which relied on voluntary participation for the sampling of private water tap chemical composition in England, UK. This study was designed and implemented as a categorical, randomised study. The local geological classes were grouped into 10 types, which were considered to be most important in likely effects on groundwater chemistry (the source of all the tap waters sampled). Locations of the users of private water supplies were made available to the study group from the Local Authority in the area. These were then assigned, based on location, to geological groups 1 to 10 and randomised within each group. However, the permission to collect samples then required active, voluntary participation by householders and thus, unlike many environmental studies, could not always follow the initial sample design. Impediments to participation ranged from 'willing but not available' during the designated sampling period, to a lack of response to requests to sample (assumed to be wholly unwilling or unable to participate). Additionally, a small number of unplanned samples were collected via new participants making themselves known to the sampling teams, during the sampling period. Here we examine the impact this has on the 'random' nature of the resulting data distribution, by comparison with the non-participating known supplies. We consider the implications this has on choice of statistical analysis methods to predict values and uncertainty at un-sampled locations.

  10. Estimation after classification using lot quality assurance sampling: corrections for curtailed sampling with application to evaluating polio vaccination campaigns.

    PubMed

    Olives, Casey; Valadez, Joseph J; Pagano, Marcello

    2014-03-01

    To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design, whereas the proposed estimators show no bias. Clustering does not affect the bias of these estimators, although across simulations standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in the 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision; curtailed LQAS designs further reduce the sample size when coverage is high. The results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible but that these designs also reduce the sample size.
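
    The bias being corrected is easy to exhibit by simulation: stop sampling as soon as the accept/reject decision is settled, and the naive estimate successes/trials drifts away from the true coverage. The sketch below borrows the n = 60, d = 33 design mentioned above but uses an arbitrary true coverage; it demonstrates the bias only, not the paper's unbiased estimators.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n, d, reps, p = 60, 33, 20_000, 0.6
    naive = []
    for _ in range(reps):
        s = t = 0
        while t < n:
            s += rng.random() < p
            t += 1
            # Curtailment: stop once the decision can no longer change,
            # i.e. the count has already crossed d or can no longer reach it.
            if s > d or s + (n - t) <= d:
                break
        naive.append(s / t)

    print(f"true p = {p}, mean naive estimate = {np.mean(naive):.3f}")
    ```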

  11. Design to monitor trend in abundance and presence of American beaver (Castor canadensis) at the national forest scale.

    PubMed

    Beck, Jeffrey L; Dauwalter, Daniel C; Gerow, Kenneth G; Hayward, Gregory D

    2010-05-01

    Wildlife conservationists design monitoring programs to assess population dynamics, project future population states, and evaluate the impacts of management actions on populations. Because agency mandates and conservation laws call for monitoring data to elicit management responses, it is imperative to design programs that match the administrative scale at which management decisions are made. We describe a program to monitor population trends in American beaver (Castor canadensis) on the US Department of Agriculture, Black Hills National Forest (BHNF) in southwestern South Dakota and northeastern Wyoming, USA. Beaver have been designated as a management indicator species on the BHNF because of their association with riparian and aquatic habitats and their status as a keystone species. We designed our program to monitor the density of beaver food caches (abundance) within sampling units with beaver and the proportion of sampling units with beavers present at the scale of a national forest. We designated watersheds as sampling units in a stratified random sampling design that we developed based on habitat modeling results. Habitat modeling indicated that the most suitable beaver habitat was near perennial water, near aspen (Populus tremuloides) and willow (Salix spp.), and in low gradient streams at lower elevations. Results from the initial monitoring period in October 2007 allowed us to assess costs and logistical considerations, validate our habitat model, and conduct power analyses to assess whether our sampling design could detect the level of declines in beaver stated in the monitoring objectives. Beaver food caches were located in 20 of 52 sampled watersheds. Monitoring 20 to 25 watersheds with beaver should provide sufficient power to detect 15-40% declines in the beaver food cache index as well as a twofold decline in the odds of beaver being present in watersheds. Indices of abundance, such as the beaver food cache index, provide a practical measure of population status for long-term monitoring across broad landscapes such as national forests.

  12. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
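
    The criteria being compared are all functionals of the Fisher Information Matrix. Below is a minimal sketch for the Verhulst-Pearl logistic model cited above, scoring random candidate schedules of five sampling times by D-optimality (log-determinant) and E-optimality (smallest eigenvalue); the parameter values, noise level, and crude random search are illustrative, not the paper's Prohorov-metric framework or its SE-optimal criterion.

    ```python
    import numpy as np

    theta = np.array([17.5, 0.7, 0.1])   # K, r, x0 (illustrative)

    def model(t, th):
        """Verhulst-Pearl logistic: x(t) = K / (1 + (K/x0 - 1) e^{-r t})."""
        K, r, x0 = th
        return K / (1 + (K / x0 - 1) * np.exp(-r * t))

    def fim(times, th=theta, h=1e-6, sigma=0.1):
        """FIM from finite-difference sensitivities, S^T S / sigma^2."""
        S = np.empty((len(times), len(th)))
        for j in range(len(th)):
            dp = th.copy()
            dp[j] += h
            S[:, j] = (model(times, dp) - model(times, th)) / h
        return S.T @ S / sigma**2

    rng = np.random.default_rng(5)
    best = {"D": (None, -np.inf), "E": (None, -np.inf)}
    for _ in range(2000):
        times = np.sort(rng.uniform(0, 25, 5))
        F = fim(times)
        scores = {"D": np.linalg.slogdet(F)[1],      # log det
                  "E": np.linalg.eigvalsh(F)[0]}     # smallest eigenvalue
        for c in best:
            if scores[c] > best[c][1]:
                best[c] = (times, scores[c])

    for c, (times, _) in best.items():
        print(f"{c}-optimal times: {np.round(times, 1)}")
    ```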

  13. Multi-Reader ROC studies with Split-Plot Designs: A Comparison of Statistical Methods

    PubMed Central

    Obuchowski, Nancy A.; Gallas, Brandon D.; Hillis, Stephen L.

    2012-01-01

    Rationale and Objectives: Multi-reader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one reader or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper we compare three methods of analysis for the split-plot design. Materials and Methods: Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean ANOVA approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power and confidence interval coverage of the three test statistics. Results: The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% CIs falls close to the nominal coverage for small and large sample sizes. Conclusions: The split-plot MRMC study design can be statistically efficient compared with the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rate, similar power, and nominal CI coverage, are available for this study design. PMID:23122570

  14. Design of a sampling plan to detect ochratoxin A in green coffee.

    PubMed

    Vargas, E A; Whitaker, T B; Dos Santos, E A; Slate, A B; Lima, F B; Franca, R C A

    2006-01-01

    The establishment of maximum limits for ochratoxin A (OTA) in coffee by importing countries requires that coffee-producing countries develop scientifically based sampling plans to assess OTA contents in lots of green coffee before the coffee enters the market, thus reducing consumer exposure to OTA, minimizing the number of lots rejected, and reducing financial loss for producing countries. A study was carried out to design an official sampling plan to determine OTA in green coffee produced in Brazil. Twenty-five lots of green coffee (type 7, approximately 160 defects) were sampled according to an experimental protocol in which 16 test samples were taken from each lot (a total of 16 kg), resulting in a total of 800 OTA analyses. The total, sampling, sample preparation, and analytical variances were 10.75 (CV = 65.6%), 7.80 (CV = 55.8%), 2.84 (CV = 33.7%), and 0.11 (CV = 6.6%), respectively, assuming a regulatory limit of 5 µg kg⁻¹ OTA and using a 1 kg sample, Romer RAS mill, 25 g sub-samples, and high performance liquid chromatography. The observed OTA distribution among the 16 OTA sample results was compared to several theoretical distributions. The two-parameter lognormal distribution was selected to model OTA test results for green coffee as it gave the best fit across all 25 lot distributions. Specific computer software was developed using the variance and distribution information to predict the probability of accepting or rejecting coffee lots at specific OTA concentrations. The acceptance probability was used to compute an operating characteristic (OC) curve specific to a sampling plan design. The OC curve was used to predict the rejection of good lots (sellers' or exporters' risk) and the acceptance of bad lots (buyers' or importers' risk).
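
    The acceptance probability underlying the OC curve follows directly from the fitted two-parameter lognormal and the total CV. A minimal sketch, assuming a single test result per lot and the roughly 66% total CV reported above; lot concentrations are illustrative:

    ```python
    import math

    def p_accept(m, cv=0.656, limit=5.0):
        """P(test result <= limit) for a lot at true mean concentration m,
        modeling the result as 2-parameter lognormal with the given CV."""
        s2 = math.log(1 + cv**2)           # lognormal sigma^2 from the CV
        mu = math.log(m) - s2 / 2          # mean of log, matching mean m
        z = (math.log(limit) - mu) / math.sqrt(s2)
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    for m in (2, 5, 8, 12, 20):            # true lot concentration, ug/kg
        print(f"lot at {m:>2} ug/kg: P(accept) = {p_accept(m):.2f}")
    ```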

  15. An analysis of adaptive design variations on the sequential parallel comparison design for clinical trials.

    PubMed

    Mi, Michael Y; Betensky, Rebecca A

    2013-04-01

    Currently, a growing placebo response rate has been observed in clinical trials for antidepressant drugs, a phenomenon that has made it increasingly difficult to demonstrate efficacy. The sequential parallel comparison design (SPCD) is a clinical trial design that was proposed to address this issue. The SPCD theoretically has the potential to reduce the sample-size requirement for a clinical trial and to simultaneously enrich the study population to be less responsive to the placebo. Because the basic SPCD already reduces the placebo response by removing placebo responders between the first and second phases of a trial, the purpose of this study was to examine whether we can further improve the efficiency of the basic SPCD, and whether we can do so when the projected underlying drug and placebo response rates differ considerably from the actual ones. Three adaptive designs that used interim analyses to readjust the length of study duration for individual patients were tested to reduce the sample-size requirement or increase the statistical power of the SPCD. Various simulations of clinical trials using the SPCD with interim analyses were conducted to test these designs through calculations of empirical power. From the simulations, we found that the adaptive designs can recover unnecessary resources spent in the traditional SPCD trial format with overestimated initial sample sizes and provide moderate gains in power. Under the first design, results showed up to a 25% reduction in person-days, with most power losses below 5%. In the second design, results showed up to an 8% reduction in person-days with negligible loss of power. In the third design, using sample-size re-estimation, up to 25% power was recovered from underestimated sample-size scenarios. Given the numerous possible test parameters that could have been chosen for the simulations, the study's results are limited to situations described by the parameters that were used and may not generalize to all possible scenarios. Furthermore, dropout of patients is not considered in this study. It is possible to make an already complex design such as the SPCD adaptive, and thus more efficient, potentially overcoming the problem of placebo response at lower cost. Ultimately, such a design may expedite the approval of future effective treatments.

  16. Optimal designs for population pharmacokinetic studies of the partner drugs co-administered with artemisinin derivatives in patients with uncomplicated falciparum malaria.

    PubMed

    Jamsen, Kris M; Duffull, Stephen B; Tarning, Joel; Lindegardh, Niklas; White, Nicholas J; Simpson, Julie A

    2012-07-11

    Artemisinin-based combination therapy (ACT) is currently recommended as first-line treatment for uncomplicated malaria but, of concern, it has been observed that the effectiveness of the main artemisinin derivative, artesunate, has been diminished due to parasite resistance. This reduction in effect highlights the importance of the partner drugs in ACT and provides motivation to gain more knowledge of their pharmacokinetic (PK) properties via population PK studies. Optimal design methodology has been developed for population PK studies, which analytically determines a sampling schedule that is clinically feasible and yields precise estimation of model parameters. In this work, optimal design methodology was used to determine sampling designs for typical future population PK studies of the partner drugs (mefloquine, lumefantrine, piperaquine and amodiaquine) co-administered with artemisinin derivatives. The optimal designs were determined using freely available software and were based on structural PK models from the literature and the key specifications of 100 patients with five samples per patient, with one sample taken on the seventh day of treatment. The derived optimal designs were then evaluated via a simulation-estimation procedure. For all partner drugs, designs consisting of two sampling schedules (50 patients per schedule) with five samples per patient resulted in acceptable precision of the model parameter estimates. The sampling schedules proposed in this paper should be considered in future population pharmacokinetic studies where intensive sampling over many days or weeks of follow-up is not possible due to ethical, logistic or economic reasons.

  17. Low-sensitivity H∞ filter design for linear delta operator systems with sampling time jitter

    NASA Astrophysics Data System (ADS)

    Guo, Xiang-Gui; Yang, Guang-Hong

    2012-04-01

    This article is concerned with the problem of designing H∞ filters for a class of linear discrete-time systems with low sensitivity to sampling time jitter via the delta operator approach. The delta-domain model is used to avoid the inherent numerical ill-conditioning that results from using the standard shift-domain model at high sampling rates. Based on the projection lemma, in combination with the descriptor system approach often used to solve problems related to delay, a novel bounded real lemma with three slack variables for delta operator systems is presented. A sensitivity approach based on this novel lemma is proposed to mitigate the effects of sampling time jitter on system performance. The problem of designing a low-sensitivity filter can then be reduced to a convex optimisation problem. An important consideration in the design is the optimal trade-off between the standard H∞ criterion and the sensitivity of the transfer function with respect to sampling time jitter. Finally, a numerical example demonstrates the validity of the proposed design method.
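
    For readers unfamiliar with the delta-domain models used here, the standard definition of the delta operator (with q the forward shift and Δ the sampling period) explains why it avoids the shift-form ill-conditioning at fast sampling:

    ```latex
    \[
      \delta x_k \;=\; \frac{(q-1)\,x_k}{\Delta} \;=\; \frac{x_{k+1} - x_k}{\Delta},
      \qquad
      \lim_{\Delta \to 0}\, \delta x_k \;=\; \dot{x}(t)\big|_{t = k\Delta},
    \]
    ```

    so delta-domain coefficients converge to their continuous-time counterparts as the sampling period shrinks, rather than clustering near |z| = 1 as shift-domain poles do.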

  18. Are quantitative trait-dependent sampling designs cost-effective for analysis of rare and common variants?

    PubMed

    Yilmaz, Yildiz E; Bull, Shelley B

    2011-11-29

    Use of trait-dependent sampling designs in whole-genome association studies of sequence data can reduce total sequencing costs with modest losses of statistical efficiency. In a quantitative trait (QT) analysis of data from the Genetic Analysis Workshop 17 mini-exome for unrelated individuals in the Asian subpopulation, we investigate alternative designs that sequence only 50% of the entire cohort. In addition to a simple random sampling design, we consider extreme-phenotype designs that are of increasing interest in genetic association analysis of QTs, especially in studies concerned with the detection of rare genetic variants. We also evaluate a novel sampling design in which all individuals have a nonzero probability of being selected into the sample but in which individuals with extreme phenotypes have a proportionately larger probability. We take differential sampling of individuals with informative trait values into account by inverse probability weighting using standard survey methods which thus generalizes to the source population. In replicate 1 data, we applied the designs in association analysis of Q1 with both rare and common variants in the FLT1 gene, based on knowledge of the generating model. Using all 200 replicate data sets, we similarly analyzed Q1 and Q4 (which is known to be free of association with FLT1) to evaluate relative efficiency, type I error, and power. Simulation study results suggest that the QT-dependent selection designs generally yield greater than 50% relative efficiency compared to using the entire cohort, implying cost-effectiveness of 50% sample selection and worthwhile reduction of sequencing costs.
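
    The weighting step can be sketched directly: give everyone a nonzero inclusion probability, oversample the phenotype extremes, and fit a weighted regression with weights 1/π. The genotype model, effect size, and inclusion probabilities below are illustrative, not the GAW17 data.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    N = 4000
    g = rng.binomial(2, 0.3, N)            # additive genotype coding
    q = 0.25 * g + rng.normal(size=N)      # quantitative trait

    # Extremes (top 20% of |deviation|) get a higher inclusion probability;
    # everyone keeps a nonzero probability, for ~50% selected overall.
    dev = np.abs(q - q.mean())
    pi = np.where(dev > np.quantile(dev, 0.8), 0.9, 0.4)
    sel = rng.random(N) < pi

    # Inverse-probability-weighted least squares: solve (X'WX) b = X'Wy
    w = 1 / pi[sel]
    X = np.column_stack([np.ones(sel.sum()), g[sel]])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * q[sel]))
    print(f"true effect 0.25, IPW estimate {beta[1]:.3f}")
    ```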

  19. A Study of Program Management Procedures in the Campus-Based and Basic Grant Programs. Technical Report No. 1: Sample Design, Student Survey Yield and Bias.

    ERIC Educational Resources Information Center

    Puma, Michael J.; Ellis, Richard

    Part of a study of program management procedures in the campus-based and Basic Educational Opportunity Grant programs reports on the design of the site visit component of the study and the results of the student survey, both in terms of the yield obtained and the quality of the data. Chapter 2 describes the design of sampling methodology employed…

  20. Construction of nested maximin designs based on successive local enumeration and modified novel global harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin

    2017-01-01

    Engineering design often involves different types of simulation, which results in expensive computational costs. Variable fidelity approximation-based design optimization approaches can realize effective simulation and efficiency optimization of the design space using approximation models with different levels of fidelity and have been widely used in different fields. As the foundations of variable fidelity approximation models, the selection of sample points of variable-fidelity approximation, called nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for a low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for a high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
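
    As a point of reference for the construction above, even a crude random multistart over random Latin hypercubes shows how the maximin criterion is scored; successive local enumeration and the modified harmony search are search strategies over this same objective. The sketch below is not those algorithms, just the criterion.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def lhs(n, d):
        """A random Latin hypercube sample: one point per stratum per axis."""
        strata = np.column_stack([rng.permutation(n) for _ in range(d)])
        return (strata + rng.random((n, d))) / n

    def min_dist(x):
        """Smallest pairwise distance: the quantity maximin designs maximize."""
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        return d[np.triu_indices(len(x), 1)].min()

    # Crude maximin LHD by random multistart: keep the hypercube whose
    # smallest pairwise distance is largest.
    best = max((lhs(20, 2) for _ in range(3000)), key=min_dist)
    print(f"maximin distance achieved: {min_dist(best):.3f}")
    ```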

  1. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    PubMed

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes, such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging of under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations apply at a lower spatial scale should be further investigated.

  2. Factorial experimental design intended for the optimization of the alumina purification conditions

    NASA Astrophysics Data System (ADS)

    Brahmi, Mounaouer; Ba, Mohamedou; Hidri, Yassine; Hassen, Abdennaceur

    2018-04-01

    The objective of this study was to determine, using experimental design methodology, the optimal conditions for the removal of some impurities associated with alumina. Three alumina qualities of different origins were investigated under the same conditions. Full-factorial designs applied to the samples of different alumina qualities tracked the removal rate of sodium oxide, and a factorial experimental design was developed to describe the elimination of sodium oxide associated with the alumina. Chemical analysis by XRF prior to treatment of the samples provided a preliminary picture of the prevailing impurities and showed that sodium oxide constituted the largest amount among them. After applying the experimental design, analysis of the effects of the different factors and their interactions showed that better results require reducing the alumina quantity investigated and increasing the stirring time for the first two samples, whereas the alumina quantity had to be increased for the third sample. To expand and improve this research, all existing impurities should be taken into account, since the levels of some impurities were found to increase after the treatment.
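
    The effect-estimation step of such a full-factorial analysis can be sketched as follows; the factor labels, the third factor, and all response values are invented for illustration, and each effect is the usual difference between the mean response at the high and low levels.

    ```python
    import numpy as np
    from itertools import product

    # Hypothetical 2^3 full factorial for Na2O removal rate (%):
    # A = alumina quantity, B = stirring time, C = a third factor,
    # each at coded levels -1/+1. Responses are made up.
    runs = np.array(list(product([-1, 1], repeat=3)))      # 8 runs: A, B, C
    y = np.array([62, 55, 71, 66, 60, 54, 74, 69], float)

    labels = ["A", "B", "C", "AB", "AC", "BC", "ABC"]
    cols = [runs[:, 0], runs[:, 1], runs[:, 2],
            runs[:, 0] * runs[:, 1], runs[:, 0] * runs[:, 2],
            runs[:, 1] * runs[:, 2], runs[:, 0] * runs[:, 1] * runs[:, 2]]
    for name, c in zip(labels, cols):
        effect = y[c == 1].mean() - y[c == -1].mean()
        print(f"effect {name:>3}: {effect:+.2f}")
    ```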

  3. Earth Entry Vehicle Design for Sample Return Missions Using M-SAPE

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid

    2015-01-01

    Most mission concepts that return sample material to Earth share one common element: an Earth entry vehicle (EEV). The primary focus of this paper is the examination of the EEV design space for relevant sample return missions. Mission requirements for EEV concepts can be divided into three major groups: entry conditions (e.g., velocity and flight path angle), payload (e.g., mass, volume, and g-load limit), and vehicle characteristics (e.g., thermal protection system, structural topology, and landing concepts). The impacts of these requirements on the EEV design have been studied with an integrated system analysis tool, and the results are discussed in detail. In addition, critical design drivers identified through sensitivity analyses are reviewed.

  4. Sample Canister Capture Mechanism for Mars Sample Return: Functional and environmental test of the elegant breadboard model

    NASA Astrophysics Data System (ADS)

    Carta, R.; Filippetto, D.; Lavagna, M.; Mailland, F.; Falkner, P.; Larranaga, J.

    2015-12-01

    The paper provides recent updates on the ESA study Sample Canister Capture Mechanism Design and Breadboard, developed under the Mars Robotic Exploration Preparation (MREP) program. The study is part of a set of feasibility studies aimed at identifying, analysing and developing technology concepts enabling the future international Mars Sample Return (MSR) mission. MSR is a challenging mission whose purpose is to send a Lander to Mars, acquire samples from its surface/subsurface and bring them back to Earth for further, more in-depth analyses. In particular, the technology that is the object of the study is the Capture Mechanism that, mounted on the Orbiter, is in charge of capturing and securing the Sample Canister, or Orbiting Sample, accommodating the Martian soil samples previously delivered into Martian orbit by the Mars Ascent Vehicle. An elegant breadboard of such a device was implemented and qualified under an ESA contract primed by OHB-CGS S.p.A. and supported by Politecnico di Milano, Department of Aerospace Science and Technology: functional tests were conducted at PoliMi-DAST, and thermal and mechanical test campaigns took place at the Serms s.r.l. facility. The effectiveness of the breadboard design was demonstrated, and the obtained results, together with the design challenges, issues and adopted solutions, are critically presented in the paper. The breadboard was also tested on a parabolic flight to raise its Technology Readiness Level to 6; the microgravity experiment design, adopted solutions and results are presented in the paper as well.

  5. Human breath metabolomics using an optimized noninvasive exhaled breath condensate sampler

    PubMed Central

    Zamuruyev, Konstantin O.; Aksenov, Alexander A.; Pasamontes, Alberto; Brown, Joshua F.; Pettit, Dayna R.; Foutouhi, Soraya; Weimer, Bart C.; Schivo, Michael; Kenyon, Nicholas J.; Delplanque, Jean-Pierre; Davis, Cristina E.

    2017-01-01

    Exhaled breath condensate (EBC) analysis is a developing field with tremendous promise to advance personalized, non-invasive health diagnostics as new analytical instrumentation platforms and detection methods are developed. Multiple commercially available and researcher-built experimental samplers are reported in the literature. However, there is very limited information available to determine an effective breath sampling approach, especially regarding the dependence of breath sample metabolomic content on the collection device design and sampling methodology. This lack of an optimal standard procedure results in a range of reported results that are sometimes contradictory. Here, we present the design of a portable human EBC sampler optimized for collection and preservation of the rich metabolomic content of breath. The performance of the engineered device is compared to two commercially available breath collection devices: the RTube™ and TurboDECCS. A number of design and performance parameters are considered, including condenser temperature stability during sampling, collection efficiency, condenser material choice, and saliva contamination in the collected breath samples. The biological content of breath samples collected with each device was evaluated with a set of mass spectrometry methods and was the primary factor for evaluating device performance. The design includes an adjustable mass-size threshold for aerodynamic filtering of saliva droplets from the breath flow. Engineering an inexpensive device that allows efficient collection of metabolomic-rich breath samples is intended to aid further advancement in the field of breath analysis for non-invasive health diagnostics. EBC sampling from human volunteers was performed under UC Davis IRB protocol 63701-3 (09/30/2014-07/07/2017). PMID:28004639

  6. Human breath metabolomics using an optimized non-invasive exhaled breath condensate sampler.

    PubMed

    Zamuruyev, Konstantin O; Aksenov, Alexander A; Pasamontes, Alberto; Brown, Joshua F; Pettit, Dayna R; Foutouhi, Soraya; Weimer, Bart C; Schivo, Michael; Kenyon, Nicholas J; Delplanque, Jean-Pierre; Davis, Cristina E

    2016-12-22

    Exhaled breath condensate (EBC) analysis is a developing field with tremendous promise to advance personalized, non-invasive health diagnostics as new analytical instrumentation platforms and detection methods are developed. Multiple commercially available and researcher-built experimental samplers are reported in the literature. However, there is very limited information available to determine an effective breath sampling approach, especially regarding the dependence of breath sample metabolomic content on the collection device design and sampling methodology. This lack of an optimal standard procedure results in a range of reported results that are sometimes contradictory. Here, we present the design of a portable human EBC sampler optimized for collection and preservation of the rich metabolomic content of breath. The performance of the engineered device is compared to two commercially available breath collection devices: the RTube™ and TurboDECCS. A number of design and performance parameters are considered, including condenser temperature stability during sampling, collection efficiency, condenser material choice, and saliva contamination in the collected breath samples. The biological content of breath samples collected with each device was evaluated with a set of mass spectrometry methods and was the primary factor for evaluating device performance. The design includes an adjustable mass-size threshold for aerodynamic filtering of saliva droplets from the breath flow. Engineering an inexpensive device that allows efficient collection of metabolomic-rich breath samples is intended to aid further advancement in the field of breath analysis for non-invasive health diagnostics. EBC sampling from human volunteers was performed under UC Davis IRB protocol 63701-3 (09/30/2014-07/07/2017).

  7. Evaluation of the Biological Sampling Kit (BiSKit) for Large-Area Surface Sampling

    PubMed Central

    Buttner, Mark P.; Cruz, Patricia; Stetzenbach, Linda D.; Klima-Comba, Amy K.; Stevens, Vanessa L.; Emanuel, Peter A.

    2004-01-01

    Current surface sampling methods for microbial contaminants are designed to sample small areas and utilize culture analysis. The total number of microbes recovered is low because a small area is sampled, making detection of a potential pathogen more difficult. Furthermore, sampling of small areas requires a greater number of samples to be collected, which delays the reporting of results, taxes laboratory resources and staffing, and increases analysis costs. A new biological surface sampling method, the Biological Sampling Kit (BiSKit), designed to sample large areas and to be compatible with testing with a variety of technologies, including PCR and immunoassay, was evaluated and compared to other surface sampling strategies. In experimental room trials, wood laminate and metal surfaces were contaminated by aerosolization of Bacillus atrophaeus spores, a simulant for Bacillus anthracis, into the room, followed by settling of the spores onto the test surfaces. The surfaces were sampled with the BiSKit, a cotton-based swab, and a foam-based swab. Samples were analyzed by culturing, quantitative PCR, and immunological assays. The results showed that the large surface area (1 m2) sampled with the BiSKit resulted in concentrations of B. atrophaeus in samples that were up to 10-fold higher than the concentrations obtained with the other methods tested. A comparison of wet and dry sampling with the BiSKit indicated that dry sampling was more efficient (efficiency, 18.4%) than wet sampling (efficiency, 11.3%). The sensitivities of detection of B. atrophaeus on metal surfaces were 42 ± 5.8 CFU/m2 for wet sampling and 100.5 ± 10.2 CFU/m2 for dry sampling. These results demonstrate that the use of a sampling device capable of sampling larger areas results in higher sensitivity than that obtained with currently available methods and has the advantage of sampling larger areas, thus requiring collection of fewer samples per site. PMID:15574898

  8. Design of Phase II Non-inferiority Trials.

    PubMed

    Jung, Sin-Ho

    2017-09-01

    With the development of inexpensive treatment regimens and less invasive surgical procedures, we are increasingly confronted with non-inferiority study objectives. A non-inferiority phase III trial requires a roughly four times larger sample size than a comparable superiority trial. Because of the large required sample size, we often face feasibility issues in opening a non-inferiority trial. Furthermore, for lack of phase II non-inferiority trial design methods, we have no opportunity to investigate the efficacy of the experimental therapy through a phase II trial. As a result, we often fail to open a non-inferiority phase III trial, and a large number of non-inferiority clinical questions remain unanswered. In this paper, we develop designs for non-inferiority randomized phase II trials with feasible sample sizes. We first review a design method for non-inferiority phase III trials. Subsequently, we propose three different designs for non-inferiority phase II trials that can be used under different settings. Each method is demonstrated with examples, and each is shown to require a reasonable sample size for a non-inferiority phase II trial. The three designs are used under different settings but require similar sample sizes that are typical for phase II trials.
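
    The roughly four-fold inflation quoted above follows from the usual normal-approximation sample-size formula, in which n scales with the inverse square of the margin. The Python sketch below is a generic textbook calculation, not the paper's phase II method; all numbers are illustrative.

    ```python
    # Sketch: per-arm sample size for a two-arm trial, normal approximation.
    # Shows why a non-inferiority margin half the size of a superiority
    # effect inflates the sample size roughly four-fold (n ~ 1/delta^2).
    from scipy.stats import norm

    def n_per_arm(delta, sigma, alpha=0.05, power=0.8, one_sided=True):
        z_a = norm.ppf(1 - alpha) if one_sided else norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return 2 * (sigma * (z_a + z_b) / delta) ** 2

    sigma = 1.0
    print("superiority, effect 0.5:      n =", round(n_per_arm(0.5, sigma)))
    print("non-inferiority, margin 0.25: n =", round(n_per_arm(0.25, sigma)))
    ```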

  9. A general approach for sample size calculation for the three-arm 'gold standard' non-inferiority design.

    PubMed

    Stucke, Kathrin; Kieser, Meinhard

    2012-12-10

    In the three-arm 'gold standard' non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.
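
    A compact way to see why optimal allocation favours the active arms is the retention-of-effect formulation often used for this design. The sketch below computes the normal-approximation allocation that minimises the total sample size and compares it with equal assignment; it is a simplified illustration with assumed inputs, not the paper's closed-form results.

    ```python
    # Sketch of the retention-of-effect test in the three-arm design:
    # H0: thetaE - thetaP <= Delta * (thetaR - thetaP), tested with
    # T = mE - Delta*mR - (1 - Delta)*mP. Minimising total N subject to a
    # variance budget gives n_i proportional to sqrt of each arm's variance
    # coefficient. Simplified normal-model illustration; inputs are assumed.
    import numpy as np
    from scipy.stats import norm

    theta_eff, sigma = 4.0, 4.0   # common effect of E and R over placebo
    Delta = 0.8                   # retention fraction to be demonstrated
    alpha, power = 0.025, 0.80

    # Mean of T under the alternative, and the precision budget c
    mean_alt = (1 - Delta) * theta_eff
    c = (mean_alt / (sigma * (norm.ppf(1 - alpha) + norm.ppf(power)))) ** 2

    a = np.array([1.0, Delta**2, (1 - Delta)**2])  # coefficients for E, R, P

    n_opt = np.sqrt(a) * np.sqrt(a).sum() / c      # optimal allocation
    n_eq = np.full(3, a.sum() / c)                 # equal allocation

    print("optimal (E, R, P):", np.ceil(n_opt).astype(int),
          "total", int(np.ceil(n_opt).sum()))
    print("equal   (E, R, P):", np.ceil(n_eq).astype(int),
          "total", int(np.ceil(n_eq).sum()))
    ```

    With these inputs the optimal allocation makes both active arms several times larger than the placebo arm while cutting the total sample size by roughly a fifth, which is the qualitative result the abstract reports.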

  10. Quality-assurance design applied to an assessment of agricultural pesticides in ground water from carbonate bedrock aquifers in the Great Valley of eastern Pennsylvania

    USGS Publications Warehouse

    Breen, Kevin J.

    2000-01-01

    Assessments to determine whether agricultural pesticides are present in ground water are performed by the Commonwealth of Pennsylvania under the aquifer monitoring provisions of the State Pesticides and Ground Water Strategy. Pennsylvania's Department of Agriculture conducts the monitoring and collects samples; the Department of Environmental Protection (PaDEP) Laboratory analyzes the samples to measure pesticide concentration. To evaluate the quality of the measurements of pesticide concentration for a groundwater assessment, a quality-assurance design was developed and applied to a selected assessment area in Pennsylvania. This report describes the quality-assurance design, describes how and where the design was applied, describes procedures used to collect and analyze samples and to evaluate the results, and summarizes the quality-assurance results along with the assessment results. The design was applied in an agricultural area of the Delaware River Basin in Berks, Lebanon, Lehigh, and Northampton Counties to evaluate the bias and variability in laboratory results for pesticides. The design, with random spatial and temporal components, included four data-quality objectives for bias and variability. The spatial design was primary and represented an area comprising 30 sampling cells. A quality-assurance sampling frequency of 20 percent of cells was selected to ensure a sample number of five or more for analysis. Quality-control samples included blanks, spikes, and replicates of laboratory water and spikes, replicates, and two-laboratory splits of groundwater. Two analytical laboratories, the PaDEP Laboratory and a U.S. Geological Survey laboratory, were part of the design. Bias and variability were evaluated by use of data collected from October 1997 through January 1998 for alachlor, atrazine, cyanazine, metolachlor, simazine, pendimethalin, metribuzin, and chlorpyrifos. Results of analyses of field blanks indicate that collection, processing, transport, and laboratory analysis procedures did not contaminate the samples; there were no false-positive results. Pesticides were detected in water when pesticides were spiked into (added to) samples; there were no false negatives for the eight pesticides in all spiked samples. Negative bias was characteristic of analytical results for the eight pesticides, and bias was generally in excess of 10 percent from the 'true' or expected concentration (34 of 39 analyses, or 87 percent of the ground-water results) for pesticide concentrations ranging from 0.31 to 0.51 µg/L (micrograms per liter). The magnitude of the negative bias for the eight pesticides, with the exception of cyanazine, would result in reported concentrations commonly 75-80 percent of the expected concentration in the water sample. The bias for cyanazine was negative and within 10 percent of the expected concentration. A comparison of spiked pesticide-concentration recoveries in laboratory water and ground water indicated no effect of the ground-water matrix, and matrix interference was not a source of the negative bias. Results for the laboratory-water spikes submitted in triplicate showed large variability for recoveries of atrazine, cyanazine, and pendimethalin. The relative standard deviation (RSD) was used as a measure of method variability over the course of the study for laboratory waters at a concentration of 0.4 µg/L. An RSD of about 11 percent (or about ±0.05 µg/L) characterizes the method results for alachlor, chlorpyrifos, metolachlor, metribuzin, and simazine. Atrazine and pendimethalin have RSD values of about 17 and 23 percent, respectively. Cyanazine showed the largest RSD at nearly 51 percent. The pesticides with low variability in laboratory-water spikes also had low variability in ground water. The assessment results showed that atrazine was the most commonly detected pesticide in ground water in the assessment area. Atrazine was detected in water from 22 of the 28 wells sampled, and recovery results for atrazine were some of the worst (largest negative bias). Concentrations of the eight pesticides in ground water from wells were generally less than 0.3 µg/L. Only six individual measurements of the concentrations in water from six of the wells were at or above 0.3 µg/L, five for atrazine and one for metolachlor. There were eight additional detections of metolachlor and simazine at concentrations less than 0.1 µg/L. No well water contained more than one pesticide at concentrations at or above 0.3 µg/L. Evidence exists, however, for a pattern of co-occurrence of metolachlor and simazine at low concentrations with higher concentrations of atrazine. Large variability in replicate samples and negative bias for pesticide recovery from spiked samples indicate the need to use data for pesticide recovery in the interpretation of measured pesticide concentrations in ground water. Data from samples spiked with known amounts of pesticides were a critical component of the quality-assurance design for the monitoring component of the Pesticides and Ground Water Strategy. Trigger concentrations, the concentrations that require action under the Pesticides and Ground Water Strategy, should be considered maximums for action because of the magnitude of the negative bias.
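
    The two quality-assurance statistics at the heart of this report, percent recovery (bias) and relative standard deviation, reduce to short calculations. The Python sketch below uses hypothetical spike-recovery replicates, not the study's measurements.

    ```python
    # Sketch: spike-recovery bias and RSD from replicate lab results.
    # The spiked concentration and replicate values are illustrative.
    import numpy as np

    expected = 0.40                            # spiked concentration, ug/L
    measured = np.array([0.31, 0.29, 0.33])    # replicate results, ug/L

    recovery = measured / expected * 100       # percent recovery
    bias = recovery.mean() - 100               # negative => low bias
    rsd = measured.std(ddof=1) / measured.mean() * 100

    print(f"mean recovery: {recovery.mean():.1f}%  (bias {bias:+.1f}%)")
    print(f"RSD: {rsd:.1f}%")
    ```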

  11. Low-cost floating emergence net and bottle trap: Comparison of two designs

    USGS Publications Warehouse

    Cadmus, Pete; Pomeranz, Justin; Kraus, Johanna M.

    2016-01-01

    Sampling emergent aquatic insects is of interest to many freshwater ecologists. Many quantitative emergence traps require the use of aspiration for collection. However, aspiration is infeasible in studies with large amounts of replication that is often required in large biomonitoring projects. We designed an economic, collapsible pyramid-shaped floating emergence trap with an external collection bottle that avoids the need for aspiration. This design was compared experimentally to a design of similar dimensions that relied on aspiration to ensure comparable results. The pyramid-shaped design captured twice as many total emerging insects. When a preservative was used in bottle collectors, >95% of the emergent abundance was collected in the bottle. When no preservative was used, >81% of the total insects were collected from the bottle. In addition to capturing fewer emergent insects, the traps that required aspiration took significantly longer to sample. Large studies and studies sampling remote locations could benefit from the economical construction, speed of sampling, and capture efficiency.

  12. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    PubMed

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To design a survey for estimating the distribution of mammographic breast density in Korean women, appropriate sampling strategies for a representative and efficient sampling design were evaluated through simulation. Using the target population of 1,340,362 women from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating a stratified random sampling simulation 1,000 times. According to the simulation results, a sampling design stratifying the nation into three groups (metropolitan, urban, and rural) with a total sample size of 4,000 estimates the distribution of breast density in Korean women to within a tolerance of 0.01%. Based on these results, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
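
    A condensed sketch of the simulation logic described above: draw repeated stratified random samples from a synthetic population and check how tightly the stratum-weighted estimate tracks the population rate. The stratum sizes and density rates here are stand-ins, not NCSP data.

    ```python
    # Sketch: repeated stratified random sampling from a synthetic
    # population. Stratum sizes and rates are illustrative stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)
    # (stratum size, hypothetical dense-breast rate)
    strata = {"metropolitan": (200_000, 0.55),
              "urban":        (150_000, 0.50),
              "rural":        ( 50_000, 0.45)}
    pop = {s: rng.random(n) < p for s, (n, p) in strata.items()}

    n_total, reps = 4000, 500
    alloc = {s: n_total // len(strata) for s in strata}  # equal allocation

    estimates = []
    for _ in range(reps):
        total = weighted = 0
        for s, units in pop.items():
            sample = rng.choice(units, size=alloc[s], replace=False)
            weighted += sample.mean() * units.size  # weight by stratum size
            total += units.size
        estimates.append(weighted / total)

    est = np.array(estimates)
    true_rate = (sum(u.mean() * u.size for u in pop.values())
                 / sum(u.size for u in pop.values()))
    print(f"true rate {true_rate:.4f}; mean estimate {est.mean():.4f} "
          f"(simulation SE {est.std(ddof=1):.4f})")
    ```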

  13. Electric Propulsion System Selection Process for Interplanetary Missions

    NASA Technical Reports Server (NTRS)

    Landau, Damon; Chase, James; Kowalkowski, Theresa; Oh, David; Randolph, Thomas; Sims, Jon; Timmerman, Paul

    2008-01-01

    The disparate design problems of selecting an electric propulsion system, launch vehicle, and flight time all have a significant impact on the cost and robustness of a mission. The effects of these system choices combine into a single optimization of the total mission cost, where the design constraint is a required spacecraft neutral (non-electric propulsion) mass. Cost-optimal systems are designed for a range of mass margins to examine how the optimal design varies with mass growth. The resulting cost-optimal designs are compared with results generated via mass optimization methods. Additional optimizations with continuous system parameters address the impact on mission cost due to discrete sets of launch vehicle, power, and specific impulse. The examined mission set comprises a near-Earth asteroid sample return, multiple main belt asteroid rendezvous, comet rendezvous, comet sample return, and a mission to Saturn.

  14. Aerosol sampling system for collection of Capstone depleted uranium particles in a high-energy environment.

    PubMed

    Holmes, Thomas D; Guilmette, Raymond A; Cheng, Yung Sung; Parkhurst, Mary Ann; Hoover, Mark D

    2009-03-01

    The Capstone Depleted Uranium (DU) Aerosol Study was undertaken to obtain aerosol samples resulting from a large-caliber DU penetrator striking an Abrams or Bradley test vehicle. The sampling strategy was designed to (1) optimize the performance of the samplers and maintain their integrity in the extreme environment created during perforation of an armored vehicle by a DU penetrator, (2) collect aerosols as a function of time post perforation, and (3) obtain size-classified samples for analysis of chemical composition, particle morphology, and solubility in lung fluid. This paper describes the experimental setup and sampling methodologies used to achieve these objectives. Custom-designed arrays of sampling heads were secured to the inside of the target in locations approximating the breathing zones of the crew locations in the test vehicles. Each array was designed to support nine filter cassettes and nine cascade impactors mounted with quick-disconnect fittings. Shielding and sampler placement strategies were used to minimize sampler loss caused by the penetrator impact and the resulting fragments of eroded penetrator and perforated armor. A cyclone train was used to collect larger quantities of DU aerosol for measurement of chemical composition and solubility. A moving filter sample was used to obtain semicontinuous samples for DU concentration determination. Control for the air samplers was provided by five remotely located valve control and pressure monitoring units located inside and around the test vehicle. These units were connected to a computer interface chassis and controlled using a customized LabVIEW engineering computer control program. The aerosol sampling arrays and control systems for the Capstone study provided the needed aerosol samples for physicochemical analysis, and the resultant data were used for risk assessment of exposure to DU aerosol.

  15. Aerosol Sampling System for Collection of Capstone Depleted Uranium Particles in a High-Energy Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holmes, Thomas D.; Guilmette, Raymond A.; Cheng, Yung-Sung

    2009-03-01

    The Capstone Depleted Uranium Aerosol Study was undertaken to obtain aerosol samples resulting from a kinetic-energy cartridge with a large-caliber depleted uranium (DU) penetrator striking an Abrams or Bradley test vehicle. The sampling strategy was designed to (1) optimize the performance of the samplers and maintain their integrity in the extreme environment created during perforation of an armored vehicle by a DU penetrator, (2) collect aerosols as a function of time post-impact, and (3) obtain size-classified samples for analysis of chemical composition, particle morphology, and solubility in lung fluid. This paper describes the experimental setup and sampling methodologies used to achieve these objectives. Custom-designed arrays of sampling heads were secured to the inside of the target in locations approximating the breathing zones of the vehicle commander, loader, gunner, and driver. Each array was designed to support nine filter cassettes and nine cascade impactors mounted with quick-disconnect fittings. Shielding and sampler placement strategies were used to minimize sampler loss caused by the penetrator impact and the resulting fragments of eroded penetrator and perforated armor. A cyclone train was used to collect larger quantities of DU aerosol for measurement of chemical composition and solubility. A moving filter sample was used to obtain semicontinuous samples for depleted uranium concentration determination. Control for the air samplers was provided by five remotely located valve control and pressure monitoring units located inside and around the test vehicle. These units were connected to a computer interface chassis and controlled using a customized LabVIEW engineering computer control program. The aerosol sampling arrays and control systems for the Capstone study provided the needed aerosol samples for physicochemical analysis, and the resultant data were used for risk assessment of exposure to DU aerosol.

  16. Biomarker discovery study design for type 1 diabetes in The Environmental Determinants of Diabetes in the Young (TEDDY) study.

    PubMed

    Lee, Hye-Seung; Burkhardt, Brant R; McLeod, Wendy; Smith, Susan; Eberhard, Chris; Lynch, Kristian; Hadley, David; Rewers, Marian; Simell, Olli; She, Jin-Xiong; Hagopian, Bill; Lernmark, Ake; Akolkar, Beena; Ziegler, Anette G; Krischer, Jeffrey P

    2014-07-01

    The Environmental Determinants of Diabetes in the Young (TEDDY) study planned biomarker discovery studies on longitudinal samples for persistent confirmed islet cell autoantibodies and type 1 diabetes using dietary biomarkers, metabolomics, microbiome/viral metagenomics, and gene expression. This article describes the planning of the TEDDY biomarker discovery studies using a nested case-control design, chosen as an alternative to a full cohort analysis. Within the framework of the nested case-control design, it guides the choice of matching factors, selection of controls, preparation of external quality control samples, and reduction of batch effects, along with proper sample allocation. The design is intended to reduce potential bias and retain study power while reducing costs by limiting the number of samples requiring laboratory analyses. It also covers two primary end points: the occurrence of diabetes-related autoantibodies and the diagnosis of type 1 diabetes. The resulting list of case-control matched samples for each laboratory was augmented with external quality control samples. Copyright © 2013 John Wiley & Sons, Ltd.

  17. Treatment Trials for Neonatal Seizures: The Effect of Design on Sample Size

    PubMed Central

    Stevenson, Nathan J.; Boylan, Geraldine B.; Hellström-Westas, Lena; Vanhatalo, Sampsa

    2016-01-01

    Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs). Current AEDs exhibit sub-optimal efficacy, and several randomized controlled trials (RCTs) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of an RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td), and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients, depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size, with median differences of 30.7-fold (IQR: 13.7–40.0) across a range of AED protocols, Td, and trial AED efficacy (p<0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9–11.9; p<0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1-fold (IQR: 1.7–2.9; p<0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4–3.0) compared to trials in normothermic neonates (p<0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group and an appropriate outcome measure, and that control for differences in Td between groups in the analysis, will be valid and will minimise sample size. PMID:27824913

  18. Designing a monitoring program to estimate estuarine survival of anadromous salmon smolts: simulating the effect of sample design on inference

    USGS Publications Warehouse

    Romer, Jeremy D.; Gitelman, Alix I.; Clements, Shaun; Schreck, Carl B.

    2015-01-01

    A number of researchers have attempted to estimate salmonid smolt survival during outmigration through an estuary. However, it is currently unclear how the design of such studies influences the accuracy and precision of survival estimates. In this simulation study we consider four patterns of smolt survival probability in the estuary, and test the performance of several different sampling strategies for estimating estuarine survival assuming perfect detection. The four survival probability patterns each incorporate a systematic component (constant, linearly increasing, increasing and then decreasing, and two pulses) and a random component to reflect daily fluctuations in survival probability. Generally, spreading sampling effort (tagging) across the season resulted in more accurate estimates of survival. All sampling designs in this simulation tended to under-estimate the variation in the survival estimates because seasonal and daily variation in survival probability are not incorporated in the estimation procedure. This under-estimation results in poorer performance of estimates from larger samples. Thus, tagging more fish may not result in better estimates of survival if important components of variation are not accounted for. The results of our simulation incorporate survival probabilities and run distribution data from previous studies to help illustrate the tradeoffs among sampling strategies in terms of the number of tags needed and distribution of tagging effort. This information will assist researchers in developing improved monitoring programs and encourage discussion regarding issues that should be addressed prior to implementation of any telemetry-based monitoring plan. We believe implementation of an effective estuary survival monitoring program will strengthen the robustness of life cycle models used in recovery plans by providing missing data on where and how much mortality occurs in the riverine and estuarine portions of smolt migration. These data could result in better informed management decisions and assist in guidance for more effective estuarine restoration projects.

  19. Probabilistic Design of a Mars Sample Return Earth Entry Vehicle Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Dec, John A.; Mitcheltree, Robert A.

    2002-01-01

    The driving requirement for design of a Mars Sample Return mission is to assure containment of the returned samples. Designing to, and demonstrating compliance with, such a requirement requires physics based tools that establish the relationship between engineer's sizing margins and probabilities of failure. The traditional method of determining margins on ablative thermal protection systems, while conservative, provides little insight into the actual probability of an over-temperature during flight. The objective of this paper is to describe a new methodology for establishing margins on sizing the thermal protection system (TPS). Results of this Monte Carlo approach are compared with traditional methods.
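
    The Monte Carlo idea can be illustrated in a few lines: sample the uncertain factors that drive required ablator thickness, then read off the probability of an over-temperature for each candidate sizing margin. The distributions below are placeholders, not the paper's aerothermal models.

    ```python
    # Sketch: Monte Carlo link between a TPS sizing margin and the
    # probability of failure. Distributions and values are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Hypothetical required thickness (cm) = nominal recession times
    # uncertain multipliers for heating environment and material response
    nominal = 1.8
    heating = rng.lognormal(mean=0.0, sigma=0.10, size=n)
    material = rng.normal(loc=1.0, scale=0.05, size=n)
    required = nominal * heating * material

    for margin in (0.0, 0.1, 0.2, 0.3):
        thickness = nominal * (1 + margin)
        p_fail = (required > thickness).mean()
        print(f"margin {margin:4.0%}: P(over-temperature) ~ {p_fail:.4f}")
    ```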

  20. Nonlinear inversion of electrical resistivity imaging using pruning Bayesian neural networks

    NASA Astrophysics Data System (ADS)

    Jiang, Fei-Bo; Dai, Qian-Wei; Dong, Li

    2016-06-01

    Conventional artificial neural networks used to solve the electrical resistivity imaging (ERI) inversion problem suffer from overfitting and local minima. To address these problems, we propose a pruning Bayesian neural network (PBNN) nonlinear inversion method and a sample design method based on the K-medoids clustering algorithm. In the sample design method, the training samples of the neural network are designed according to the prior information provided by the K-medoids clustering results; thus, the training process of the neural network is well guided. The proposed PBNN, based on Bayesian regularization, is used to select the hidden layer structure by assessing the effect of each hidden neuron on the inversion results. Then, the hyperparameter α_k, which is based on the generalized mean, is chosen to guide the pruning process according to the prior distribution of the training samples under the small-sample condition. The proposed algorithm is more efficient than other common adaptive regularization methods in geophysics. The inversion of synthetic data and field data suggests that the proposed method suppresses noise in the neural network training stage and enhances generalization. The inversion results with the proposed method are better than those of the BPNN, RBFNN, and RRBFNN inversion methods as well as conventional least squares inversion.

  1. A survey sampling approach for pesticide monitoring of community water systems using groundwater as a drinking water source.

    PubMed

    Whitmore, Roy W; Chen, Wenlin

    2013-12-04

    The ability to infer human exposure to substances from drinking water using monitoring data helps determine and/or refine potential risks associated with drinking water consumption. We describe a survey sampling approach and its application to an atrazine groundwater monitoring study to adequately characterize upper exposure centiles and associated confidence intervals with predetermined precision. Study design and data analysis included sampling frame definition, sample stratification, sample size determination, allocation to strata, analysis weights, and weighted population estimates. The sampling frame encompassed 15,840 groundwater community water systems (CWS) in 21 states throughout the U.S. Median and 95th percentile atrazine concentrations were 0.0022 and 0.024 ppb, respectively, for all CWS. Statistical estimates agreed with historical monitoring results, suggesting that the study design was adequate and robust. This methodology makes no assumptions regarding the occurrence distribution (e.g., lognormality); thus, analyses based on the design-induced distribution provide the most robust basis for making inferences from the sample to the target population.
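
    Weighted population estimates of the kind reported above amount to evaluating percentiles of a design-weighted empirical distribution. The sketch below shows the mechanics with illustrative concentrations and weights, chosen only to echo the reported scale, not taken from the study.

    ```python
    # Sketch: design-weighted percentile estimation. Concentrations (ppb)
    # and analysis weights (CWS represented) are illustrative.
    import numpy as np

    conc = np.array([0.001, 0.002, 0.003, 0.010, 0.024, 0.050])
    weights = np.array([400, 900, 600, 300, 150, 50])

    order = np.argsort(conc)
    c, w = conc[order], weights[order]
    cdf = np.cumsum(w) / w.sum()

    def weighted_percentile(p):
        # smallest concentration whose cumulative weight reaches p
        return c[np.searchsorted(cdf, p)]

    print("weighted median:", weighted_percentile(0.50))
    print("weighted 95th percentile:", weighted_percentile(0.95))
    ```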

  2. Improving the Acquisition and Management of Sample Curation Data

    NASA Technical Reports Server (NTRS)

    Todd, Nancy S.; Evans, Cindy A.; Labasse, Dan

    2011-01-01

    This paper discusses the current sample documentation processes used during and after a mission, examines the challenges and special considerations needed for designing effective sample curation data systems, and looks at the results of a simulated sample result mission and the lessons learned from this simulation. In addition, it introduces a new data architecture for an integrated sample Curation data system being implemented at the NASA Astromaterials Acquisition and Curation department and discusses how it improves on existing data management systems.

  3. Plant Disease Severity Assessment-How Rater Bias, Assessment Method, and Experimental Design Affect Hypothesis Testing and Resource Use Efficiency.

    PubMed

    Chiang, Kuo-Szu; Bock, Clive H; Lee, I-Hsuan; El Jarroudi, Moussa; Delfosse, Philippe

    2016-12-01

    The effect of rater bias and assessment method on hypothesis testing was studied for representative experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed "balanced" and those with unequal numbers of replicate estimates are termed "unbalanced". The three assessment methods considered were nearest percent estimates (NPEs), an amended 10% incremental scale, and the Horsfall-Barratt (H-B) scale. Estimates of severity of Septoria leaf blotch on leaves of winter wheat were used to develop distributions for a simulation model. The experimental designs are presented here in the context of simulation experiments which consider the optimal design for the number of specimens (individual units sampled) and the number of replicate estimates per specimen for a fixed total number of observations (total sample size for the treatments being compared). The criterion used to gauge each method was the power of the hypothesis test. As expected, at a given fixed number of observations, the balanced experimental designs invariably resulted in a higher power compared with the unbalanced designs at different disease severity means, mean differences, and variances. Based on these results, with unbiased estimates using NPE, the recommended number of replicate estimates taken per specimen is 2 (from a sample of specimens of at least 30), because this conserves resources. Furthermore, for biased estimates, an apparent difference in the power of the hypothesis test was observed between assessment methods and between experimental designs. Results indicated that, regardless of experimental design or rater bias, an amended 10% incremental scale has slightly less power compared with NPEs, and that the H-B scale is more likely than the others to cause a type II error. These results suggest that choice of assessment method, optimizing sample number and number of replicate estimates, and using a balanced experimental design are important criteria to consider to maximize the power of hypothesis tests for comparing treatments using disease severity estimates.
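
    The central claim, that balanced designs yield higher power at a fixed total number of observations, is easy to reproduce with a small Monte Carlo experiment. The sketch below uses a plain two-sample t-test with invented severity distributions; it is not the authors' simulation model.

    ```python
    # Monte Carlo sketch: power of a two-sample t-test at a fixed total
    # sample size, balanced vs unbalanced. Means and SD are illustrative.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(2)

    def power(n1, n2, mu1=10.0, mu2=15.0, sd=8.0, reps=4000, alpha=0.05):
        hits = 0
        for _ in range(reps):
            a = rng.normal(mu1, sd, n1)
            b = rng.normal(mu2, sd, n2)
            hits += ttest_ind(a, b).pvalue < alpha
        return hits / reps

    print("balanced   30/30:", power(30, 30))
    print("unbalanced 45/15:", power(45, 15))
    ```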

  4. Effect of Study Design on Sample Size in Studies Intended to Evaluate Bioequivalence of Inhaled Short-Acting β-Agonist Formulations.

    PubMed

    Zeng, Yaohui; Singh, Sachinkumar; Wang, Kai; Ahrens, Richard C

    2018-04-01

    Pharmacodynamic studies that use methacholine challenge to assess bioequivalence of generic and innovator albuterol formulations are generally designed per published Food and Drug Administration guidance, with 3 reference doses and 1 test dose (3-by-1 design). These studies are challenging and expensive to conduct, typically requiring large sample sizes. We proposed 14 modified study designs as alternatives to the Food and Drug Administration-recommended 3-by-1 design, hypothesizing that adding reference and/or test doses would reduce sample size and cost. We used Monte Carlo simulation to estimate sample size. Simulation inputs were selected based on published studies and our own experience with this type of trial. We also estimated effects of these modified study designs on study cost. Most of these altered designs reduced sample size and cost relative to the 3-by-1 design, some decreasing cost by more than 40%. The most effective single study dose to add was 180 μg of test formulation, which resulted in an estimated 30% relative cost reduction. Adding a single test dose of 90 μg was less effective, producing only a 13% cost reduction. Adding a lone reference dose of either 180, 270, or 360 μg yielded little benefit (less than 10% cost reduction), whereas adding 720 μg resulted in a 19% cost reduction. Of the 14 study design modifications we evaluated, the most effective was addition of both a 90-μg test dose and a 720-μg reference dose (42% cost reduction). Combining a 180-μg test dose and a 720-μg reference dose produced an estimated 36% cost reduction. © 2017, The Authors. The Journal of Clinical Pharmacology published by Wiley Periodicals, Inc. on behalf of American College of Clinical Pharmacology.

  5. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    Treesearch

    Thomas C. Edwards; D. Richard Cutler; Niklaus E. Zimmermann; Linda Geiser; Gretchen G. Moisen

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by...

  6. Multilevel Factorial Experiments for Developing Behavioral Interventions: Power, Sample Size, and Resource Considerations†

    PubMed Central

    Dziak, John J.; Nahum-Shani, Inbal; Collins, Linda M.

    2012-01-01

    Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions, by helping investigators to screen several candidate intervention components simultaneously and decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or employees within organizations). In this article we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements such as the number of clusters, the number of lower-level units, and the intraclass correlation affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes, because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. PMID:22309956

  7. Multilevel factorial experiments for developing behavioral interventions: power, sample size, and resource considerations.

    PubMed

    Dziak, John J; Nahum-Shani, Inbal; Collins, Linda M

    2012-06-01

    Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions by helping investigators to screen several candidate intervention components simultaneously and to decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or when employees are nested within organizations). In this article, we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel, multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements-such as the number of clusters, the number of lower-level units, and the intraclass correlation-affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. (c) 2012 APA, all rights reserved
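
    A quick way to see how the intraclass correlation erodes power in such clustered experiments is through the standard design effect, DEFF = 1 + (m - 1) * ICC, where m is the cluster size. The Python sketch below converts a simple-random-sample size into the number of clusters needed; the numbers are illustrative.

    ```python
    # Sketch: standard design-effect adjustment for clustered designs.
    def required_clusters(n_flat, m, icc):
        """Clusters needed to match a simple-random-sample size n_flat,
        with m units per cluster and intraclass correlation icc."""
        deff = 1 + (m - 1) * icc
        return n_flat * deff / m

    for icc in (0.01, 0.05, 0.10):
        print(f"ICC={icc:.2f}: clusters ~ {required_clusters(400, 20, icc):.0f}")
    ```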

  8. Optimum design of Geodesic dome’s jointing system

    NASA Astrophysics Data System (ADS)

    Tran, Huy. T.

    2018-04-01

    This study attempts to create a new design for joint connector of Geodesic dome. A new type of joint connector design is proposed for flexible rotating connection; comparing it to another, this design is cheaper and workable. After calculating the bearing capacity of the sample according to EC3 and Vietnam standard TCVN 5575-2012, FEM model of the design sample is carried out in many specific situation to consider the stress distribution, the deformation, the local destruction… in the connector. The analytical results and the FE data are consistent. The FE analysis also points out the behavior of some details that simple calculation cannot show. Hence, we can choose the optimum design of joint connector.

  9. Trajectory Design for a Single-String Impactor Concept

    NASA Technical Reports Server (NTRS)

    Dono Perez, Andres; Burton, Roland; Stupl, Jan; Mauro, David

    2017-01-01

    This paper introduces a trajectory design for a secondary spacecraft concept to augment science return in interplanetary missions. The concept consists of a single-string probe with a kinetic impactor on board that generates an artificial plume for in-situ sampling. The trajectory design was applied to a particular case study that samples ejecta particles from the Jovian moon Europa. Results were validated using statistical analysis. Details regarding the navigation, targeting, and disposal challenges related to this concept are presented herein.

  10. Modular biowaste monitoring system

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.

    1975-01-01

    The objective of the Modular Biowaste Monitoring System Program was to generate and evaluate hardware for supporting shuttle life science experimental and diagnostic programs. An initial conceptual design effort established requirements and defined an overall modular system for the collection, measurement, sampling and storage of urine and feces biowastes. This conceptual design effort was followed by the design, fabrication and performance evaluation of a flight prototype model urine collection, volume measurement and sampling capability. No operational or performance deficiencies were uncovered as a result of the performance evaluation tests.

  11. Brain Jogging Training to Improve Motivation and Learning Result of Tennis Skills

    NASA Astrophysics Data System (ADS)

    Tafaqur, M.; Komarudin; Mulyana; Saputra, M. Y.

    2017-03-01

    This research aimed to determine the effect of brain jogging training on motivation and the learning of tennis skills. The method used was experimental. The population comprised 15 tennis athletes of Core Siliwangi Bandung Tennis Club, from which a purposive sample of 10 athletes was drawn. The design was a pretest-posttest group design. Data were analysed with t-tests on motivation, measured with the Sport Motivation Scale questionnaire (SMS-28), and on tennis skills, measured with tests of (1) forehand, (2) backhand, and (3) service placement. The results showed that brain jogging training significantly improved both motivation and the learning of tennis skills.

  12. Optimal color design of psychological counseling room by design of experiments and response surface methodology.

    PubMed

    Liu, Wenjuan; Ji, Jianlin; Chen, Hua; Ye, Chenyu

    2014-01-01

    Color is one of the most powerful aspects of a psychological counseling environment. Little scientific research has been conducted on color design and much of the existing literature is based on observational studies. Using design of experiments and response surface methodology, this paper proposes an optimal color design approach for transforming patients' perception into color elements. Six indices, pleasant-unpleasant, interesting-uninteresting, exciting-boring, relaxing-distressing, safe-fearful, and active-inactive, were used to assess patients' impression. A total of 75 patients participated, including 42 for Experiment 1 and 33 for Experiment 2. 27 representative color samples were designed in Experiment 1, and the color sample (L = 75, a = 0, b = -60) was the most preferred one. In Experiment 2, this color sample was set as the 'central point', and three color attributes were optimized to maximize the patients' satisfaction. The experimental results show that the proposed method can get the optimal solution for color design of a counseling room.

  13. Optimal Color Design of Psychological Counseling Room by Design of Experiments and Response Surface Methodology

    PubMed Central

    Chen, Hua; Ye, Chenyu

    2014-01-01

    Color is one of the most powerful aspects of a psychological counseling environment. Little scientific research has been conducted on color design and much of the existing literature is based on observational studies. Using design of experiments and response surface methodology, this paper proposes an optimal color design approach for transforming patients’ perception into color elements. Six indices, pleasant-unpleasant, interesting-uninteresting, exciting-boring, relaxing-distressing, safe-fearful, and active-inactive, were used to assess patients’ impression. A total of 75 patients participated, including 42 for Experiment 1 and 33 for Experiment 2. 27 representative color samples were designed in Experiment 1, and the color sample (L = 75, a = 0, b = -60) was the most preferred one. In Experiment 2, this color sample was set as the ‘central point’, and three color attributes were optimized to maximize the patients’ satisfaction. The experimental results show that the proposed method can get the optimal solution for color design of a counseling room. PMID:24594683
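
    The response-surface step can be sketched compactly: fit a second-order model of the satisfaction score over two coded color attributes around the central point, then solve for the stationary point of the fitted surface. The data below are synthetic, not the patients' ratings, and the two-factor model is a simplification of the three-attribute optimization in the paper.

    ```python
    # Sketch: fit a quadratic response surface and locate its optimum.
    # Factor settings and responses are synthetic.
    import numpy as np

    rng = np.random.default_rng(3)
    # Coded factor settings around the central point
    x1, x2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
    x1, x2 = x1.ravel(), x2.ravel()
    # Synthetic satisfaction score with a maximum near (0.3, -0.4)
    y = 8 - 2*(x1 - 0.3)**2 - 3*(x2 + 0.4)**2 + rng.normal(0, 0.1, x1.size)

    # Second-order response-surface design matrix
    X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Stationary point of the fitted surface: solve gradient = 0
    b = beta[1:3]
    B = np.array([[2*beta[4], beta[3]], [beta[3], 2*beta[5]]])
    opt = np.linalg.solve(B, -b)
    print("fitted optimum (coded units):", np.round(opt, 2))
    ```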

  14. 75 FR 43172 - Maternal, Infant, and Early Childhood Home Visiting Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-23

    ... the evaluation results have been published in a peer-reviewed journal; or (bb) quasi-experimental... design (i.e. randomized controlled trial [RCT] or quasi-experimental design [QED]), level of attrition... a quasi-experimental design as a study design in which sample members are selected for the program...

  15. Improved variance estimation of classification performance via reduction of bias caused by small sample size.

    PubMed

    Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders

    2006-03-13

    Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore different methods for small sample performance estimation such as a recently proposed procedure called Repeated Random Sampling (RSS) is also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed indicating that the method in its present form cannot be directly applied to small data sets.

  16. Sampling design and procedures for fixed surface-water sites in the Georgia-Florida coastal plain study unit, 1993

    USGS Publications Warehouse

    Hatzell, H.H.; Oaksford, E.T.; Asbury, C.E.

    1995-01-01

    The implementation of design guidelines for the National Water-Quality Assessment (NAWQA) Program has resulted in the development of new sampling procedures and the modification of existing procedures commonly used in the Water Resources Division of the U.S. Geological Survey. The Georgia-Florida Coastal Plain (GAFL) study unit began the intensive data collection phase of the program in October 1992. This report documents the implementation of the NAWQA guidelines by describing the sampling design and procedures for collecting surface-water samples in the GAFL study unit in 1993. This documentation is provided for agencies that use water-quality data and for future study units that will be entering the intensive phase of data collection. The sampling design is intended to account for large- and small-scale spatial variations, and temporal variations in water quality for the study area. Nine fixed sites were selected in drainage basins of different sizes and different land-use characteristics located in different land-resource provinces. Each of the nine fixed sites was sampled regularly for a combination of six constituent groups composed of physical and chemical constituents: field measurements, major ions and metals, nutrients, organic carbon, pesticides, and suspended sediments. Some sites were also sampled during high-flow conditions and storm events. Discussion of the sampling procedure is divided into three phases: sample collection, sample splitting, and sample processing. A cone splitter was used to split water samples for the analysis of the sampling constituent groups except organic carbon from approximately nine liters of stream water collected at four fixed sites that were sampled intensively. An example of the sample splitting schemes designed to provide the sample volumes required for each sample constituent group is described in detail. Information about onsite sample processing has been organized into a flowchart that describes a pathway for each of the constituent groups.

  17. Liquid chromatography tandem-mass spectrometry (LC-MS/MS) and dried blood spot sampling applied to pharmacokinetics studies in animals: Correlation of classic and block design.

    PubMed

    Baldo, Matías N; Angeli, Emmanuel; Gareis, Natalia C; Hunzicker, Gabriel A; Murguía, Marcelo C; Ortega, Hugo H; Hein, Gustavo J

    2018-04-01

    A relative bioavailability (RBA) study of two phenytoin (PHT) formulations was conducted in rabbits in order to compare the results obtained from different matrices (plasma and blood from dried blood spot (DBS) sampling) and different experimental designs (classic and block). The method was developed by liquid chromatography tandem mass spectrometry (LC-MS/MS) for plasma and blood samples. The sample preparation techniques, plasma protein precipitation and DBS, were validated according to international requirements. The analytical method was validated over the ranges 0.20-50.80 and 0.12-20.32 µg/ml (r > 0.999) for plasma and blood, respectively. Accuracy and precision were within the acceptance criteria for bioanalytical assay validation (<15% for bias and CV, <20% at the limit of quantification (LOQ)). PHT showed long-term stability in both plasma and blood, under refrigerated and room-temperature conditions. Haematocrit values were measured during the validation process and the RBA study. Finally, the pharmacokinetic parameters (Cmax, Tmax, and AUC0-t) obtained from the RBA study were tested. Results were highly comparable across matrices and experimental designs: a matrix correlation higher than 0.975 and a ratio of PHT(blood) = 1.158 x PHT(plasma) were obtained. These results show that the use of a classic experimental design with DBS sampling for animal pharmacokinetic studies should be encouraged, as it could help reduce the number of animals used and avoid animal euthanasia. Finally, the combination of DBS sampling with LC-MS/MS proved to be an excellent tool not only for therapeutic drug monitoring but also for RBA studies.

  18. Capillary pumping independent of the liquid surface energy and viscosity

    NASA Astrophysics Data System (ADS)

    Guo, Weijin; Hansson, Jonas; van der Wijngaart, Wouter

    2018-03-01

    Capillary pumping is an attractive means of liquid actuation because it is a passive mechanism, i.e., it does not rely on an external energy supply during operation. The capillary flow rate generally depends on the liquid sample viscosity and surface energy. This poses a problem for capillary-driven systems that rely on a predictable flow rate and for which the sample viscosity or surface energy are not precisely known. Here, we introduce the capillary pumping of sample liquids with a flow rate that is constant in time and independent of the sample viscosity and sample surface energy. These features are enabled by a design in which a well-characterized pump liquid is capillarily imbibed into the downstream section of the pump and thereby pulls the unknown sample liquid into the upstream pump section. The downstream pump geometry is designed to exert a Laplace pressure and fluidic resistance that are substantially larger than those exerted by the upstream pump geometry on the sample liquid. Hence, the influence of the unknown sample liquid on the flow rate is negligible. We experimentally tested pumps of the new design with a variety of sample liquids, including water, different samples of whole blood, different samples of urine, isopropanol, mineral oil, and glycerol. The capillary filling speeds of these liquids vary by more than a factor 1000 when imbibed to a standard constant cross-section glass capillary. In our new pump design, 20 filling tests involving these liquid samples with vastly different properties resulted in a constant volumetric flow rate in the range of 20.96-24.76 μL/min. We expect this novel capillary design to have immediate applications in lab-on-a-chip systems and diagnostic devices.
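
    The design argument can be restated with the standard capillary relations (generic textbook forms, with symbols chosen here rather than taken from the paper): the flow rate is set by the dominant Laplace pressure and fluidic resistance, which the pump geometry places downstream.

    ```latex
    % Generic capillary-flow relations behind the design argument
    % (standard textbook forms, not notation from the paper).
    \[
      \Delta P_{\mathrm{cap}} = \frac{2\gamma\cos\theta}{r},
      \qquad
      Q = \frac{\Delta P_{\mathrm{pump}} + \Delta P_{\mathrm{sample}}}
               {R_{\mathrm{pump}} + R_{\mathrm{sample}}}
      \;\approx\;
      \frac{\Delta P_{\mathrm{pump}}}{R_{\mathrm{pump}}}
      \quad\text{when } \Delta P_{\mathrm{pump}} \gg \Delta P_{\mathrm{sample}},\;
      R_{\mathrm{pump}} \gg R_{\mathrm{sample}}.
    \]
    ```

    In this limit the sample liquid's surface energy and viscosity drop out of the flow rate, which is the paper's central design claim.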

  19. Hybrid real-code ant colony optimisation for constrained mechanical design

    NASA Astrophysics Data System (ADS)

    Pholdee, Nantiwat; Bureerat, Sujin

    2016-01-01

    This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
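
    For readers unfamiliar with the sampling techniques mentioned, the snippet below shows the initial-population step only, using SciPy's Latin hypercube sampler as a stand-in (the paper also considers Monte Carlo and TPLHD initialization; the population size and design bounds here are arbitrary examples).

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Initial-population step for a population-based optimiser (e.g. ACOR),
    # using Latin hypercube sampling; sizes and bounds are illustrative.
    dim, pop_size = 5, 30
    lower, upper = np.zeros(dim), np.full(dim, 10.0)

    sampler = qmc.LatinHypercube(d=dim, seed=42)
    unit_pop = sampler.random(n=pop_size)            # stratified points in [0, 1)^dim
    population = qmc.scale(unit_pop, lower, upper)   # rescale to the design bounds
    print(population.shape)                          # (30, 5) -> one design per row
    ```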

  20. Planning Considerations for a Mars Sample Receiving Facility: Summary and Interpretation of Three Design Studies

    NASA Astrophysics Data System (ADS)

    Beaty, David W.; Allen, Carlton C.; Bass, Deborah S.; Buxbaum, Karen L.; Campbell, James K.; Lindstrom, David J.; Miller, Sylvia L.; Papanastassiou, Dimitri A.

    2009-10-01

    It has been widely understood for many years that an essential component of a Mars Sample Return mission is a Sample Receiving Facility (SRF). The purpose of such a facility would be to take delivery of the flight hardware that lands on Earth, open the spacecraft and extract the sample container and samples, and conduct an agreed-upon test protocol, while ensuring strict containment and contamination control of the samples while in the SRF. Any samples that are found to be non-hazardous (or are rendered non-hazardous by sterilization) would then be transferred to long-term curation. Although the general concept of an SRF is relatively straightforward, there has been considerable discussion about implementation planning. The Mars Exploration Program carried out an analysis of the attributes of an SRF to establish its scope, including minimum size and functionality, budgetary requirements (capital cost, operating costs, cost profile), and development schedule. The approach was to arrange for three independent design studies, each led by an architectural design firm, and compare the results. While there were many design elements in common identified by each study team, there were significant differences in the way human operators were to interact with the systems. In aggregate, the design studies provided insight into the attributes of a future SRF and the complex factors to consider for future programmatic planning.

  1. Planning considerations for a Mars Sample Receiving Facility: summary and interpretation of three design studies.

    PubMed

    Beaty, David W; Allen, Carlton C; Bass, Deborah S; Buxbaum, Karen L; Campbell, James K; Lindstrom, David J; Miller, Sylvia L; Papanastassiou, Dimitri A

    2009-10-01

    It has been widely understood for many years that an essential component of a Mars Sample Return mission is a Sample Receiving Facility (SRF). The purpose of such a facility would be to take delivery of the flight hardware that lands on Earth, open the spacecraft and extract the sample container and samples, and conduct an agreed-upon test protocol, while ensuring strict containment and contamination control of the samples while in the SRF. Any samples that are found to be non-hazardous (or are rendered non-hazardous by sterilization) would then be transferred to long-term curation. Although the general concept of an SRF is relatively straightforward, there has been considerable discussion about implementation planning. The Mars Exploration Program carried out an analysis of the attributes of an SRF to establish its scope, including minimum size and functionality, budgetary requirements (capital cost, operating costs, cost profile), and development schedule. The approach was to arrange for three independent design studies, each led by an architectural design firm, and compare the results. While there were many design elements in common identified by each study team, there were significant differences in the way human operators were to interact with the systems. In aggregate, the design studies provided insight into the attributes of a future SRF and the complex factors to consider for future programmatic planning.

  2. Design Effects and Generalized Variance Functions for the 1990-91 Schools and Staffing Survey (SASS). Volume II. Technical Report.

    ERIC Educational Resources Information Center

    Salvucci, Sameena; And Others

    This technical report provides the results of a study on the calculation and use of generalized variance functions (GVFs) and design effects for the 1990-91 Schools and Staffing Survey (SASS). The SASS is a periodic integrated system of sample surveys conducted by the National Center for Education Statistics (NCES) that produces sampling variances…

  3. Use of CFD for static sampling hood design: An example for methane flux assessment on landfill surfaces.

    PubMed

    Lucernoni, Federico; Rizzotto, Matteo; Tapparo, Federica; Capelli, Laura; Sironi, Selena; Busini, Valentina

    2016-11-01

    The work focuses on the principles for the design of a specific static hood and on the definition of an optimal sampling procedure for the assessment of landfill gas (LFG) surface emissions. This is carried out by means of computational fluid dynamics (CFD) simulations that investigate the fluid dynamic conditions inside the hood. The study proves that understanding these conditions is fundamental to interpreting the sampling results correctly: measured concentration values must be related to a suitable LFG emission model before emission rates can be estimated. For this reason, CFD is a useful tool for the design and evaluation of sampling systems, among other things to verify the fundamental hypotheses on which the mass balance for the sampling hood is defined. Although the procedure discussed here is specific to the investigated landfill, it can be generalized to other scenarios involving hood sampling.

  4. Repeat sample intraocular pressure variance in induced and naturally ocular hypertensive monkeys.

    PubMed

    Dawson, William W; Dawson, Judyth C; Hope, George M; Brooks, Dennis E; Percicot, Christine L

    2005-12-01

    To compare the repeat-sample mean variance of laser-induced ocular hypertension (OH) in rhesus monkeys with that of natural OH in age-range-matched monkeys of similar and dissimilar pedigrees, multiple monocular, retrospective intraocular pressure (IOP) measures were recorded repeatedly during a short sampling interval (SSI, 1-5 months) and a long sampling interval (LSI, 6-36 months). There were 5-13 eyes in each SSI and LSI subgroup. Each interval contained subgroups of Florida monkeys with natural hypertension (NHT), Florida monkeys with induced hypertension (IHT1), unrelated (Strasbourg, France) induced hypertensives (IHT2), and Florida age-range-matched controls (C). Repeat-sample individual variance means and related IOPs were analyzed by parametric analysis of variance (ANOVA), and the results were compared with a non-parametric Kruskal-Wallis ANOVA. As designed, all group intraocular pressure distributions were significantly different (P ≤ 0.009) except for the two (Florida/Strasbourg) induced-OH groups. A parametric 2 × 4 ANOVA for mean variance showed large, significant effects of treatment group and sampling interval; the nonparametric ANOVA produced similar results. The induced-OH sample variance mean was 43× the natural-OH sample variance mean for the LSI and 12× for the SSI. Laser-induced ocular hypertension in rhesus monkeys thus produces large IOP repeat-sample variance means compared with controls and natural OH.

  5. Modeling the Sensitivity of Field Surveys for Detection of Environmental DNA (eDNA)

    PubMed Central

    Schultz, Martin T.; Lance, Richard F.

    2015-01-01

    The environmental DNA (eDNA) method is the practice of collecting environmental samples and analyzing them for the presence of a genetic marker specific to a target species. Little is known about the sensitivity of the eDNA method. Sensitivity is the probability that the target marker will be detected if it is present in the water body. Methods and tools are needed to assess the sensitivity of sampling protocols, design eDNA surveys, and interpret survey results. In this study, the sensitivity of the eDNA method is modeled as a function of ambient target marker concentration. The model accounts for five steps of sample collection and analysis, including: 1) collection of a filtered water sample from the source; 2) extraction of DNA from the filter and isolation in a purified elution; 3) removal of aliquots from the elution for use in the polymerase chain reaction (PCR) assay; 4) PCR; and 5) genetic sequencing. The model is applicable to any target species. For demonstration purposes, the model is parameterized for bighead carp (Hypophthalmichthys nobilis) and silver carp (H. molitrix) assuming sampling protocols used in the Chicago Area Waterway System (CAWS). Simulation results show that eDNA surveys have a high false negative rate at low concentrations of the genetic marker. This is attributed to processing of water samples and division of the extraction elution in preparation for the PCR assay. Increases in field survey sensitivity can be achieved by increasing sample volume, sample number, and PCR replicates. Increasing sample volume yields the greatest increase in sensitivity. It is recommended that investigators estimate and communicate the sensitivity of eDNA surveys to help facilitate interpretation of eDNA survey results. In the absence of such information, it is difficult to evaluate the results of surveys in which no water samples test positive for the target marker. It is also recommended that invasive species managers articulate concentration-based sensitivity objectives for eDNA surveys. In the absence of such information, it is difficult to design appropriate sampling protocols. The model provides insights into how sampling protocols can be designed or modified to achieve these sensitivity objectives. PMID:26509674
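
    The staged structure of the model lends itself to a simple Monte Carlo illustration. The sketch below is an assumed simplification, not the authors' exact parameterization: marker copies captured on the filter are treated as Poisson, extraction losses as binomial, and detection requires at least one copy in the combined PCR aliquots. All rates and volumes are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def sensitivity(conc, vol_l=2.0, extraction_eff=0.5,
                    aliquot_frac=0.05, pcr_reps=8, n_sim=20000):
        """P(>= 1 marker copy reaches a PCR aliquot) at `conc` copies per litre."""
        captured = rng.poisson(conc * vol_l, n_sim)          # copies on the filter
        extracted = rng.binomial(captured, extraction_eff)   # copies in the elution
        # pcr_reps disjoint aliquots jointly hold pcr_reps * aliquot_frac of the elution
        in_aliquots = rng.binomial(extracted, min(1.0, pcr_reps * aliquot_frac))
        return (in_aliquots > 0).mean()

    for c in (0.1, 1.0, 10.0):
        print(f"{c:5.1f} copies/L -> sensitivity {sensitivity(c):.3f}")
    ```

    Consistent with the abstract, at low concentrations most simulated surveys miss the marker, and raising the sample volume (vol_l) increases sensitivity faster than adding PCR replicates does.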

  6. Modeling the Sensitivity of Field Surveys for Detection of Environmental DNA (eDNA).

    PubMed

    Schultz, Martin T; Lance, Richard F

    2015-01-01

    The environmental DNA (eDNA) method is the practice of collecting environmental samples and analyzing them for the presence of a genetic marker specific to a target species. Little is known about the sensitivity of the eDNA method. Sensitivity is the probability that the target marker will be detected if it is present in the water body. Methods and tools are needed to assess the sensitivity of sampling protocols, design eDNA surveys, and interpret survey results. In this study, the sensitivity of the eDNA method is modeled as a function of ambient target marker concentration. The model accounts for five steps of sample collection and analysis, including: 1) collection of a filtered water sample from the source; 2) extraction of DNA from the filter and isolation in a purified elution; 3) removal of aliquots from the elution for use in the polymerase chain reaction (PCR) assay; 4) PCR; and 5) genetic sequencing. The model is applicable to any target species. For demonstration purposes, the model is parameterized for bighead carp (Hypophthalmichthys nobilis) and silver carp (H. molitrix) assuming sampling protocols used in the Chicago Area Waterway System (CAWS). Simulation results show that eDNA surveys have a high false negative rate at low concentrations of the genetic marker. This is attributed to processing of water samples and division of the extraction elution in preparation for the PCR assay. Increases in field survey sensitivity can be achieved by increasing sample volume, sample number, and PCR replicates. Increasing sample volume yields the greatest increase in sensitivity. It is recommended that investigators estimate and communicate the sensitivity of eDNA surveys to help facilitate interpretation of eDNA survey results. In the absence of such information, it is difficult to evaluate the results of surveys in which no water samples test positive for the target marker. It is also recommended that invasive species managers articulate concentration-based sensitivity objectives for eDNA surveys. In the absence of such information, it is difficult to design appropriate sampling protocols. The model provides insights into how sampling protocols can be designed or modified to achieve these sensitivity objectives.

  7. Mixing modes in a population-based interview survey: comparison of a sequential and a concurrent mixed-mode design for public health research.

    PubMed

    Mauz, Elvira; von der Lippe, Elena; Allen, Jennifer; Schilling, Ralph; Müters, Stephan; Hoebel, Jens; Schmich, Patrick; Wetzstein, Matthias; Kamtsiuris, Panagiotis; Lange, Cornelia

    2018-01-01

    Population-based surveys currently face the problem of decreasing response rates. Mixed-mode designs are now being implemented more often to account for this, to improve sample composition and to reduce overall costs. This study examines whether a concurrent or sequential mixed-mode design achieves better results on a number of indicators of survey quality. Data were obtained from a population-based health interview survey of adults in Germany that was conducted as a methodological pilot study as part of the German Health Update (GEDA). Participants were randomly allocated to one of two surveys; each of the surveys had a different design. In the concurrent mixed-mode design (n = 617), two types of self-administered questionnaires (SAQ-Web and SAQ-Paper) and computer-assisted telephone interviewing were offered simultaneously to the respondents along with the invitation to participate. In the sequential mixed-mode design (n = 561), SAQ-Web was initially provided, followed by SAQ-Paper, with an option for a telephone interview being sent out together with the reminders at a later date. Finally, this study compared the response rates, sample composition, health indicators, item non-response, the scope of fieldwork and the costs of both designs. No systematic differences were identified between the two mixed-mode designs in terms of response rates, the socio-demographic characteristics of the achieved samples, or the prevalence rates of the health indicators under study. The sequential design gained a higher rate of online respondents. Very few telephone interviews were conducted for either design. With regard to data quality, the sequential design (which had more online respondents) showed less item non-response. There were minor differences between the designs in terms of their costs. Postage and printing costs were lower in the concurrent design, but labour costs were lower in the sequential design. No differences in health indicators were found between the two designs. Modelling these results for higher response rates and larger net sample sizes indicated that the sequential design was more cost and time-effective. This study contributes to the research available on implementing mixed-mode designs as part of public health surveys. Our findings show that SAQ-Paper and SAQ-Web questionnaires can be combined effectively. Sequential mixed-mode designs with higher rates of online respondents may be of greater benefit to studies with larger net sample sizes than concurrent mixed-mode designs.

  8. CONCENTRATIONS OF PESTICIDE FROM DERMAL SURFACES: A COMPARISON OF NHEXAS & AZ BORDER SAMPLES

    EPA Science Inventory

    NHEXAS-AZ was a statewide survey designed to gather data on the distributions of exposure from various media. Results of intensive sampling were obtained from 179 homes. Border-AZ was a similar study focusing on homes within 40 km of the Arizona-Mexico Border; similar results...

  9. Developing the design of a continuous national health survey for New Zealand

    PubMed Central

    2013-01-01

    Background A continuously operating survey can yield advantages in survey management, field operations, and the provision of timely information for policymakers and researchers. We describe the key features of the sample design of the New Zealand (NZ) Health Survey, which has been conducted on a continuous basis since mid-2011, and compare to a number of other national population health surveys. Methods A number of strategies to improve the NZ Health Survey are described: implementation of a targeted dual-frame sample design for better Māori, Pacific, and Asian statistics; movement from periodic to continuous operation; use of core questions with rotating topic modules to improve flexibility in survey content; and opportunities for ongoing improvements and efficiencies, including linkage to administrative datasets. Results and discussion The use of disproportionate area sampling and a dual frame design resulted in reductions of approximately 19%, 26%, and 4% to variances of Māori, Pacific and Asian statistics respectively, but at the cost of a 17% increase to all-ethnicity variances. These were broadly in line with the survey’s priorities. Respondents provided a high degree of cooperation in the first year, with an adult response rate of 79% and consent rates for data linkage above 90%. Conclusions A combination of strategies tailored to local conditions gives the best results for national health surveys. In the NZ context, data from the NZ Census of Population and Dwellings and the Electoral Roll can be used to improve the sample design. A continuously operating survey provides both administrative and statistical advantages. PMID:24364838

  10. 40 CFR 258.53 - Ground-water sampling and analysis requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... include consistent sampling and analysis procedures that are designed to ensure monitoring results that... testing period. If a multiple comparisons procedure is used, the Type I experiment wise error rate for...

  11. RANKED SET SAMPLING FOR ECOLOGICAL RESEARCH: ACCOUNTING FOR THE TOTAL COSTS OF SAMPLING

    EPA Science Inventory

    Researchers aim to design environmental studies that optimize precision and allow for generalization of results, while keeping the costs of associated field and laboratory work at a reasonable level. Ranked set sampling is one method to potentially increase precision and reduce ...
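
    The ranked set sampling idea can be sketched in a few lines: draw m small sets of m units, rank each set with a cheap auxiliary judgment, and pay measurement cost on only one order statistic per set. Everything below (population, ranking noise, set size) is illustrative rather than drawn from the record above.

    ```python
    import numpy as np

    # Minimal ranked set sampling (RSS) sketch on an invented population: draw m
    # sets of m units, rank each set by a cheap (noisy) auxiliary judgment, and
    # measure only the i-th ranked unit of the i-th set.
    rng = np.random.default_rng(9)

    def rss_sample(pop, m=4, rank_noise=0.1):
        chosen = []
        for i in range(m):
            set_i = rng.choice(pop, size=m, replace=False)
            order = np.argsort(set_i + rng.normal(0, rank_noise, m))  # imperfect ranking
            chosen.append(set_i[order[i]])       # costly measurement on one unit per set
        return np.array(chosen)

    pop = rng.lognormal(0.0, 0.5, 10_000)
    rss_means = [rss_sample(pop).mean() for _ in range(3000)]
    srs_means = [rng.choice(pop, 4).mean() for _ in range(3000)]
    print(f"SE of mean: RSS {np.std(rss_means):.4f} vs SRS {np.std(srs_means):.4f}")
    ```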

  12. Experimental design for three-color and four-color gene expression microarrays.

    PubMed

    Woo, Yong; Krueger, Winfried; Kaur, Anupinder; Churchill, Gary

    2005-06-01

    Three-color microarrays, compared with two-color microarrays, can increase design efficiency and power to detect differential expression without additional samples and arrays. Furthermore, three-color microarray technology is currently available at a reasonable cost. Despite the potential advantages, clear guidelines for designing and analyzing three-color experiments do not exist. We propose a three- and a four-color cyclic design (loop) and a complementary graphical representation to help design experiments that are balanced, efficient and robust to hybridization failures. In theory, three-color loop designs are more efficient than two-color loop designs. Experiments using both two- and three-color platforms were performed in parallel and their outputs were analyzed using linear mixed model analysis in R/MAANOVA. These results demonstrate that three-color experiments using the same number of samples (and fewer arrays) will perform as efficiently as two-color experiments. The improved efficiency of the design is somewhat offset by a reduced dynamic range and increased variability in the three-color experimental system. This result suggests that, with minor technological improvements, three-color microarrays using loop designs could detect differential expression more efficiently than two-color loop designs. The software is available at http://www.jax.org/staff/churchill/labsite/software; multicolor cyclic design construction methods and examples, along with additional results of the experiment, are provided at http://www.jax.org/staff/churchill/labsite/pubs/yong.

  13. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization.

    PubMed

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between logistical constraints and additional sampling performance should be carefully evaluated.
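
    A toy re-creation of the resampling logic (with fabricated capture records, not the study's data) shows how a six-hour sub-sampling schedule can be compared against full-night sampling:

    ```python
    import numpy as np

    # Fabricated capture records: species have activity peaks spread across the
    # night and skewed abundances, so early sub-sampling misses late specialists.
    rng = np.random.default_rng(1)
    n_sp, n_caps, n_nights = 60, 600, 40
    peak = rng.uniform(0, 12, n_sp)                  # species' activity peaks (h after sunset)
    w = rng.lognormal(0, 1, n_sp); w /= w.sum()      # skewed relative abundances

    night = rng.integers(0, n_nights, n_caps)
    species = rng.choice(n_sp, n_caps, p=w)
    hour = np.clip(rng.normal(peak[species], 2.0), 0, 12)

    def richness(mask):
        return np.unique(species[mask]).size

    full, first6 = [], []
    for _ in range(500):                             # crude bootstrap over nights
        nights = rng.choice(n_nights, n_nights, replace=True)
        mask = np.isin(night, nights)                # note: duplicate nights collapse
        full.append(richness(mask))
        first6.append(richness(mask & (hour < 6)))

    print(f"mean species richness: full night {np.mean(full):.1f} vs first 6 h {np.mean(first6):.1f}")
    ```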

  14. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    PubMed Central

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between logistical constraints and additional sampling performance should be carefully evaluated. PMID:28334046

  15. The complexity of personality: advantages of a genetically sensitive multi-group design.

    PubMed

    Hahn, Elisabeth; Spinath, Frank M; Siedler, Thomas; Wagner, Gert G; Schupp, Jürgen; Kandler, Christian

    2012-03-01

    Findings from many behavioral genetic studies utilizing the classical twin design suggest that genetic and non-shared environmental effects play a significant role in human personality traits. This study focuses on the methodological advantages of extending the sampling frame to include multiple dyads of relatives. We investigated the sensitivity of heritability estimates to the inclusion of sibling pairs, mother-child pairs and grandparent-grandchild pairs from the German Socio-Economic Panel Study in addition to a classical German twin sample consisting of monozygotic and dizygotic twins. The resulting dataset contained 1,308 pairs, including 202 monozygotic and 147 dizygotic twin pairs, along with 419 sibling pairs, 438 mother-child dyads, and 102 grandparent-child dyads. This genetically sensitive multi-group design allowed the simultaneous testing of additive and non-additive genetic, common and specific environmental effects, including cultural transmission and twin-specific environmental influences. Using manifest and latent modeling of phenotypes (i.e., controlling for measurement error), we compare results from the extended sample with those from the twin sample alone and discuss implications for future research.
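
    For orientation, the classical twin-only baseline that such extended designs refine can be written in two lines (Falconer's approximation; the correlations below are invented for the example):

    ```python
    # Classical twin-design baseline (Falconer's formula), shown only to contrast
    # with the extended multi-group model; the correlations are made up.
    r_mz, r_dz = 0.48, 0.26          # MZ and DZ twin correlations for a trait
    h2 = 2 * (r_mz - r_dz)           # additive genetic variance share
    c2 = r_mz - h2                   # shared-environment share
    e2 = 1 - r_mz                    # non-shared environment + measurement error
    print(f"h^2 = {h2:.2f}, c^2 = {c2:.2f}, e^2 = {e2:.2f}")
    ```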

  16. Design tradeoffs for trend assessment in aquatic biological monitoring programs

    USGS Publications Warehouse

    Gurtz, Martin E.; Van Sickle, John; Carlisle, Daren M.; Paulsen, Steven G.

    2013-01-01

    Assessments of long-term (multiyear) temporal trends in biological monitoring programs are generally undertaken without an adequate understanding of the temporal variability of biological communities. When the sources and levels of variability are unknown, managers cannot make informed choices in sampling design to achieve monitoring goals in a cost-effective manner. We evaluated different trend sampling designs by estimating components of both short- and long-term variability in biological indicators of water quality in streams. Invertebrate samples were collected from 32 sites—9 urban, 6 agricultural, and 17 relatively undisturbed (reference) streams—distributed throughout the United States. Between 5 and 12 yearly samples were collected at each site during the period 1993–2008, plus 2 samples within a 10-week index period during either 2007 or 2008. These data allowed calculation of four sources of variance for invertebrate indicators: among sites, among years within sites, interaction among sites and years (site-specific annual variation), and among samples collected within an index period at a site (residual). When estimates of these variance components are known, changes to sampling design can be made to improve trend detection. Design modifications that result in the ability to detect the smallest trend with the fewest samples are, from most to least effective: (1) increasing the number of years in the sampling period (duration of the monitoring program), (2) decreasing the interval between samples, and (3) increasing the number of repeat-visit samples per year (within an index period). This order of improvement in trend detection, which achieves the greatest gain for the fewest samples, is the same whether trends are assessed at an individual site or an average trend of multiple sites. In multiple-site surveys, increasing the number of sites has an effect similar to that of decreasing the sampling interval; the benefit of adding sites is greater when a new set of different sites is selected for each sampling effort than when the same sites are sampled each time. Understanding variance components of the ecological attributes of interest can lead to more cost-effective monitoring designs to detect trends.
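
    The ranking of design modifications can be illustrated with a small simulation. The sketch below invents variance components (they are not the study's estimates) and compares the standard error of a site's trend slope when years are added versus when within-year revisits are added:

    ```python
    import numpy as np

    # Monte Carlo sketch: shared year-to-year noise cannot be averaged away by
    # revisits within a year, so lengthening the record helps most.
    rng = np.random.default_rng(2)

    def trend_se(n_years, visits, sd_year=1.0, sd_resid=0.8, n_sim=2000):
        years = np.arange(n_years, dtype=float)
        slopes = []
        for _ in range(n_sim):
            year_eff = rng.normal(0, sd_year, n_years)       # among-years component
            obs = (year_eff[:, None]
                   + rng.normal(0, sd_resid, (n_years, visits))).mean(axis=1)
            slopes.append(np.polyfit(years, obs, 1)[0])
        return np.std(slopes)

    print(f"5 years, 1 visit : SE = {trend_se(5, 1):.3f}")
    print(f"10 years, 1 visit: SE = {trend_se(10, 1):.3f}")  # longer record helps most
    print(f"5 years, 4 visits: SE = {trend_se(5, 4):.3f}")   # more revisits help less
    ```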

  17. Measurement of forces applied to handgrips and pedals for a sample population of Mexican males.

    PubMed

    Lara-Lopez, A; Aguilera-Cortes, L A; Barbosa-Castillo, F

    1999-04-01

    Equipment design requirements for newly industrializing nations often differ from those of highly industrialized nations. In order to develop a 'culturally relevant' technology in Mexico, this paper reports the results of a study, conducted in Guanajuato state, designed to measure the maximum static forces exerted on pulling handgrips and pedals by seated male subjects. The project included the design and construction of an adjustable measuring apparatus. Handgrip measurements were taken with left and right arms at five different elbow angles; pedal measurements with left and right legs at three different knee angles. The arm data indicate that the relationship between appendage angle and force is similar for these data and those previously reported for a US sample, although there are some significant differences in magnitude. Implications of these results for machinery design are discussed.

  18. Early Results and Spaceflight Implications of the SWAB Flight Experiment

    NASA Technical Reports Server (NTRS)

    Ott, C. Mark; Pierson, Duane L.

    2007-01-01

    Microbial monitoring of spacecraft environments provides key information in the assessment of infectious disease risk to the crew. Monitoring aboard the Mir space station and International Space Station (ISS) has provided a tremendous informational baseline to aid in determining the types and concentrations of microorganisms during a mission. Still, current microbial monitoring hardware utilizes culture-based methodology which may not detect many medically significant organisms, such as Legionella pneumophila. We hypothesize that evaluation of the ISS environment using non-culture-based technologies would reveal microorganisms not previously reported in spacecraft, allowing for a more complete health assessment. To achieve this goal, a spaceflight experiment, operationally designated as SWAB, was designed to evaluate the DNA from environmental samples collected from ISS and vehicles destined for ISS. Results from initial samples indicate that the sample collection and return procedures were successful. Analysis of these samples using denaturing gradient gel electrophoresis and targeted PCR primers for fungal contaminants is underway. The current results of SWAB and their implication for in-flight molecular analysis of environmental samples will be discussed.

  19. A Passive Earth-Entry Capsule for Mars Sample Return

    NASA Technical Reports Server (NTRS)

    Mitcheltree, Robert A.; Kellas, Sotiris

    1999-01-01

    A combination of aerodynamic analysis and testing, aerothermodynamic analysis, structural analysis and testing, impact analysis and testing, thermal analysis, ground characterization tests, configuration packaging, and trajectory simulation are employed to determine the feasibility of an entirely passive Earth entry capsule for the Mars Sample Return mission. The design circumvents the potential failure modes of a parachute terminal descent system by replacing that system with passive energy absorbing material to cushion the Mars samples during ground impact. The suggested design utilizes a spherically blunted 45-degree half-angle cone forebody with an ablative heat shield. The primary structure is a hemispherical, composite sandwich enclosing carbon foam energy absorbing material. Though no demonstration test of the entire system is included, results of the tests and analysis presented indicate that the design is a viable option for the Mars Sample Return Mission.

  20. Prediction of reinforced concrete strength by ultrasonic velocities

    NASA Astrophysics Data System (ADS)

    Sabbağ, Nevbahar; Uyanık, Osman

    2017-06-01

    This study aimed to determine the strength of reinforced concrete and to reveal the effect of reinforcement on concrete strength using ultrasonic P- and S-wave velocities. Nine concrete designs exhibiting low, medium and high strength were prepared. For each design, four kinds of cubic samples were cast: unreinforced, and reinforced with bars of 10, 14 or 20 mm diameter. In total, 324 samples were studied (9 samples for each of the 36 design/reinforcement combinations). The prepared samples were subjected to water curing. On selected days of a 90-day period, P- and S-wave measurements were repeated to reveal changes in the seismic velocities of the samples depending on the presence and diameter of reinforcement. In addition, uniaxial compressive strength tests were performed by crushing 3 samples on the 7th, 28th and 90th days. The evaluations showed that seismic velocities and uniaxial compressive strength increased with reinforcement and reinforcement diameter in low-strength concretes. In high-strength concrete, however, the seismic velocities were not markedly affected by reinforcement or reinforcement diameter, while the uniaxial compressive strength values were negatively affected.

  1. Design of a portable electronic nose for real-fake detection of liquors

    NASA Astrophysics Data System (ADS)

    Qi, Pei-Feng; Zeng, Ming; Li, Zhi-Hua; Sun, Biao; Meng, Qing-Hao

    2017-09-01

    Portability is a major issue that influences the practical application of electronic noses (e-noses). For liquors detection, an e-nose must preprocess the liquid samples (e.g., using evaporation and thermal desorption), which makes the portable design even more difficult. To realize convenient and rapid detection of liquors, we designed a portable e-nose platform that consists of hardware and software systems. The hardware system contains an evaporation/sampling module, a reaction module, a control/data acquisition and analysis module, and a power module. The software system provides a user-friendly interface and can achieve automatic sampling and data processing. This e-nose platform has been applied to the real-fake recognition of Chinese liquors. Through parameter optimization of a one-class support vector machine classifier, the error rate of the negative samples is greatly reduced, and the overall recognition accuracy is improved. The results validated the feasibility of the designed portable e-nose platform.
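
    As a sketch of the classification step only, a one-class SVM trained on genuine samples flags off-distribution fakes. The synthetic Gaussian features below stand in for real sensor responses, and the kernel and nu values are arbitrary, not the authors' tuned parameters.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    # Real-fake decision sketch with synthetic features standing in for the
    # e-nose sensor responses; hyperparameters are illustrative.
    rng = np.random.default_rng(0)
    genuine_train = rng.normal(0.0, 1.0, (200, 8))   # 8 sensor features per sample
    genuine_test = rng.normal(0.0, 1.0, (50, 8))
    fake_test = rng.normal(2.5, 1.0, (50, 8))        # fakes drift off-distribution

    clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(genuine_train)
    print("genuine flagged as fake:", (clf.predict(genuine_test) == -1).mean())
    print("fake flagged as fake:   ", (clf.predict(fake_test) == -1).mean())
    ```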

  2. Automated array assembly, phase 2

    NASA Technical Reports Server (NTRS)

    Carbajal, B. G.

    1979-01-01

    Tasks of scaling up the tandem junction cell (TJC) from 2 cm x 2 cm to 6.2 cm and the assembly of several modules using these large area TJC's are described. The scale-up of the TJC was based on using the existing process and doing the necessary design activities to increase the cell area to an acceptably large area. The design was carried out using available device models. The design was verified and sample large area TJCs were fabricated. Mechanical and process problems occurred causing a schedule slippage that resulted in contract expiration before enough large-area TJCs were fabricated to populate the sample tandem junction modules (TJM). A TJM design was carried out in which the module interconnects served to augment the current collecting buses on the cell. No sample TJMs were assembled due to a shortage of large-area TJCs.

  3. Mars Sample Return Architecture Assessment Study

    NASA Astrophysics Data System (ADS)

    Centuori, S.; Hermosín, P.; Martín, J.; De Zaiacomo, G.; Colin, S.; Godfrey, A.; Myles, J.; Johnson, H.; Sachdev, T.; Ahmed, R.

    2018-04-01

    This paper presents the results of the ESA-funded activity "Mars Sample Return Architecture Assessment Study", carried out by DEIMOS Space, Lockheed Martin UK Ampthill, and MDA Corporation, in which more than 500 mission design options were studied.

  4. (PRESENTED NAQC SAN FRANCISCO, CA) COARSE PM METHODS STUDY: STUDY DESIGN AND RESULTS

    EPA Science Inventory

    Comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 in ambient air. Five separate sampling approaches were evaluated at each of three sampling sites. As the primary basis of comparison, a discrete ...

  5. Method and apparatus for automated processing and aliquoting of whole blood samples for analysis in a centrifugal fast analyzer

    DOEpatents

    Burtis, C.A.; Johnson, W.F.; Walker, W.A.

    1985-08-05

    A rotor and disc assembly for use in a centrifugal fast analyzer. The assembly is designed to process multiple samples of whole blood followed by aliquoting of the resultant serum into precisely measured samples for subsequent chemical analysis. The assembly requires minimal operator involvement with no mechanical pipetting. The system comprises: (1) a whole blood sample disc; (2) a serum sample disc; (3) a sample preparation rotor; and (4) an analytical rotor. The blood sample disc and serum sample disc are designed with a plurality of precision-bore capillary tubes arranged in a spoked array. Samples of blood are loaded into the blood sample disc by capillary action and centrifugally discharged into cavities of the sample preparation rotor where separation of serum and solids is accomplished. The serum is loaded into the capillaries of the serum sample disc by capillary action and subsequently centrifugally expelled into cuvettes of the analytical rotor for analysis by conventional methods. 5 figs.

  6. Method and apparatus for automated processing and aliquoting of whole blood samples for analysis in a centrifugal fast analyzer

    DOEpatents

    Burtis, Carl A.; Johnson, Wayne F.; Walker, William A.

    1988-01-01

    A rotor and disc assembly for use in a centrifugal fast analyzer. The assembly is designed to process multiple samples of whole blood followed by aliquoting of the resultant serum into precisely measured samples for subsequent chemical analysis. The assembly requires minimal operator involvement with no mechanical pipetting. The system comprises (1) a whole blood sample disc, (2) a serum sample disc, (3) a sample preparation rotor, and (4) an analytical rotor. The blood sample disc and serum sample disc are designed with a plurality of precision bore capillary tubes arranged in a spoked array. Samples of blood are loaded into the blood sample disc in capillary tubes filled by capillary action and centrifugally discharged into cavities of the sample preparation rotor where separation of serum and solids is accomplished. The serum is loaded into the capillaries of the serum sample disc by capillary action and subsequently centrifugally expelled into cuvettes of the analytical rotor for analysis by conventional methods.

  7. 77 FR 2299 - Agency Information Collection Activities; Proposed Collection; Comment Request; Healthcare...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-17

    ... survey is designed to explore the opinions and perceptions of physicians, nurse practitioners, and... assistants. Such a design will help to ensure our ability to discuss not only healthcare professional... well as any noncoverage or undersampling and oversampling resulting from the sample design...

  8. Model-based inference for small area estimation with sampling weights

    PubMed Central

    Vandendijck, Y.; Faes, C.; Kirby, R.S.; Lawson, A.; Hens, N.

    2017-01-01

    Obtaining reliable estimates about health outcomes for areas or domains where only few to no samples are available is the goal of small area estimation (SAE). Often, we rely on health surveys to obtain information about health outcomes. Such surveys are often characterised by a complex design, stratification, and unequal sampling weights as common features. Hierarchical Bayesian models are well recognised in SAE as a spatial smoothing method, but often ignore the sampling weights that reflect the complex sampling design. In this paper, we focus on data obtained from a health survey where the sampling weights of the sampled individuals are the only information available about the design. We develop a predictive model-based approach to estimate the prevalence of a binary outcome for both the sampled and non-sampled individuals, using hierarchical Bayesian models that take into account the sampling weights. A simulation study is carried out to compare the performance of our proposed method with other established methods. The results indicate that our proposed method achieves great reductions in mean squared error when compared with standard approaches. It performs equally well or better when compared with more elaborate methods when there is a relationship between the responses and the sampling weights. The proposed method is applied to estimate asthma prevalence across districts. PMID:28989860
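
    The basic motivation for carrying the weights into the model can be seen in a two-line estimator comparison. The outcome and weight vectors below are toy values, and the paper's actual approach embeds the weights in a hierarchical Bayesian model rather than using this simple design-weighted (Hajek) estimator.

    ```python
    import numpy as np

    # Design-weighted prevalence versus the unweighted mean on a toy sample.
    y = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])   # binary outcome (e.g. asthma)
    w = np.array([120, 80, 80, 300, 150, 60, 90, 200, 40, 110.0])  # sampling weights

    p_unweighted = y.mean()
    p_weighted = np.sum(w * y) / np.sum(w)         # Hajek estimator of prevalence
    print(f"unweighted {p_unweighted:.3f} vs design-weighted {p_weighted:.3f}")
    ```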

  9. Bionic Design for Mars Sampling Scoop Inspired by Himalayan Marmot Claw

    PubMed Central

    2016-01-01

    Cave animals are often adapted to digging and life underground, with claw toes similar in structure and function to a sampling scoop. In this paper, the clawed toes of the Himalayan marmot were selected as a biological prototype for bionic research. Based on geometric parameter optimization of the clawed toes, a bionic sampling scoop for use on Mars was designed. Using a 3D laser scanner, the point cloud data of the second front claw toe were acquired. Parametric equations and contour curves for the claw were then built with cubic polynomial fitting, yielding 18 characteristic curve equations for the internal and external contours of the claw. A bionic sampling scoop was designed according to the structural parameters of Curiosity's sampling shovel and the contours of the Himalayan marmot's claw. Verification tests showed that at a penetration angle of 45° and a sampling speed of 0.33 r/min, the resistance torque of the bionic sampling scoop was 49.6% lower than that of the prototype sampling scoop; at a penetration angle of 60° and a sampling speed of 0.22 r/min, it was 28.8% lower. PMID:28127229

  10. Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach

    NASA Astrophysics Data System (ADS)

    Xiao, T.

    2012-12-01

    One of the most important components in urban land cover mapping is mapping accuracy assessment. Many statistical models have been developed to help design sampling schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sample size is therefore crucial to implementing efficient and effective field data collection, yet few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design with sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High-resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
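
    A minimal sketch of the cost-benefit coupling, with the accuracy target and all per-sample unit costs invented for illustration: choose a sample size for a desired margin of error, then cost it out.

    ```python
    import math

    # Sketch of coupling sample size design to survey cost; the accuracy target
    # and all per-sample unit costs below are hypothetical.
    def n_for_margin(p=0.85, margin=0.05, z=1.96):
        """Simple-random-sampling size to estimate accuracy p within +/- margin."""
        return math.ceil(z**2 * p * (1 - p) / margin**2)

    def survey_cost(n, travel=15.0, field=8.0, lab=5.0):
        """Total cost: transportation + field data collection + lab analysis."""
        return n * (travel + field + lab)

    for m in (0.10, 0.05, 0.02):
        n = n_for_margin(margin=m)
        print(f"margin {m:.2f}: n = {n:5d}, cost = {survey_cost(n):10,.0f}")
    ```

    Halving the margin of error roughly quadruples both the sample size and the cost, which is the tradeoff the model is built to expose.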

  11. An information system design for watershed-wide modeling of water loss to the atmosphere using remote sensing techniques

    NASA Technical Reports Server (NTRS)

    Khorram, S.

    1977-01-01

    Results are presented of a study intended to develop a general location-specific remote-sensing procedure for watershed-wide estimation of water loss to the atmosphere by evaporation and transpiration. The general approach involves a stepwise sequence of required information definition (input data), appropriate sample design, mathematical modeling, and evaluation of results. More specifically, the remote sensing-aided system developed to evaluate evapotranspiration employs a basic two-stage two-phase sample of three information resolution levels. Based on the discussed design, documentation, and feasibility analysis to yield timely, relatively accurate, and cost-effective evapotranspiration estimates on a watershed or subwatershed basis, work is now proceeding to implement this remote sensing-aided system.

  12. Objective sampling design in a highly heterogeneous landscape - characterizing environmental determinants of malaria vector distribution in French Guiana, in the Amazonian region

    PubMed Central

    2013-01-01

    Background Sampling design is a key issue when establishing species inventories and characterizing habitats within highly heterogeneous landscapes. Sampling efforts in such environments may be constrained and many field studies only rely on subjective and/or qualitative approaches to design collection strategy. The region of Cacao, in French Guiana, provides an excellent study site to understand the presence and abundance of Anopheles mosquitoes, their species dynamics and the transmission risk of malaria across various environments. We propose an objective methodology to define a stratified sampling design. Following thorough environmental characterization, a factorial analysis of mixed groups allows the data to be reduced and non-collinear principal components to be identified while balancing the influences of the different environmental factors. Such components defined new variables which could then be used in a robust k-means clustering procedure. Then, we identified five clusters that corresponded to our sampling strata and selected sampling sites in each stratum. Results We validated our method by comparing the species overlap of entomological collections from selected sites and the environmental similarities of the same sites. The Morisita index was significantly correlated (Pearson linear correlation) with environmental similarity based on i) the balanced environmental variable groups considered jointly (p = 0.001) and ii) land cover/use (p-value << 0.001). The Jaccard index was significantly correlated with land cover/use-based environmental similarity (p-value = 0.001). Conclusions The results validate our sampling approach. Land cover/use maps (based on high spatial resolution satellite images) were shown to be particularly useful when studying the presence, density and diversity of Anopheles mosquitoes at local scales and in very heterogeneous landscapes. PMID:24289184
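
    Schematically, the stratification pipeline can be reproduced in a few lines. In the sketch below, random numbers replace the environmental variables, and plain PCA on standardized numeric data stands in for the factorial analysis of mixed groups (which also handles categorical variables); cluster and sample counts are arbitrary.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Stratified-design sketch: reduce environmental variables to non-collinear
    # components, cluster candidate sites into strata, then sample per stratum.
    rng = np.random.default_rng(3)
    env = rng.normal(size=(500, 12))                 # 500 candidate sites x 12 variables

    components = PCA(n_components=4).fit_transform(StandardScaler().fit_transform(env))
    strata = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(components)

    # Pick a fixed number of sampling sites per stratum
    sites = {k: rng.choice(np.flatnonzero(strata == k), size=3, replace=False)
             for k in range(5)}
    print({k: v.tolist() for k, v in sites.items()})
    ```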

  13. A Study Investigating Indian Middle School Students' Ideas of Design and Designers

    ERIC Educational Resources Information Center

    Ara, Farhat; Chunawala, Sugra; Natarajan, Chitra

    2011-01-01

    This paper reports on an investigation into middle school students' naive ideas about, and attitudes towards design and designers. The sample for the survey consisted of students from Classes 7 to 9 from a school located in Mumbai. The data were analysed qualitatively and quantitatively to look for trends in students' responses. Results show that…

  14. Improving power and robustness for detecting genetic association with extreme-value sampling design.

    PubMed

    Chen, Hua Yun; Li, Mingyao

    2011-12-01

    Extreme-value sampling design that samples subjects with extremely large or small quantitative trait values is commonly used in genetic association studies. Samples in such designs are often treated as "cases" and "controls" and analyzed using logistic regression. Such a case-control analysis ignores the potential dose-response relationship between the quantitative trait and the underlying trait locus and thus may lead to loss of power in detecting genetic association. An alternative approach to analyzing such data is to model the dose-response relationship by a linear regression model. However, parameter estimation from this model can be biased, which may lead to inflated type I errors. We propose a robust and efficient approach that takes into consideration both the biased sampling design and the potential dose-response relationship. Extensive simulations demonstrate that the proposed method is more powerful than the traditional logistic regression analysis and more robust than the linear regression analysis. We applied our method to the analysis of a candidate gene association study on high-density lipoprotein cholesterol (HDL-C) that included study subjects with extremely high or low HDL-C levels. Using our method, we identified several SNPs showing stronger evidence of association with HDL-C than the traditional case-control logistic regression analysis. Our results suggest that it is important to appropriately model the quantitative traits and to adjust for the biased sampling when a dose-response relationship exists in extreme-value sampling designs.
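
    The two reference analyses being contrasted are easy to reproduce on simulated data. The sketch below uses an invented effect size and tail fractions, and it implements only the dichotomized logistic fit and the naive linear fit, not the authors' proposed estimator.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Toy extreme-value sample: keep only the tails of a quantitative trait.
    rng = np.random.default_rng(11)
    n = 20000
    g = rng.binomial(2, 0.3, n)                    # SNP genotype, MAF 0.3
    trait = 0.2 * g + rng.normal(0.0, 1.0, n)      # dose-response, true beta = 0.2

    lo, hi = np.quantile(trait, [0.1, 0.9])
    keep = (trait <= lo) | (trait >= hi)           # sample only the tails
    gk, tk = g[keep], trait[keep]

    # Case-control analysis: dichotomizing the tails discards the dose-response
    logit = sm.Logit((tk >= hi).astype(int), sm.add_constant(gk)).fit(disp=0)
    # Naive linear analysis: the slope is biased under outcome-dependent sampling
    ols = sm.OLS(tk, sm.add_constant(gk)).fit()
    print(f"logistic beta = {logit.params[1]:.3f}; linear beta = {ols.params[1]:.3f} (true 0.2)")
    ```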

  15. A sapphire loaded TE011 cavity for surface impedance measurements: design, construction, and commissioning status

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    L. Phillips; G. K. Davis; J. R. Delayen

    2005-07-10

    In order to measure the superconducting surface properties of niobium that are of interest to SRF applications, a facility has been designed and fabricated that utilizes a Nb cavity operating in the TE011 mode at 7.65 GHz to provide a well-defined RF field on a disk-shaped sample. The RF losses due to the sample's surface impedance are determined using a calorimetric technique. The system has the capability to measure properties such as Rs(T) and penetration depth, which can then be correlated with surface properties and preparation processes. The design, fabrication, and results from initial commissioning operations will be discussed, along with the near-term sample evaluation program.

  16. Multidimensional System Analysis of Electro-Optic Sensors with Sampled Deterministic Output.

    DTIC Science & Technology

    1987-12-18

    System descriptions of scanning and staring electro-optic sensors with sampled output are developed as follows. Functions representing image...to complete the system descriptions. The results should be useful for designing electro-optic sensor systems and correcting data for instrumental...effects and other experimental conditions. Keywords include: electro-optic system analysis, scanning sensors, staring sensors, spatial sampling, and temporal sampling.

  17. A Comparison of Three Online Recruitment Strategies for Engaging Parents

    PubMed Central

    Dworkin, Jodi; Hessel, Heather; Gliske, Kate; Rudi, Jessie H.

    2017-01-01

    Family scientists can face the challenge of effectively and efficiently recruiting normative samples of parents and families. Utilizing the Internet to recruit parents is a strategic way to find participants where they already are, enabling researchers to overcome many of the barriers to in-person recruitment. The present study was designed to compare three online recruitment strategies for recruiting parents: e-mail Listservs, Facebook, and Amazon Mechanical Turk (MTurk). Analyses revealed differences in the effectiveness and efficiency of data collection. In particular, MTurk resulted in the most demographically diverse sample, in a short period of time, with little cost. Listservs reached a large number of participants and resulted in a comparatively homogeneous sample. Facebook was not successful in recruiting a general sample of parents. Findings provide information that can help family researchers and practitioners be intentional about recruitment strategies and study design. PMID:28804184

  18. A Comparison of Three Online Recruitment Strategies for Engaging Parents.

    PubMed

    Dworkin, Jodi; Hessel, Heather; Gliske, Kate; Rudi, Jessie H

    2016-10-01

    Family scientists can face the challenge of effectively and efficiently recruiting normative samples of parents and families. Utilizing the Internet to recruit parents is a strategic way to find participants where they already are, enabling researchers to overcome many of the barriers to in-person recruitment. The present study was designed to compare three online recruitment strategies for recruiting parents: e-mail Listservs, Facebook, and Amazon Mechanical Turk (MTurk). Analyses revealed differences in the effectiveness and efficiency of data collection. In particular, MTurk resulted in the most demographically diverse sample, in a short period of time, with little cost. Listservs reached a large number of participants and resulted in a comparatively homogeneous sample. Facebook was not successful in recruiting a general sample of parents. Findings provide information that can help family researchers and practitioners be intentional about recruitment strategies and study design.

  19. Metal Resistivity Measuring Device

    DOEpatents

    Renken, Jr, C. J.; Myers, R. G.

    1960-12-20

    An eddy current device is designed for detecting discontinuities in metal samples. Alternate short- and long-duration pulses are inductively applied to a metal sample via the outer coil of a probe. The long pulses give a resultant signal from the metal sample responsive to probe-to-sample spacing and discontinuities within the sample, and the short pulses give a resultant signal responsive only to probe-to-sample spacing. The inner coil of the probe detects the two resultant signals and transmits them to a separation network where the two signals are separated. The two separated signals are then transmitted to a compensation network where the detected signals due to the short pulses are used to compensate for variations due to probe-to-sample spacing contained in the detected signals from the long pulses. Thus a resultant signal is obtained responsive to discontinuities within the sample and independent of probe-to-sample spacing.

  20. The development of a Martian atmospheric Sample collection canister

    NASA Astrophysics Data System (ADS)

    Kulczycki, E.; Galey, C.; Kennedy, B.; Budney, C.; Bame, D.; Van Schilfgaarde, R.; Aisen, N.; Townsend, J.; Younse, P.; Piacentine, J.

    The collection of an atmospheric sample from Mars would provide significant insight into the elemental composition and sub-surface out-gassing rates of noble gases. A team of engineers at the Jet Propulsion Laboratory (JPL), California Institute of Technology has developed an atmospheric sample collection canister for Martian application. The engineering strategy has two basic elements: first, to collect two separately sealed 50 cubic centimeter unpressurized atmospheric samples with minimal sensing and actuation in a self-contained pressure vessel; and second, to package this atmospheric sample canister in such a way that it can be easily integrated into the orbiting sample capsule for collection and return to Earth. Sample collection and integrity are demonstrated by emulating the atmospheric collection portion of the Mars Sample Return mission on a compressed timeline. Test results were achieved by varying the pressure inside a thermal vacuum chamber while opening and closing the valve on the sample canister at Mars ambient pressure. A commercial off-the-shelf medical-grade micro-valve is utilized in the first iteration of this design to enable rapid testing of the system. The valve has been independently leak tested at JPL to quantify and separate the leak rates associated with the canister. The results are factored into an overall system design that quantifies mass, power, and sensing requirements for a Martian atmospheric Sample Collection (MASC) canister as outlined in the Mars Sample Return mission profile. Qualitative results include the selection of materials to minimize sample contamination, preliminary science requirements, priorities in sample composition, flight valve selection criteria, a storyboard from sample collection to loading in the orbiting sample capsule, and contributions to maintaining "Earth-clean" exterior surfaces on the orbiting sample capsule.

  1. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun

    2017-12-01

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
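
    A minimal one-dimensional sketch of the idea behind a TEAD-style adaptive design, assuming a cubic-spline surrogate, an equally weighted hybrid score, and an illustrative tolerance (none of which are taken from the paper): candidates are scored by combining their distance to the nearest training point with the discrepancy between the surrogate and its first-order Taylor expansion about that point, and the highest-scoring candidate is added until the score falls below the tolerance.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    f = lambda x: np.sin(3 * x) + 0.5 * x            # expensive model (stand-in)
    X = list(np.linspace(0.0, 2.0, 4))               # initial training design
    cand = np.linspace(0.0, 2.0, 201)                # candidate pool

    for _ in range(30):
        order = np.argsort(X)
        s = CubicSpline(np.array(X)[order], f(np.array(X)[order]))
        ds = s.derivative()
        nearest = np.array([min(X, key=lambda p, c=c: abs(c - p)) for c in cand])
        dist = np.abs(cand - nearest)
        taylor = s(nearest) + ds(nearest) * (cand - nearest)  # 1st-order expansion
        score = 0.5 * dist / dist.max() + 0.5 * np.abs(s(cand) - taylor)
        if score.max() < 1e-3:                       # illustrative stopping rule
            break
        X.append(float(cand[np.argmax(score)]))      # most informative candidate

    print("final design size:", len(X))
    ```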

  2. Molecular-beam gas-sampling system

    NASA Technical Reports Server (NTRS)

    Young, W. S.; Knuth, E. L.

    1972-01-01

    A molecular beam mass spectrometer system for rocket motor combustion chamber sampling is described. The history of the sampling system is reviewed. The problems associated with rocket motor combustion chamber sampling are reported. Several design equations are presented. The results of the experiments include the effects of cooling water flow rates, the optimum separation gap between the end plate and sampling nozzle, and preliminary data on compositions in a rocket motor combustion chamber.

  3. Field guide for collecting and processing stream-water samples for the National Water-Quality Assessment Program

    USGS Publications Warehouse

    Shelton, Larry R.

    1994-01-01

    The U.S. Geological Survey's National Water-Quality Assessment program includes extensive data-collection efforts to assess the quality of the Nation's streams. These studies require analyses of stream samples for major ions, nutrients, sediments, and organic contaminants. For the information to be comparable among studies in different parts of the Nation, consistent procedures specifically designed to produce uncontaminated samples for trace analysis in the laboratory are critical. This field guide describes the standard procedures for collecting and processing samples for major ions, nutrients, organic contaminants, sediment, and field analyses of conductivity, pH, alkalinity, and dissolved oxygen. Samples are collected and processed using modified and newly designed equipment made of Teflon to avoid contamination, including nonmetallic samplers (D-77 and DH-81) and a Teflon sample splitter. Field solid-phase extraction procedures developed to process samples for organic constituent analyses produce an extracted sample with stabilized compounds for more accurate results. Improvements to standard operational procedures include the use of processing chambers and capsule filtering systems. A modified collecting and processing procedure for organic carbon is designed to avoid contamination from equipment cleaned with methanol. Quality assurance is maintained by strict collecting and processing procedures, replicate sampling, equipment blank samples, and a rigid cleaning procedure using detergent, hydrochloric acid, and methanol.

  4. THE EFFECT OF VARYING ELECTROFISHING DESIGN ON BIOASSESSMENT RESULTS OF FOUR LARGE RIVERS IN THE OHIO RIVER BASIN

    EPA Science Inventory

    In 1999, the effect of electrofishing design (single bank or paired banks) and sampling distance on bioassessment results was studied in four boatable rivers in the Ohio River basin. The relationship between the number of species collected and the total distance electrofished wa...

  5. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

    A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.

  6. Water-quality trend analysis and sampling design for streams in North Dakota, 1971-2000

    USGS Publications Warehouse

    Vecchia, Aldo V.

    2003-01-01

    This report presents the results of a study conducted by the U.S. Geological Survey, in cooperation with the North Dakota Department of Health, to analyze historical water-quality trends in selected dissolved major ions, nutrients, and dissolved trace metals for 10 streams in southwestern and eastern North Dakota and to develop an efficient sampling design to monitor future water-quality trends. A time-series model for daily streamflow and constituent concentration was used to identify significant concentration trends, separate natural hydroclimatic variability in concentration from variability that could have resulted from anthropogenic causes, and evaluate various sampling designs to monitor future water-quality trends. The interannual variability in concentration as a result of variability in streamflow, referred to as the annual concentration anomaly, generally was high for all constituents and streams used in the trend analysis and was particularly sensitive to the severe drought that occurred in the late 1980's and the very wet period that began in 1993 and has persisted to the present (2002). Although climatic conditions were similar across North Dakota during the trend-analysis period (1971-2000), significant differences occurred in the annual concentration anomalies from constituent to constituent and location to location, especially during the drought and the wet period. Numerous trends were detected in the historical constituent concentrations after the annual concentration anomalies were removed. The trends within each of the constituent groups (major ions, nutrients, and trace metals) showed general agreement among the streams. For most locations, the largest dissolved major-ion concentrations occurred during the late 1970's and concentrations in the mid- to late 1990's were smaller than concentrations during the late 1970's. However, the largest concentrations for three of the Missouri River tributaries and one of the Red River of the North tributaries occurred during the mid- to late 1990's. Concentration trends for total ammonia plus organic nitrogen showed close agreement among the streams for which that constituent was evaluated. The largest concentrations occurred during the early 1980's, and the smallest concentrations occurred during the early 1990's. Nutrient data were not available for the early 1970's or late 1990's. Although a detailed analysis of the causes of the trends was beyond the scope of this report, a preliminary analysis of cropland, livestock-inventory, and oil-production data for 1971-2000 indicated the concentration trends may be related to the livestock-inventory and oil-production activities in the basins. Dissolved iron and manganese concentrations for the southwestern North Dakota streams generally remained stable during 1971-2000. However, many of the recorded concentrations for those streams were less than the detection limit, and trends that were masked by censoring may have occurred. Several significant trends were detected in dissolved iron and manganese concentrations for the eastern North Dakota streams. Concentrations for those streams either remained stable or increased during most of the 1970's and then decreased rapidly for about 2 years beginning in the late 1970's. The concentrations were relatively stable from the early 1980's to 2000 except at two locations where dissolved iron concentrations increased during the early 1990's. 
The most efficient overall sampling designs for the detection of annual trends (that is, trends that occur uniformly during the entire year) consisted of balanced designs in which the sampling dates and the number of samples collected remained fixed from year to year and in which the samples were collected throughout the year rather than in a short timespan. The best overall design for the detection of annual trends consisted of three samples per year, with samples collected near the beginning of December, April, and August. That design had acceptable sensitivity for the detection of trends in most constituents at all locations. Little improvement in sensitivity was achieved by collecting more than three samples per year. The sampling designs that were first evaluated for annual trends also were evaluated with regard to their sensitivity to detect seasonal trends that occurred during three seasons--April through August, August through December, and December through April. Design results indicated that an average of one extra sample per station per year resulted in an efficient design for detecting seasonal trends. However, allocation of the extra samples varied depending on the station, month, and constituent group (major ions, nutrients, and trace metals).

  7. Design and implementation of an optical Gaussian noise generator

    NASA Astrophysics Data System (ADS)

    Zão, Leonardo; Loss, Gustavo; Coelho, Rosângela

    2009-08-01

    A design of a fast and accurate optical Gaussian noise generator is proposed and demonstrated. The noise sample generation is based on the Box-Muller algorithm. The functions were implemented on a high-speed Altera Stratix EP1S25 field-programmable gate array (FPGA) development kit, enabling the generation of 150 million 16-bit noise samples per second. The Gaussian noise generator required only 7.4% of the FPGA logic elements, 1.2% of the RAM memory, 0.04% of the ROM memory, and a laser source. The optical pulses were generated by a laser source externally modulated by the data bit samples using the frequency-shift keying technique. The accuracy of the noise samples was evaluated for different sequence sizes and confidence intervals. The noise sample pattern was validated by the Bhattacharyya distance (Bd) and the autocorrelation function. The results show that the proposed design of the optical Gaussian noise generator is very promising for evaluating the performance of optical communication channels with very low bit-error-rate values.
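
    For reference, a software sketch of the Box-Muller transform that the generator implements, with a 16-bit quantization step standing in for the FPGA word format (the scaling and full-scale range are assumptions, not taken from the paper):

    ```python
    import numpy as np

    def box_muller(n, rng):
        u1 = 1.0 - rng.random(n)              # keep u1 > 0 so log() is finite
        u2 = rng.random(n)
        r = np.sqrt(-2.0 * np.log(u1))        # Rayleigh-distributed radius
        return r * np.cos(2.0 * np.pi * u2)   # standard normal samples

    def quantize16(z, full_scale=8.0):
        # Map [-full_scale, +full_scale) onto signed 16-bit integers.
        q = np.round(z / full_scale * 32768.0)
        return np.clip(q, -32768, 32767).astype(np.int16)

    rng = np.random.default_rng(0)
    samples = quantize16(box_muller(1_000_000, rng))
    print(samples.mean(), samples.std())      # ~0 and ~32768/8 = 4096
    ```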

  8. A robust approach to optimal matched filter design in ultrasonic non-destructive evaluation (NDE)

    NASA Astrophysics Data System (ADS)

    Li, Minghui; Hayward, Gordon

    2017-02-01

    The matched filter has been demonstrated to be a powerful yet efficient technique for enhancing defect detection and imaging in ultrasonic non-destructive evaluation (NDE) of coarse-grained materials, provided that the filter is properly designed and optimized. In the literature, the design used real excitation signals to approximate the defect echoes accurately, which made it time consuming and less straightforward to implement in practice. In this paper, we present a more robust and flexible approach to optimal matched filter design based on simulated excitation signals. The control parameters are chosen and optimized according to the actual array transducer, the transmitter-receiver system response, and the test sample, so that the filter response is tailored to the material characteristics. Experiments on industrial samples are conducted, and the results confirm the substantial benefits of the method.
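
    A minimal sketch of the matched-filtering step itself, using a Gaussian tone burst as a stand-in for the simulated excitation echo (centre frequency, bandwidth, and noise level are all invented): the A-scan is correlated with the reference echo, which concentrates the echo energy and lifts a weak defect reflection above the grain noise.

    ```python
    import numpy as np

    fs = 50e6                                   # sampling rate, Hz (assumed)
    t = np.arange(-1e-6, 1e-6, 1 / fs)
    ref = np.exp(-(t / 0.2e-6) ** 2) * np.cos(2 * np.pi * 5e6 * t)  # reference echo

    rng = np.random.default_rng(1)
    ascan = rng.normal(0.0, 0.5, 2000)          # grain noise (illustrative)
    ascan[900:900 + ref.size] += 0.4 * ref      # weak defect echo buried in noise

    # Matched filter = correlation with the (real-valued) reference echo.
    out = np.correlate(ascan, ref, mode="same")
    print("estimated echo centre:", np.argmax(np.abs(out)))   # ~950
    ```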

  9. Geospatial techniques for developing a sampling frame of watersheds across a region

    USGS Publications Warehouse

    Gresswell, Robert E.; Bateman, Douglas S.; Lienkaemper, George; Guy, T.J.

    2004-01-01

    Current land-management decisions that affect the persistence of native salmonids are often influenced by studies of individual sites that are selected based on judgment and convenience. Although this approach is useful for some purposes, extrapolating results to areas that were not sampled is statistically inappropriate because the sampling design is usually biased. Therefore, in recent investigations of coastal cutthroat trout (Oncorhynchus clarki clarki) located above natural barriers to anadromous salmonids, we used a methodology for extending the statistical scope of inference. The purpose of this paper is to apply geospatial tools to identify a population of watersheds and develop a probability-based sampling design for coastal cutthroat trout in western Oregon, USA. The population of mid-size watersheds (500-5800 ha) west of the Cascade Range divide was derived from watershed delineations based on digital elevation models. Because a database with locations of isolated populations of coastal cutthroat trout did not exist, a sampling frame of isolated watersheds containing cutthroat trout had to be developed. After the sampling frame of watersheds was established, isolated watersheds with coastal cutthroat trout were stratified by ecoregion and erosion potential based on dominant bedrock lithology (i.e., sedimentary and igneous). A stratified random sample of 60 watersheds was selected with proportional allocation in each stratum. By comparing watershed drainage areas of streams in the general population to those in the sampling frame and the resulting sample (n = 60), we were able to evaluate how representative the subset of watersheds was relative to the population of watersheds. Geospatial tools provided a relatively inexpensive means to generate the information necessary to develop a statistically robust, probability-based sampling design.
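
    A small sketch of stratified random sampling with proportional allocation, as used for the 60 watersheds; the stratum labels and frame counts below are invented for illustration, and largest-remainder rounding keeps the total at exactly n = 60.

    ```python
    import random

    frame = {                       # stratum -> number of watersheds in frame
        ("Coast Range", "sedimentary"): 417,
        ("Coast Range", "igneous"): 183,
        ("Cascades", "sedimentary"): 95,
        ("Cascades", "igneous"): 205,
    }
    n_total = 60
    N = sum(frame.values())

    # Proportional quotas, then largest-remainder rounding to hit n_total.
    quotas = {s: n_total * c / N for s, c in frame.items()}
    alloc = {s: int(q) for s, q in quotas.items()}
    for s in sorted(quotas, key=lambda s: quotas[s] - alloc[s], reverse=True):
        if sum(alloc.values()) == n_total:
            break
        alloc[s] += 1

    rng = random.Random(42)
    sample = {s: rng.sample(range(frame[s]), alloc[s]) for s in frame}
    print(alloc)   # e.g. {('Coast Range', 'sedimentary'): 28, ...}
    ```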

  10. Investigating Test Equating Methods in Small Samples through Various Factors

    ERIC Educational Resources Information Center

    Asiret, Semih; Sünbül, Seçil Ömür

    2016-01-01

    In this study, equating methods for the random groups design with small samples were compared across factors such as sample size, difference in difficulty between forms, and the guessing parameter. Which method gives better results under which conditions was also investigated. In this study, 5,000 dichotomous simulated data…

  11. Declustering of clustered preferential sampling for histogram and semivariogram inference

    USGS Publications Warehouse

    Olea, R.A.

    2007-01-01

    Measurements of attributes obtained more as a consequence of business ventures than sampling design frequently result in samplings that are preferential both in location and value, typically in the form of clusters along the pay. Preferential sampling requires preprocessing for the purpose of properly inferring characteristics of the parent population, such as the cumulative distribution and the semivariogram. Consideration of the distance to the nearest neighbor allows preparation of resampled sets that produce comparable results to those from previously proposed methods. Clustered sampling of size 140, taken from an exhaustive sampling, is employed to illustrate this approach. © International Association for Mathematical Geology 2007.
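
    One simple way to build such a resampled set, sketched below with invented coordinates: greedily retain points whose distance to every already-retained point exceeds a minimum spacing, which thins the preferential clusters while leaving sparse regional samples untouched (the threshold is illustrative, and this is only one of several nearest-neighbour declustering variants).

    ```python
    import numpy as np

    def decluster(xy, r_min):
        """Greedy spatial thinning: keep points at least r_min apart."""
        keep = []
        for i, p in enumerate(xy):
            if all(np.hypot(*(p - xy[j])) >= r_min for j in keep):
                keep.append(i)
        return np.array(keep)

    rng = np.random.default_rng(0)
    background = rng.uniform(0, 100, size=(60, 2))       # sparse regional data
    cluster = rng.normal([20, 30], 2.0, size=(80, 2))    # preferential cluster
    xy = np.vstack([background, cluster])

    kept = decluster(xy, r_min=5.0)
    print(len(xy), "->", len(kept), "samples after declustering")
    ```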

  12. Magnetic Barkhausen Noise Measurements Using Tetrapole Probe Designs

    NASA Astrophysics Data System (ADS)

    McNairnay, Paul

    A magnetic Barkhausen noise (MBN) testing system was developed for Defence Research and Development Canada (DRDC) to perform MBN measurements on the Royal Canadian Navy's Victoria class submarine hulls that can be correlated with material properties, including residual stress. The DRDC system was based on the design of a MBN system developed by Steven White at Queen's University, which was capable of performing rapid angular dependent measurements through the implementation of a flux controlled tetrapole probe. In tetrapole probe designs, the magnetic excitation field is rotated in the surface plane of the sample under the assumption of linear superposition of two orthogonal magnetic fields. During the course of this work, however, the validity of flux superposition in ferromagnetic materials, for the purpose of measuring MBN, was brought into question. Consequently, a study of MBN anisotropy using tetrapole probes was performed. Results indicate that MBN anisotropy measured under flux superposition does not simulate MBN anisotropy data obtained through manual rotation of a single dipole excitation field. It is inferred that MBN anisotropy data obtained with tetrapole probes is the result of the magnetic domain structure's response to an orthogonal magnetization condition and not necessarily to any bulk superposition magnetization in the sample. A qualitative model for the domain configuration under two orthogonal magnetic fields is proposed to describe the results. An empirically derived fitting equation, that describes tetrapole MBN anisotropy data, is presented. The equation describes results in terms of two largely independent orthogonal fields, and includes interaction terms arising due to competing orthogonally magnetized domain structures and interactions with the sample's magnetic easy axis. The equation is used to fit results obtained from a number of samples and tetrapole orientations and in each case correctly identifies the samples' magnetic easy axis.

  13. Correction of sampling bias in a cross-sectional study of post-surgical complications.

    PubMed

    Fluss, Ronen; Mandel, Micha; Freedman, Laurence S; Weiss, Inbal Salz; Zohar, Anat Ekka; Haklai, Ziona; Gordon, Ethel-Sherry; Simchen, Elisheva

    2013-06-30

    Cross-sectional designs are often used to monitor the proportion of infections and other post-surgical complications acquired in hospitals. However, conventional methods for estimating incidence proportions when applied to cross-sectional data may provide estimators that are highly biased, as cross-sectional designs tend to include a high proportion of patients with prolonged hospitalization. One common solution is to use sampling weights in the analysis, which adjust for the sampling bias inherent in a cross-sectional design. The current paper describes in detail a method to build weights for a national survey of post-surgical complications conducted in Israel. We use the weights to estimate the probability of surgical site infections following colon resection, and validate the results of the weighted analysis by comparing them with those obtained from a parallel study with a historically prospective design. Copyright © 2012 John Wiley & Sons, Ltd.
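
    The core of the correction can be illustrated with a toy simulation (all numbers invented): a cross-sectional survey includes patients with probability roughly proportional to their length of stay, so the naive infection proportion is inflated, while weighting each sampled patient by the inverse of the length of stay restores an approximately unbiased estimate.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    infected = rng.random(n) < 0.05                 # true incidence 5%
    los = np.where(infected, rng.gamma(4, 5, n),    # infections prolong stay
                   rng.gamma(2, 3, n))              # length of stay, days

    # Cross-sectional inclusion probability proportional to length of stay.
    picked = rng.random(n) < los / los.max()
    naive = infected[picked].mean()                 # biased upward
    weighted = np.average(infected[picked], weights=1.0 / los[picked])
    print(f"naive {naive:.3f}  weighted {weighted:.3f}  truth 0.050")
    ```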

  14. Under-sampling trajectory design for compressed sensing based DCE-MRI.

    PubMed

    Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting

    2013-01-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed Sensing (CS) has the potential to meet both requirements. However, the randomness in a CS under-sampling trajectory designed using the traditional variable-density (VD) scheme may translate into uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation using the VD scheme usually needs multiple adjustments of the parameters of the Probability Density Function (PDF), and multiple reconstructions even with a fixed PDF, which is inapplicable for DCE-MRI. In this paper, an under-sampling trajectory design that is robust to changes in the PDF parameters and to randomness with a fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains, and to apply the VD scheme only in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.
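
    A sketch of the segmented under-sampling idea, assuming a square Cartesian grid, a fixed (rather than adaptively chosen) low-frequency core, and an illustrative polynomial density: the centre of k-space is fully sampled and the variable-density random pattern is applied only in the high-frequency region.

    ```python
    import numpy as np

    def vd_mask(n=256, core_frac=0.12, accel=4, power=4, seed=0):
        rng = np.random.default_rng(seed)
        ky, kx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
        r = np.hypot(kx, ky)
        mask = r <= core_frac                      # fully sampled centre
        pdf = (1 - np.clip(r, 0, 1)) ** power      # density decays with |k|
        # Scale the density so the expected total hits the target budget.
        pdf *= (n * n / accel - mask.sum()) / pdf[~mask].sum()
        mask[~mask] = rng.random((~mask).sum()) < pdf[~mask]
        return mask

    m = vd_mask()
    print("sampling fraction:", m.mean())          # roughly 1/accel
    ```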

  15. Using Bayesian Adaptive Trial Designs for Comparative Effectiveness Research: A Virtual Trial Execution.

    PubMed

    Luce, Bryan R; Connor, Jason T; Broglio, Kristine R; Mullins, C Daniel; Ishak, K Jack; Saunders, Elijah; Davis, Barry R

    2016-09-20

    Bayesian and adaptive clinical trial designs offer the potential for more efficient processes that result in smaller sample sizes and shorter trial durations than traditional designs. To explore the use and potential benefits of Bayesian adaptive clinical trial designs in comparative effectiveness research. Virtual execution of ALLHAT (Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial) as if it had been done according to a Bayesian adaptive trial design. Comparative effectiveness trial of antihypertensive medications. Patient data sampled from the more than 42 000 patients enrolled in ALLHAT with publicly available data. Number of patients randomly assigned between groups, trial duration, observed numbers of events, and overall trial results and conclusions. The Bayesian adaptive approach and original design yielded similar overall trial conclusions. The Bayesian adaptive trial randomly assigned more patients to the better-performing group and would probably have ended slightly earlier. This virtual trial execution required limited resampling of ALLHAT patients for inclusion in RE-ADAPT (REsearch in ADAptive methods for Pragmatic Trials). Involvement of a data monitoring committee and other trial logistics were not considered. In a comparative effectiveness research trial, Bayesian adaptive trial designs are a feasible approach and potentially generate earlier results and allocate more patients to better-performing groups. National Heart, Lung, and Blood Institute.

  16. Saint Louis region : small sample travel survey

    DOT National Transportation Integrated Search

    1991-02-01

    This report summarizes results of the St. Louis Region Small Sample Travel Survey. A total of 1,446 households participated in the survey, which was designed to collect travel characteristics data from residents of the St. Louis metropolitan region. ...

  17. Flight Tests of N.A.C.A. Nose-slot Cowlings on the BFC-1 Airplane

    NASA Technical Reports Server (NTRS)

    Stickle, George W

    1939-01-01

    The results of flight tests of four nose-slot cowling designs with several variations in each design are presented. The tests were made in the process of developing the nose-slot cowling. The results demonstrate that a nose-slot cowling may be successfully applied to an airplane and that it utilizes the increased slipstream velocity of low-speed operation to produce increased cooling pressure across the engine. A sample design calculation using results from wind-tunnel, flight, and ground tests is given in an appendix to illustrate the design procedure.

  18. Ultra-Gradient Test Cavity for Testing SRF Wafer Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pogue, N. J.; McIntyre, P. M.; Sattarov, A. I.; Reece, C.

    2010-11-01

    A 1.3 GHz test cavity has been designed to test wafer samples of superconducting materials. This mushroom-shaped cavity, operating in the TE01 mode, creates a unique distribution of surface fields. The surface magnetic field on the sample wafer is 3.75 times greater than elsewhere on the niobium cavity surface. This field design is made possible by dielectrically loading the cavity: a hemisphere of ultra-pure sapphire is located just above the sample wafer. The sapphire pulls the fields away from the walls so the maximum field the Nb surface sees is 25% of the surface field on the sample. In this manner, it should be possible to drive the sample wafer well beyond the BCS limit for niobium while still maintaining a respectable Q. The sapphire's purity must be tested for its loss tangent and dielectric constant to finalize the design of the mushroom test cavity. A sapphire-loaded CEBAF cavity has been constructed and tested. The results on the dielectric constant and loss tangent will be presented.

  19. Network Model-Assisted Inference from Respondent-Driven Sampling Data

    PubMed Central

    Gile, Krista J.; Handcock, Mark S.

    2015-01-01

    Respondent-Driven Sampling is a widely-used method for sampling hard-to-reach human populations by link-tracing over their social networks. Inference from such data requires specialized techniques because the sampling process is both partially beyond the control of the researcher, and partially implicitly defined. Therefore, it is not generally possible to directly compute the sampling weights for traditional design-based inference, and likelihood inference requires modeling the complex sampling process. As an alternative, we introduce a model-assisted approach, resulting in a design-based estimator leveraging a working network model. We derive a new class of estimators for population means and a corresponding bootstrap standard error estimator. We demonstrate improved performance compared to existing estimators, including adjustment for an initial convenience sample. We also apply the method and an extension to the estimation of HIV prevalence in a high-risk population. PMID:26640328

  20. Network Model-Assisted Inference from Respondent-Driven Sampling Data.

    PubMed

    Gile, Krista J; Handcock, Mark S

    2015-06-01

    Respondent-Driven Sampling is a widely-used method for sampling hard-to-reach human populations by link-tracing over their social networks. Inference from such data requires specialized techniques because the sampling process is both partially beyond the control of the researcher, and partially implicitly defined. Therefore, it is not generally possible to directly compute the sampling weights for traditional design-based inference, and likelihood inference requires modeling the complex sampling process. As an alternative, we introduce a model-assisted approach, resulting in a design-based estimator leveraging a working network model. We derive a new class of estimators for population means and a corresponding bootstrap standard error estimator. We demonstrate improved performance compared to existing estimators, including adjustment for an initial convenience sample. We also apply the method and an extension to the estimation of HIV prevalence in a high-risk population.

  1. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    USGS Publications Warehouse

    Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, Gretchen G.

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
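
    The methodological point about resubstitution versus cross-validation is easy to reproduce; the sketch below uses scikit-learn and synthetic data as a stand-in for the lichen surveys: a classification tree scores near-perfectly on the data it was grown from, while 10-fold cross-validation gives a more honest estimate of predictive accuracy.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic presence/absence data (invented, not the lichen surveys).
    X, y = make_classification(n_samples=400, n_features=12, n_informative=5,
                               random_state=0)
    tree = DecisionTreeClassifier(random_state=0).fit(X, y)

    resub = tree.score(X, y)                                  # typically ~1.0
    cv = cross_val_score(DecisionTreeClassifier(random_state=0),
                         X, y, cv=10).mean()                  # closer to truth
    print(f"resubstitution {resub:.2f}  10-fold CV {cv:.2f}")
    ```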

  2. Sample size calculations for stepped wedge and cluster randomised trials: a unified approach

    PubMed Central

    Hemming, Karla; Taljaard, Monica

    2016-01-01

    Objectives To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is an infeasible design. Conclusion Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
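
    For orientation, the textbook design effect for the parallel CRT with cluster size m and intracluster correlation coefficient rho is shown below; the stepped wedge design effect in the paper additionally depends on the number of steps and the allocation of observations over time, so this is background rather than the paper's formula.

    ```latex
    % Parallel cluster randomized trial, m subjects per cluster,
    % intracluster correlation coefficient \rho:
    \[
      \mathrm{DE}_{\mathrm{CRT}} \;=\; 1 + (m-1)\rho ,
      \qquad
      n_{\mathrm{CRT}} \;=\; \mathrm{DE}_{\mathrm{CRT}} \times n_{\mathrm{individually\ randomized}} .
    \]
    ```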

  3. Sampled-data design for sliding mode control based on various robust specifications in open quantum system

    NASA Astrophysics Data System (ADS)

    Ji, Yinghua; Hu, Ju-Ju; Huang, Jian-Hua; Ke, Qiang

    Due to the influence of decoherence, a quantum state may evolve from an initial pure state into a mixed state, resulting in loss of fidelity, coherence, and purity, which degrades quantum information transmission. Thus, in quantum engineering, quantum control should not only realize the transfer and tracking of quantum states through manipulation of the external electromagnetic field but also enhance robustness against decoherence. In this paper, we aim to design a control law that steers the system into a sliding mode domain and maintains it in that domain when bounded uncertainties exist in the system Hamiltonian. We first define the required control performance in terms of fidelity, degree of coherence, and purity with respect to the uncertainty of the Hamiltonian in a Markovian open quantum system. By characterizing the required robustness using a sliding mode domain, a sampled-data design method is introduced for decoherence control in the quantum system. Furthermore, utilizing the sampled data, a control scheme is designed on the basis of sliding mode control, and the choice of the sampling operator and the driving of the quantum state during sampling periods via Lyapunov control methods are discussed.

  4. A sampling plan for riparian birds of the Lower Colorado River-Final Report

    USGS Publications Warehouse

    Bart, Jonathan; Dunn, Leah; Leist, Amy

    2010-01-01

    A sampling plan was designed for the Bureau of Reclamation for selected riparian birds occurring along the Colorado River from Lake Mead to the southerly International Boundary with Mexico. The goals of the sampling plan were to estimate long-term trends in abundance and investigate habitat relationships, especially in new habitat being created by the Bureau of Reclamation. The initial objective was to design a plan for the Gila Woodpecker (Melanerpes uropygialis), Arizona Bell's Vireo (Vireo bellii arizonae), Sonoran Yellow Warbler (Dendroica petechia sonorana), Summer Tanager (Piranga rubra), Gilded Flicker (Colaptes chrysoides), and Vermilion Flycatcher (Pyrocephalus rubinus); however, too little data were obtained for the last two species. Recommendations were therefore based on results for the first four species. The study area was partitioned into plots of 7 to 23 hectares. Plot borders were drawn to place the best habitat for the focal species in the smallest number of plots so that survey efforts could be concentrated on these habitats. Double sampling was used in the survey. In this design, a large sample of plots is surveyed a single time, yielding estimates of unknown accuracy, and a subsample is surveyed intensively to obtain accurate estimates. The subsample is used to estimate detection ratios, which are then applied to the results from the extensive survey to obtain unbiased estimates of density and population size. These estimates are then used to estimate long-term trends in abundance. Four sampling plans for selecting plots were evaluated based on a simulation using data from the Breeding Bird Survey. The design with the highest power involved selecting new plots every year. Power with 80 plots surveyed per year was more than 80 percent for three of the four species. Results from the surveys were used to provide recommendations to the Bureau of Reclamation for their surveys of new habitat being created in the study area.
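
    The double-sampling correction reduces to a ratio estimator; the sketch below uses invented counts: the intensive subsample yields a detection ratio, which then scales up the single-visit counts from the extensive survey.

    ```python
    # Double-sampling ratio estimator (all counts invented for illustration).
    extensive_counts = [3, 0, 5, 2, 4, 1, 0, 6]    # one visit per plot
    intensive_counted = 18                          # birds counted on subsample plots
    intensive_true = 24                             # birds actually present there

    detection_ratio = intensive_counted / intensive_true      # 0.75
    corrected_total = sum(extensive_counts) / detection_ratio
    print(f"index count {sum(extensive_counts)}, "
          f"estimated population {corrected_total:.0f}")      # 21 -> 28
    ```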

  5. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    PubMed Central

    Smith, Justin D.

    2013-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874

  6. Prospective power calculations for the Four Lab study of a multigenerational reproductive/developmental toxicity rodent bioassay using a complex mixture of disinfection by-products in the low-response region.

    PubMed

    Dingus, Cheryl A; Teuschler, Linda K; Rice, Glenn E; Simmons, Jane Ellen; Narotsky, Michael G

    2011-10-01

    In complex mixture toxicology, there is growing emphasis on testing environmentally representative doses that improve the relevance of results for health risk assessment, but are typically much lower than those used in traditional toxicology studies. Traditional experimental designs with typical sample sizes may have insufficient statistical power to detect effects caused by environmentally relevant doses. Proper study design, with adequate statistical power, is critical to ensuring that experimental results are useful for environmental health risk assessment. Studies with environmentally realistic complex mixtures have practical constraints on sample concentration factor and sample volume as well as the number of animals that can be accommodated. This article describes methodology for calculation of statistical power for non-independent observations for a multigenerational rodent reproductive/developmental bioassay. The use of the methodology is illustrated using the U.S. EPA's Four Lab study in which rodents were exposed to chlorinated water concentrates containing complex mixtures of drinking water disinfection by-products. Possible experimental designs included two single-block designs and a two-block design. Considering the possible study designs and constraints, a design of two blocks of 100 females with a 40:60 ratio of control:treated animals and a significance level of 0.05 yielded maximum prospective power (~90%) to detect pup weight decreases, while providing the most power to detect increased prenatal loss.
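
    A Monte Carlo sketch of such a prospective power calculation with non-independent observations, analysing litter means so that pups within a litter are not treated as independent; the 40:60 control:treated split across two blocks of 100 dams follows the abstract, while the effect size, variance components, and litter size are invented.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def litter_means(n_dams, shift):
        # One value per litter: the litter-mean pup weight (units notional).
        dam = rng.normal(0.0, 0.30, n_dams)                   # between-litter SD
        within = rng.normal(0.0, 0.25 / np.sqrt(10), n_dams)  # ~10 pups/litter
        return 6.0 + shift + dam + within

    def one_trial(effect=-0.15):
        # Two blocks of 100 dams, 40 control : 60 treated in each block.
        ctrl = np.concatenate([litter_means(40, 0.0) for _ in range(2)])
        trt = np.concatenate([litter_means(60, effect) for _ in range(2)])
        return stats.ttest_ind(ctrl, trt).pvalue < 0.05

    power = np.mean([one_trial() for _ in range(2000)])
    print(f"estimated power to detect the weight decrease: {power:.2f}")
    ```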

  7. Prospective Power Calculations for the Four Lab Study of A Multigenerational Reproductive/Developmental Toxicity Rodent Bioassay Using A Complex Mixture of Disinfection By-Products in the Low-Response Region

    PubMed Central

    Dingus, Cheryl A.; Teuschler, Linda K.; Rice, Glenn E.; Simmons, Jane Ellen; Narotsky, Michael G.

    2011-01-01

    In complex mixture toxicology, there is growing emphasis on testing environmentally representative doses that improve the relevance of results for health risk assessment, but are typically much lower than those used in traditional toxicology studies. Traditional experimental designs with typical sample sizes may have insufficient statistical power to detect effects caused by environmentally relevant doses. Proper study design, with adequate statistical power, is critical to ensuring that experimental results are useful for environmental health risk assessment. Studies with environmentally realistic complex mixtures have practical constraints on sample concentration factor and sample volume as well as the number of animals that can be accommodated. This article describes methodology for calculation of statistical power for non-independent observations for a multigenerational rodent reproductive/developmental bioassay. The use of the methodology is illustrated using the U.S. EPA’s Four Lab study in which rodents were exposed to chlorinated water concentrates containing complex mixtures of drinking water disinfection by-products. Possible experimental designs included two single-block designs and a two-block design. Considering the possible study designs and constraints, a design of two blocks of 100 females with a 40:60 ratio of control:treated animals and a significance level of 0.05 yielded maximum prospective power (~90%) to detect pup weight decreases, while providing the most power to detect increased prenatal loss. PMID:22073030

  8. Sandia Strehl Calculator Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthony, Stephen M

    The Sandia Strehl Calculator is designed to calculate the Gibson and Lanni point spread function (PSF), Strehl ratio, and ensquared energy, allowing non-design immersion, coverslip, and sample layers. It also uses Abbe number calculations to determine the refractive index at specific wavelengths when given the refractive index at a different wavelength and the dispersion. The primary application of the Sandia Strehl Calculator is to determine the theoretical impacts of using an optical microscope beyond its normal design parameters. Examples of non-design microscope usage include: a) using coverslips of non-design material; b) using coverslips of different thicknesses; c) imaging deep into an aqueous sample with an immersion objective; and d) imaging a sample at 37 degrees. All of these changes can affect the imaging quality, sometimes profoundly, but are at the same time non-design conditions employed not infrequently. Rather than having to experimentally determine whether the changes will result in unacceptable image quality, the Sandia Strehl Calculator uses existing optical theory to determine the approximate effect of the change, saving the need to perform experiments.
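
    For orientation, the Strehl ratio that the tool reports is commonly approximated, for small aberrations, by the extended Maréchal expression below, where sigma is the RMS wavefront error introduced by the non-design immersion, coverslip, or sample layers and lambda the wavelength; this is standard optics background, not a formula quoted from the software documentation.

    ```latex
    % Extended Marechal approximation: \sigma = RMS wavefront error,
    % \lambda = wavelength (same units):
    \[
      S \;\approx\; \exp\!\left[ -\left( \frac{2\pi\sigma}{\lambda} \right)^{2} \right] .
    \]
    ```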

  9. Basic School Teachers' Perceptions about Curriculum Design in Ghana

    ERIC Educational Resources Information Center

    Abudu, Amadu Musah; Mensah, Mary Afi

    2016-01-01

    This study focused on teachers' perceptions about curriculum design and barriers to their participation. The sample size was 130 teachers who responded to a questionnaire. The analyses made use of descriptive statistics and descriptions. The study found that the level of teachers' participation in curriculum design is low. The results further…

  10. [Survey of what is published on Italian nursing journals].

    PubMed

    Bongiorno, Elena; Colleoni, Pasqualina; Casati, Monica

    2005-01-01

    Nursing research is an important activity for nurses; its main aim is to improve the quality of nursing care. Several national and European laws have been issued on the subject. To develop nursing knowledge, nurses must understand the results of research, implement them in different settings, and sometimes carry out research themselves. Results can be published in nursing journals, which many nurses use to share information. This study reviewed the characteristics of research articles published in Italian nursing journals from 1998 to 2003. The phenomena of interest were areas of enquiry, investigators, methods, research design, sampling, and means of gathering data. Of the 122 articles reviewed, 78% focused on clinical aspects, 55% were carried out by nurses, 92% adopted a quantitative approach, 90% used a non-experimental design, 89% used convenience sampling, and 58% gathered data through written response formats. The characteristics found in this study are similar to those reported by other studies of Italian nursing publications. This type of literature has some limits: lower generalizability because of less representative samples, convenience sampling, and a higher risk of incorrect inference due to the frequent use of non-experimental designs. However, the number of Italian nurses carrying out research is increasing, and clinical practice is the most studied area.

  11. Results of external quality-assurance program for the National Atmospheric Deposition Program and National Trends Network during 1985

    USGS Publications Warehouse

    Brooks, M.H.; Schroder, L.J.; Willoughby, T.C.

    1988-01-01

    External quality assurance monitoring of the National Atmospheric Deposition Program (NADP) and National Trends Network (NTN) was performed by the U.S. Geological Survey during 1985. The monitoring consisted of three primary programs: (1) an intersite comparison program designed to assess the precision and accuracy of onsite pH and specific conductance measurements made by NADP and NTN site operators; (2) a blind audit sample program designed to assess the effect of routine field handling on the precision and bias of NADP and NTN wet deposition data; and (3) an interlaboratory comparison program designed to compare analytical data from the laboratory processing NADP and NTN samples with data produced by other laboratories routinely analyzing wet deposition samples and to provide estimates of individual laboratory precision. An average of 94% of the site operators participated in the four voluntary intersite comparisons during 1985. A larger percentage of participating site operators met the accuracy goal for specific conductance measurements (average, 87%) than for pH measurements (average, 67%). Overall precision was dependent on the actual specific conductance of the test solution and independent of the pH of the test solution. Data for the blind audit sample program indicated slight positive biases resulting from routine field handling for all analytes except specific conductance. These biases were not large enough to be significant for most data users. Data for the blind audit sample program also indicated that decreases in hydrogen ion concentration were accompanied by decreases in specific conductance. Precision estimates derived from the blind audit sample program indicate that the major source of uncertainty in wet deposition data is the routine field handling that each wet deposition sample receives. Results of the interlaboratory comparison program were similar to results of previous years' evaluations, indicating that the participating laboratories produced comparable data when they analyzed identical wet deposition samples, and that the laboratory processing NADP and NTN samples achieved the best analyte precision of the participating laboratories.

  12. Analytical Simulations of Energy-Absorbing Impact Spheres for a Mars Sample Return Earth Entry Vehicle

    NASA Technical Reports Server (NTRS)

    Billings, Marcus Dwight; Fasanella, Edwin L. (Technical Monitor)

    2002-01-01

    Nonlinear dynamic finite element simulations were performed to aid in the design of an energy-absorbing impact sphere for a passive Earth Entry Vehicle (EEV) that is a possible architecture for the Mars Sample Return (MSR) mission. The MSR EEV concept uses an entry capsule and an energy-absorbing impact sphere designed to contain and limit the acceleration of collected samples during Earth impact without a parachute. The impact sphere is composed of solid hexagonal and pentagonal foam-filled cells with hybrid graphite-epoxy/Kevlar composite cell walls. Collected Martian samples will fit inside a smaller spherical sample container at the center of the EEV's cellular structure. Analytical results obtained using MSC.Dytran were compared with results from impact tests performed at NASA Langley Research Center for impact velocities from 30 to 40 m/s. Acceleration, velocity, and deformation results compared well with the test results. The correlated finite element model was then used for simulations of various off-nominal impact scenarios. Off-nominal simulations at an impact velocity of 40 m/s included a rotated cellular structure impact onto a flat surface, a cellular structure impact onto an angled surface, and a cellular structure impact onto the corner of a step.

  13. Repeated significance tests of linear combinations of sensitivity and specificity of a diagnostic biomarker

    PubMed Central

    Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi

    2016-01-01

    A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
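
    Written out explicitly (standard definitions, not notation taken from the paper), Youden's index and a general weighted linear combination of sensitivity and specificity are:

    ```latex
    % Sensitivity Se, specificity Sp, weight 0 < w < 1:
    \[
      J \;=\; \mathrm{Se} + \mathrm{Sp} - 1 ,
      \qquad
      A_w \;=\; w\,\mathrm{Se} + (1-w)\,\mathrm{Sp} ,
    \]
    % and the sequential test concerns the null hypothesis that the
    % accuracy index is at or below the minimal level of acceptance.
    ```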

  14. Preliminary design polymeric materials experiment. [for space shuttles and Spacelab missions

    NASA Technical Reports Server (NTRS)

    Mattingly, S. G.; Rude, E. T.; Marshner, R. L.

    1975-01-01

    A typical Advanced Technology Laboratory mission flight plan was developed and used as a guideline for the identification of a number of experiment considerations. The experiment logistics, beginning with sample preparation and ending with sample analysis, are then overlaid on the mission in order to have a complete picture of the design requirements. The results of this preliminary design study fall into two categories. First, specific preliminary designs of experiment hardware that are adaptable to a variety of mission requirements. Second, identification of those mission considerations that affect hardware design and will require further definition prior to final design. Finally, a program plan is presented that will provide the necessary experiment hardware in a realistic time period to match the planned shuttle flights. A bibliography of all material reviewed and consulted but not specifically referenced is provided.

  15. A molecular identification system for grasses: a novel technology for forensic botany.

    PubMed

    Ward, J; Peakall, R; Gilmore, S R; Robertson, J

    2005-09-10

    Our present inability to rapidly, accurately and cost-effectively identify trace botanical evidence remains the major impediment to the routine application of forensic botany. Grasses are amongst the most likely plant species encountered as forensic trace evidence and have the potential to provide links between crime scenes and individuals or other vital crime scene information. We are designing a molecular DNA-based identification system for grasses consisting of several PCR assays that, like a traditional morphological taxonomic key, provide criteria that progressively identify an unknown grass sample to a given taxonomic rank. In a prior study of DNA sequences across 20 phylogenetically representative grass species, we identified a series of potentially informative indels in the grass mitochondrial genome. In this study we designed and tested five PCR assays spanning these indels and assessed the feasibility of these assays to aid identification of unknown grass samples. We confirmed that for our control set of 20 samples, on which the design of the PCR assays was based, the five primer combinations produced the expected results. Using these PCR assays in a 'blind test', we were able to identify 25 unknown grass samples with some restrictions. Species belonging to genera represented in our control set were all correctly identified to genus with one exception. Similarly, genera belonging to tribes in the control set were correctly identified to the tribal level. Finally, for those samples for which neither the tribal or genus specific PCR assays were designed, we could confidently exclude these samples from belonging to certain tribes and genera. The results confirmed the utility of the PCR assays and the feasibility of developing a robust full-scale usable grass identification system for forensic purposes.

  16. Statistical sampling methods for soils monitoring

    Treesearch

    Ann M. Abbott

    2010-01-01

    Development of the best sampling design to answer a research question should be an interactive venture between the land manager or researcher and statisticians, and is the result of answering various questions. A series of questions that can be asked to guide the researcher in making decisions that will arrive at an effective sampling plan are described, and a case...

  17. Conditional Optimal Design in Three- and Four-Level Experiments

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Borenstein, Michael

    2014-01-01

    The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…
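
    The truncated abstract builds on the classical two-level result, shown below for orientation: with a cost per cluster, a cost per subject, and a given intraclass correlation, one cluster size minimizes the variance of the treatment-effect estimate for a fixed total budget. This is the standard two-level formula, not the article's three- and four-level extension.

    ```latex
    % Two-level trial: c_1 = cost per cluster, c_2 = cost per subject,
    % \rho = intraclass correlation; variance-minimizing cluster size
    % for a fixed total budget:
    \[
      n^{*} \;=\; \sqrt{ \frac{c_1}{c_2} \cdot \frac{1-\rho}{\rho} } .
    \]
    ```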

  18. A compendium of controlled diffusion blades generated by an automated inverse design procedure

    NASA Technical Reports Server (NTRS)

    Sanz, Jose M.

    1989-01-01

    A set of sample cases was produced to test an automated design procedure developed at the NASA Lewis Research Center for the design of controlled diffusion blades. The range of application of the automated design procedure is documented. The results presented include characteristic compressor and turbine blade sections produced with the automated design code as well as various other airfoils produced with the base design method prior to the incorporation of the automated procedure.

  19. Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.

    PubMed

    Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian

    2014-01-01

    In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
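
    A compact sketch of the greedy incremental construction mentioned in the abstract, for a single shell (pool size and scheme size are illustrative): starting from a dense random pool of unit vectors, repeatedly add the candidate whose minimal angle to the chosen set, measured with antipodal symmetry, is largest.

    ```python
    import numpy as np

    def greedy_spherical_code(n_dirs, n_pool=5000, seed=0):
        rng = np.random.default_rng(seed)
        pool = rng.normal(size=(n_pool, 3))
        pool /= np.linalg.norm(pool, axis=1, keepdims=True)

        chosen = [pool[0]]
        # Angle between axes: arccos(|u.v|), since +v and -v are equivalent
        # for diffusion gradients.
        min_ang = np.arccos(np.clip(np.abs(pool @ pool[0]), 0, 1))
        for _ in range(n_dirs - 1):
            idx = int(np.argmax(min_ang))        # farthest from the chosen set
            chosen.append(pool[idx])
            ang = np.arccos(np.clip(np.abs(pool @ pool[idx]), 0, 1))
            min_ang = np.minimum(min_ang, ang)
        return np.array(chosen)

    dirs = greedy_spherical_code(30)
    cosines = np.abs(dirs @ dirs.T) - np.eye(len(dirs))
    print("min angular separation (deg):",
          np.degrees(np.arccos(min(cosines.max(), 1.0))))
    ```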

  20. Sampling design for spatially distributed hydrogeologic and environmental processes

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1992-01-01

    A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for physical sciences is its rationalization of correlations in spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range in subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations are necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas. Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions. © 1992.

  1. Water quality monitoring of Sweetwater and Loveland reservoirs--Phase one results 1998-1999

    USGS Publications Warehouse

    Majewski, Michael S.; Sidhu, Jagdeep S.; Mendez, Gregory O.

    2002-01-01

    In 1998, the U.S. Geological Survey began a study to assess the overall health of the watershed feeding the Sweetwater Reservoir in southern San Diego County, California. The study focused on monitoring for organic chemical contamination and the effects of construction and operation of State Route 125 on water quality. Three environmental compartments (air, water, and bed sediments) are being sampled regularly for chemical contaminants, including volatile organic compounds, polynuclear aromatic hydrocarbons, polychlorinated biphenyls, pesticides, and major and trace elements. The study is divided into two phases. Phase I sampling is designed to establish baseline conditions for target compounds in terms of detection frequency and concentration in air, water, and bed sediments. Phase II sampling will continue at the established monitoring sites during and after construction of State Route 125 to assess the chemical impact on water quality in the reservoir resulting from land-use changes and development in the watershed. This report describes the study design and the sampling and analytical methods, and presents the data for the first year of the study, September 1998 to September 1999.

  2. Detection of Bovine and Porcine Adenoviruses for Tracing the Source of Fecal Contamination

    PubMed Central

    Maluquer de Motes, Carlos; Clemente-Casares, Pilar; Hundesa, Ayalkibet; Martín, Margarita; Girones, Rosina

    2004-01-01

    In this study, a molecular procedure for the detection of adenoviruses of animal origin was developed to evaluate the level of excretion of these viruses by swine and cattle and to design a test to facilitate the tracing of specific sources of environmental viral contamination. Two sets of oligonucleotides were designed, one to detect porcine adenoviruses and the other to detect bovine and ovine adenoviruses. The specificity of the assays was assessed in 31 fecal samples and 12 sewage samples that were collected monthly during a 1-year period. The data also provided information on the environmental prevalence of animal adenoviruses. Porcine adenoviruses were detected in 17 of 24 (70%) pools of swine samples studied, with most isolates being closely related to serotype 3. Bovine adenoviruses were present in 6 of 8 (75%) pools studied, with strains belonging to the genera Mastadenovirus and Atadenovirus and being similar to bovine adenoviruses of types 2, 4, and 7. These sets of primers produced negative results in nested PCR assays when human adenovirus controls and urban-sewage samples were tested. Likewise, the sets of primers previously designed for detection of human adenovirus also produced negative results with animal adenoviruses. These results indicate the importance of further studies to evaluate the usefulness of these tests to trace the source of fecal contamination in water and food and for environmental studies. PMID:15006765

  3. Detection of bovine and porcine adenoviruses for tracing the source of fecal contamination.

    PubMed

    Maluquer de Motes, Carlos; Clemente-Casares, Pilar; Hundesa, Ayalkibet; Martín, Margarita; Girones, Rosina

    2004-03-01

    In this study, a molecular procedure for the detection of adenoviruses of animal origin was developed to evaluate the level of excretion of these viruses by swine and cattle and to design a test to facilitate the tracing of specific sources of environmental viral contamination. Two sets of oligonucleotides were designed, one to detect porcine adenoviruses and the other to detect bovine and ovine adenoviruses. The specificity of the assays was assessed in 31 fecal samples and 12 sewage samples that were collected monthly during a 1-year period. The data also provided information on the environmental prevalence of animal adenoviruses. Porcine adenoviruses were detected in 17 of 24 (70%) pools of swine samples studied, with most isolates being closely related to serotype 3. Bovine adenoviruses were present in 6 of 8 (75%) pools studied, with strains belonging to the genera Mastadenovirus and Atadenovirus and being similar to bovine adenoviruses of types 2, 4, and 7. These sets of primers produced negative results in nested PCR assays when human adenovirus controls and urban-sewage samples were tested. Likewise, the sets of primers previously designed for detection of human adenovirus also produced negative results with animal adenoviruses. These results indicate the importance of further studies to evaluate the usefulness of these tests to trace the source of fecal contamination in water and food and for environmental studies.

  4. Results of Hg speciation testing on DWPF SMECT-8, OGCT-1, AND OGCT-2 samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, C.

    2016-02-22

    The Savannah River National Laboratory (SRNL) was tasked with preparing and shipping samples for Hg speciation by Eurofins Frontier Global Sciences, Inc. in Seattle, WA on behalf of the Savannah River Remediation (SRR) Mercury Task Team. The sixteenth shipment of samples was designated to include a Defense Waste Processing Facility (DWPF) Slurry Mix Evaporator Condensate Tank (SMECT) sample from Sludge Receipt and Adjustment Tank (SRAT) Batch 738 processing and two Off-Gas Condensate Tank (OGCT) samples, one following Batch 736 and one following Batch 738. The DWPF sample designations for the three samples analyzed are provided. The Batch 738 ‘End of SME Cycle’ SMECT sample was taken at the conclusion of Slurry Mix Evaporator (SME) operations for this batch and represents the fourth SMECT sample examined from Batch 738. Batch 738 experienced a sludge slurry carryover event, which introduced sludge solids to the SMECT that were particularly evident in the SMECT-5 sample, but less evident in the ‘End of SME Cycle’ SMECT-8 sample.

  5. Integrating Public Perspectives in Sample Return Planning

    NASA Technical Reports Server (NTRS)

    Race, Margaret S.; MacGregor, G.

    2001-01-01

    Planning for extraterrestrial sample returns, whether from Mars or other solar system bodies, must be done in a way that integrates planetary protection concerns with the usual mission technical and scientific considerations. Understanding and addressing legitimate societal concerns about the possible risks of sample return will be a critical part of the public decision making process ahead. This paper presents the results of two studies, one with lay audiences, the other with expert microbiologists, designed to gather information on attitudes and concerns about sample return risks and planetary protection. Focus group interviews with lay subjects, using generic information about Mars sample return and a preliminary environmental impact assessment, were designed to obtain an indication of how the factual content is perceived and understood by the public. A research survey of microbiologists gathered information on experts' views and attitudes about sample return, risk management approaches and space exploration risks. These findings, combined with earlier research results on risk perception, will be useful in identifying levels of concern and potential conflicts in understanding between experts and the public about sample return risks. The information will be helpful in guiding development of the environmental impact statement and also has applicability to proposals for sample return from other solar system bodies where scientific uncertainty about extraterrestrial life may persist at the time of mission planning.

  6. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    DOE PAGES

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...

    2017-12-27

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees that the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
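
    The record above names the two ingredients of TEAD, a hybrid score and a stopping rule, without giving formulas. The fragment below is a loose, hypothetical rendering of such a hybrid score (a space-filling distance term plus a first-order Taylor error proxy); the weighting, the finite-difference gradients and the candidate pool are all assumptions of the sketch, and the published TEAD algorithm should be consulted for the actual definitions.

    ```python
    import numpy as np

    def tead_like_next(f, X, candidates, w=0.5, eps=1e-3):
        """Pick the next sample by a hybrid score: a space-filling term
        (distance to the nearest existing sample) plus a first-order
        Taylor term (finite-difference gradient there times that
        distance)."""
        d = np.linalg.norm(candidates[:, None, :] - X[None, :, :], axis=2)
        nearest, dist = d.argmin(axis=1), d.min(axis=1)
        grads = np.array([np.linalg.norm(
            [(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
             for e in np.eye(X.shape[1])]) for x in X])
        taylor = grads[nearest] * dist
        score = w * dist / dist.max() + (1 - w) * taylor / taylor.max()
        return candidates[np.argmax(score)]

    f = lambda x: np.sin(3 * x[0]) + x[1] ** 2
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(5, 2))            # initial design
    cand = rng.uniform(-1, 1, size=(500, 2))       # candidate pool
    print("next sample point:", tead_like_next(f, X, cand))
    ```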

  7. Simulation-assisted design of microfluidic sample traps for optimal trapping and culture of non-adherent single cells, tissues, and spheroids.

    PubMed

    Rousset, Nassim; Monet, Frédéric; Gervais, Thomas

    2017-03-21

    This work focuses on modelling the design and operation of "microfluidic sample traps" (MSTs). MSTs regroup a widely used class of microdevices that incorporate wells, recesses or chambers adjacent to a channel to individually trap, culture and/or release submicroliter 3D tissue samples ranging from simple cell aggregates and spheroids, to ex vivo tissue samples and other submillimetre-scale tissue models. Numerous MST designs employing various trapping mechanisms have been proposed in the literature, spurring the development of 3D tissue models for drug discovery and personalized medicine. Yet a general framework for optimizing trapping stability, trapping time, shear stress, and sample metabolism has been lacking. Herein, the effects of hydrodynamics and diffusion-reaction on tissue viability and device operation are investigated using analytical and finite element methods with systematic parametric sweeps over independent design variables chosen to correspond to the four design degrees of freedom. Combining different results, we show that, for a spherical tissue of diameter d < 500 μm, the simplest, closest-to-optimal trap shape is a cube of dimensions w equal to twice the tissue diameter: w = 2d. Furthermore, to sustain tissues without perfusion, the available medium volume per trap needs to be 100× the tissue volume to ensure optimal metabolism for at least 24 hours.
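
    The two closing design rules translate directly into a sizing helper. A minimal sketch, assuming only the stated rules of thumb (cubic trap with w = 2d and at least 100 times the tissue volume of medium) and nothing else from the paper:

    ```python
    import math

    def trap_design(tissue_diameter_um):
        """Apply the stated rules for a spherical tissue of diameter
        d < 500 um: a cubic trap of width w = 2d, and at least 100x the
        tissue volume of medium per trap for ~24 h without perfusion."""
        d = tissue_diameter_um
        tissue_vol_nl = math.pi * d ** 3 / 6 / 1e6    # um^3 -> nL
        return {"trap_width_um": 2 * d, "min_medium_nl": 100 * tissue_vol_nl}

    print(trap_design(300))  # 300-um spheroid -> 600-um trap, ~1400 nL medium
    ```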

  8. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees that the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.

  9. Experimental Design in Clinical 'Omics Biomarker Discovery.

    PubMed

    Forshed, Jenny

    2017-11-03

    This tutorial highlights some issues in the experimental design of clinical 'omics biomarker discovery: how to avoid bias and obtain quantities from biochemical analyses that are as close to the truth as possible, and how to select samples to improve the chance of answering the clinical question at issue. This includes the importance of defining the clinical aim and end point, knowing the variability in the results, randomization of samples, sample size, statistical power, and how to avoid confounding factors by including clinical data in the sample selection; that is, how to avoid unpleasant surprises at the point of statistical analysis. The aim of this tutorial is to help translational clinical and preclinical biomarker candidate research and to improve the validity and potential of future biomarker candidate findings.
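
    One concrete precaution the tutorial points to, avoiding confounding between clinical covariates and batch or arm, can be illustrated with stratified randomization. A minimal sketch; the strata and sample fields are invented for illustration:

    ```python
    import random
    from collections import defaultdict

    def stratified_randomization(samples, strata_key, arms=("A", "B"), seed=7):
        """Assign samples to analysis batches/arms within strata (e.g. sex,
        case status, sampling site) so that clinical covariates do not end
        up confounded with batch."""
        rng = random.Random(seed)
        groups = defaultdict(list)
        for s in samples:
            groups[strata_key(s)].append(s)
        assignment = {}
        for stratum, members in groups.items():
            rng.shuffle(members)                     # random order in stratum
            for i, s in enumerate(members):
                assignment[s["id"]] = arms[i % len(arms)]
        return assignment

    samples = [{"id": i, "sex": "F" if i % 2 else "M", "case": i < 10}
               for i in range(20)]
    print(stratified_randomization(samples, lambda s: (s["sex"], s["case"])))
    ```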

  10. Data for the geochemical investigation of UMTRAP designated site at Durango, Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markos, G.; Bush, K.J.

    1983-09-01

    This report contains the methods of collection and the data used in the geochemical investigation of the former tailings and raffinate pond sites at Durango, Colorado. The methods of data interpretation and results of the investigation are described in the report, ''Geochemical Investigation of UMTRAP Designated Site at Durango, Colorado''. Data are from a one-time sampling of waters and solid material from the background, the area adjacent to the site, and the site. The solid samples were water-extracted to remove easily soluble salts and acid-extracted to remove carbonates and hydroxides. The waters, extracts, and solid samples were analyzed for selected major and trace elements. A few samples were analyzed for radioisotopes.

  11. Reduction of Sample Size Requirements by Bilateral Versus Unilateral Research Designs in Animal Models for Cartilage Tissue Engineering

    PubMed Central

    Orth, Patrick; Zurakowski, David; Alini, Mauro; Cucchiarini, Magali

    2013-01-01

    Advanced tissue engineering approaches for articular cartilage repair in the knee joint rely on translational animal models. In these investigations, cartilage defects may be established either in one joint (unilateral design) or in both joints of the same animal (bilateral design). We hypothesized that a lower intraindividual variability following the bilateral strategy would reduce the number of required joints. Standardized osteochondral defects were created in the trochlear groove of 18 rabbits. In 12 animals, defects were produced unilaterally (unilateral design; n=12 defects), while defects were created bilaterally in 6 animals (bilateral design; n=12 defects). After 3 weeks, osteochondral repair was evaluated histologically applying an established grading system. Based on intra- and interindividual variabilities, required sample sizes for the detection of discrete differences in the histological score were determined for both study designs (α=0.05, β=0.20). Coefficients of variation (%CV) of the total histological score values were 1.9-fold higher with the unilateral design than with the bilateral approach (26% versus 14% CV). The resulting numbers of joints needed were always higher for the unilateral design, resulting in an up to 3.9-fold increase in the required number of experimental animals. This effect was most pronounced for the detection of small effect sizes and when estimating large standard deviations. The data underline the possible benefit of bilateral study designs for the decrease of sample size requirements for certain investigations in articular cartilage research. These findings might also be transferred to other scoring systems, defect types, or translational animal models in the field of cartilage tissue engineering. PMID:23510128
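
    A back-of-envelope check of the reported gain, using a normal-approximation sample-size formula with the two observed CVs; the 20% detectable difference is an assumed input, not a figure from the study:

    ```python
    from scipy import stats

    def joints_per_group(cv, rel_diff, alpha=0.05, power=0.80):
        """Normal-approximation sample size per group to detect a relative
        mean difference `rel_diff` when scores vary with coefficient of
        variation `cv` (two-sided test)."""
        za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
        return 2 * ((za + zb) * cv / rel_diff) ** 2

    for label, cv in [("unilateral", 0.26), ("bilateral", 0.14)]:
        print(label, round(joints_per_group(cv, rel_diff=0.20)))
    ```

    Note that (26/14)^2 is roughly 3.4, broadly consistent with the up to 3.9-fold increase reported above.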

  12. Systematic review finds major deficiencies in sample size methodology and reporting for stepped-wedge cluster randomised trials

    PubMed Central

    Martin, James; Taljaard, Monica; Girling, Alan; Hemming, Karla

    2016-01-01

    Background Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. Methods We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. Results We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5–6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportions reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor with only 3 out of 24 (13%) of these studies allowing for repeated measures. Discussion The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting and methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs. PMID:26846897
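
    The clustering allowance that a quarter of the reviewed calculations omitted is, at minimum, the standard design effect. A sketch of that minimal inflation with invented inputs (stepped-wedge calculations need more than this, as the review stresses):

    ```python
    def inflate_for_clustering(n_individual, cluster_size, icc):
        """Multiply an individually randomised sample size by the design
        effect 1 + (m - 1) * ICC; stepped-wedge designs additionally need
        allowance for time effects and repeated measures, which this
        simple factor does not capture."""
        return n_individual * (1 + (cluster_size - 1) * icc)

    print(inflate_for_clustering(n_individual=400, cluster_size=20, icc=0.05))
    ```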

  13. [Design and evaluation of DAVIH VIH-2].

    PubMed

    Martín Alfonso, Dayamí; Silva Cabrera, Eladio; Pérez Guevara, María T; Díaz Herrera, Dervel F; Romero Martínez, Kenia; Díaz Torres, Héctor M; Lubián Caballero, Ana L; Ruiz Gutiérrez, Nancy; Ortiz Losada, Eva

    2007-01-01

    The results of the design and evaluation of DAVIH VIH-2 diagnosing system, an indirect Elisa for screening of HIV-2 antibodies, which uses a HIV-2 glycoprotein gp36 synthetic peptide in its solid phase, were exposed. In the system evaluation using WHO reference panels, 100% sensitivity, 99,81% specificity, 99,81% efficacy and very good concordance level (kappa = 0.978) were attained. Serum samples of 959 individuals with undetermined or negative results to the HIV-1 antibodies confirmation (DAVIH blot) were evaluated by the DAVIH VIH-2 system. Twenty four samples were reactive, six of which had confirmed HIV-2 antibodies. These results allowed recommending the introduction of this diagnostic kit in the HIV infection diagnosing algorithm in Cuba.

  14. Sampling designs for contaminant temporal trend analyses using sedentary species exemplified by the snails Bellamya aeruginosa and Viviparus viviparus.

    PubMed

    Yin, Ge; Danielsson, Sara; Dahlberg, Anna-Karin; Zhou, Yihui; Qiu, Yanling; Nyberg, Elisabeth; Bignert, Anders

    2017-10-01

    Environmental monitoring typically assumes samples and sampling activities to be representative of the population being studied. Given a limited budget, an appropriate sampling strategy is essential to support detection of temporal trends of contaminants. In the present study, based on real chemical analysis data on polybrominated diphenyl ethers in snails collected from five subsites in Tianmu Lake, computer simulation is performed to evaluate three sampling strategies by estimating the sample size required to detect an annual change of 5% with a statistical power of 80% and 90% at a significance level of 5%. The results showed that sampling from an arbitrarily selected sampling spot is the worst strategy, requiring many more individual analyses to achieve the above-mentioned criteria compared with the other two approaches. A fixed sampling site requires the smallest sample size but may not be representative of the intended study object (e.g., a lake) and is also sensitive to changes at that particular sampling site. In contrast, sampling at multiple sites along the shore each year, and using pooled samples when the cost to collect and prepare individual specimens is much lower than the cost of chemical analysis, would be the most robust and cost-efficient strategy in the long run. Using statistical power as the criterion, the results demonstrated quantitatively the consequences of various sampling strategies and can guide users with respect to the sample sizes required, depending on sampling design, for long-term monitoring programs. Copyright © 2017 Elsevier Ltd. All rights reserved.
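
    The simulation idea in this abstract can be sketched compactly: generate noisy log-linear declines and count how often the trend is detected. The fragment below is an illustrative reconstruction, not the authors' code; the starting level, lognormal noise model, and replication count are assumptions.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def trend_power(n_per_year, years=10, annual_change=0.05, cv=0.5,
                    alpha=0.05, reps=2000):
        """Monte Carlo power of a regression of log-concentration on year
        to detect a steady `annual_change` decline, with lognormal
        between-sample noise of coefficient of variation `cv`."""
        t = np.repeat(np.arange(years), n_per_year).astype(float)
        sigma = np.sqrt(np.log(1 + cv ** 2))     # sd on the log scale
        sxx = np.sum((t - t.mean()) ** 2)
        hits = 0
        for _ in range(reps):
            y = np.log(100) + t * np.log(1 - annual_change) \
                + rng.normal(0, sigma, t.size)
            slope, intercept = np.polyfit(t, y, 1)
            resid = y - (slope * t + intercept)
            se = np.sqrt(resid @ resid / (t.size - 2) / sxx)
            hits += 2 * stats.t.sf(abs(slope / se), t.size - 2) < alpha
        return hits / reps

    print(trend_power(n_per_year=8))   # fraction of runs detecting the trend
    ```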

  15. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
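
    As a small worked illustration of design criteria built on the Fisher information matrix, the sketch below scores two candidate sampling-time grids for the Verhulst-Pearl logistic model by the D-criterion (determinant) and E-criterion (smallest eigenvalue); the parameter values, noise level, and grids are assumptions for illustration, not taken from the paper.

    ```python
    import numpy as np

    def logistic(t, K, r, x0):
        """Verhulst-Pearl logistic growth curve."""
        return K / (1 + (K / x0 - 1) * np.exp(-r * t))

    def fim(times, theta=(17.5, 0.7, 0.1), sigma=0.1, h=1e-6):
        """Fisher information matrix under i.i.d. Gaussian error, with
        parameter sensitivities taken by central finite differences."""
        theta = np.asarray(theta, float)
        S = np.empty((len(times), theta.size))
        for j in range(theta.size):
            tp, tm = theta.copy(), theta.copy()
            tp[j] += h
            tm[j] -= h
            S[:, j] = (logistic(times, *tp) - logistic(times, *tm)) / (2 * h)
        return S.T @ S / sigma ** 2

    for name, ts in [("uniform 0-25", np.linspace(0, 25, 15)),
                     ("early 0-10", np.linspace(0, 10, 15))]:
        F = fim(ts)
        print(name, "D-criterion:", np.linalg.det(F),
              "E-criterion:", np.linalg.eigvalsh(F)[0])
    ```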

  16. 78 FR 60321 - Biweekly Notice; Applications and Amendments to Facility Operating Licenses and Combined Licenses...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-01

    ... sample selection. A steam generator tube rupture (SGTR) event is one of the design basis accidents that... in the design basis accident analysis. The proposed change will not cause the consequences of a SGTR... changes to the plant design basis or postulated accidents resulting from potential tube degradation. The...

  17. The Monitoring the Future Project After Thirty-Two Years: Design and Procedures. Monitoring the Future Occasional Paper 64

    ERIC Educational Resources Information Center

    Bachman, Jerald G.; Johnston, Lloyd D.; O'Malley, Patrick M.; Schulenberg, John E.

    2006-01-01

    This occasional paper updates and extends earlier papers in the Monitoring the Future project. It provides a detailed description of the project's design, including sampling design, data collection procedures, measurement content, and questionnaire format. It attempts to include sufficient information for others who wish to evaluate the results,…

  18. An Integrated Tool for System Analysis of Sample Return Vehicles

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.; Maddock, Robert W.; Winski, Richard G.

    2012-01-01

    The next important step in space exploration is the return of sample materials from extraterrestrial locations to Earth for analysis. Most mission concepts that return sample material to Earth share one common element: an Earth entry vehicle. The analysis and design of entry vehicles is multidisciplinary in nature, requiring the application of mass sizing, flight mechanics, aerodynamics, aerothermodynamics, thermal analysis, structural analysis, and impact analysis tools. Integration of a multidisciplinary problem is a challenging task; the execution process and data transfer among disciplines should be automated and consistent. This paper describes an integrated analysis tool for the design and sizing of an Earth entry vehicle. The current tool includes the following disciplines: mass sizing, flight mechanics, aerodynamics, aerothermodynamics, and impact analysis tools. Python and Java languages are used for integration. Results are presented and compared with the results from previous studies.

  19. Active Learning Using Hint Information.

    PubMed

    Li, Chun-Liang; Ferng, Chun-Sung; Lin, Hsuan-Tien

    2015-08-01

    The abundance of real-world data and the limited labeling budget call for active learning, an important learning paradigm for reducing human labeling efforts. Many recently developed active learning algorithms consider both uncertainty and representativeness when making querying decisions. However, exploiting representativeness together with uncertainty usually requires tackling sophisticated and challenging learning tasks, such as clustering. In this letter, we propose a new active learning framework, called hinted sampling, which takes both uncertainty and representativeness into account in a simpler way. We design a novel active learning algorithm within the hinted sampling framework with an extended support vector machine. Experimental results validate that the novel active learning algorithm can achieve better and more stable performance than that achieved by state-of-the-art algorithms. We also show that the hinted sampling framework allows improving another active learning algorithm designed from the transductive support vector machine.

  20. A post hoc evaluation of a sample size re-estimation in the Secondary Prevention of Small Subcortical Strokes study.

    PubMed

    McClure, Leslie A; Szychowski, Jeff M; Benavente, Oscar; Hart, Robert G; Coffey, Christopher S

    2016-10-01

    The use of adaptive designs has been increasing in randomized clinical trials. Sample size re-estimation is a type of adaptation in which nuisance parameters are estimated at an interim point in the trial and the sample size is re-computed based on these estimates. The Secondary Prevention of Small Subcortical Strokes study was a randomized clinical trial assessing the impact of single- versus dual-antiplatelet therapy and control of systolic blood pressure to a higher (130-149 mmHg) versus lower (<130 mmHg) target on recurrent stroke risk in a two-by-two factorial design. A sample size re-estimation was performed during the Secondary Prevention of Small Subcortical Strokes study, resulting in an increase in the planned sample size from 2500 to 3020, and we sought to determine the impact of the sample size re-estimation on the study results. We assessed the results of the primary efficacy and safety analyses with the full 3020 patients and compared them to the results that would have been observed had randomization ended with 2500 patients. The primary efficacy outcome considered was recurrent stroke, and the primary safety outcomes were major bleeds and death. We computed incidence rates for the efficacy and safety outcomes and used Cox proportional hazards models to examine the hazard ratios for each of the two treatment interventions (i.e. the antiplatelet and blood pressure interventions). In the antiplatelet intervention, the hazard ratio was not materially modified by increasing the sample size, nor did the conclusions regarding the efficacy of mono- versus dual-therapy change: there was no difference in the effect of dual- versus monotherapy on the risk of recurrent stroke (n = 3020: HR (95% confidence interval) 0.92 (0.72, 1.2), p = 0.48; n = 2500: HR 1.0 (0.78, 1.3), p = 0.85). With respect to the blood pressure intervention, increasing the sample size resulted in less certainty in the results, as the hazard ratio for the higher versus lower systolic blood pressure target approached, but did not achieve, statistical significance with the larger sample (n = 3020: HR 0.81 (0.63, 1.0), p = 0.089; n = 2500: HR 0.89 (0.68, 1.17), p = 0.40). The results from the safety analyses were similar with 3020 and 2500 patients for both study interventions. Other trial-related factors, such as contracts, finances, and study management, were impacted as well. Adaptive designs can have benefits in randomized clinical trials, but do not always result in significant findings. The impact of adaptive designs should be measured in terms of both trial results and practical issues related to trial management. More post hoc analyses of study adaptations will lead to better understanding of the balance between the benefits and the costs. © The Author(s) 2016.
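
    For readers unfamiliar with the mechanics, a generic nuisance-parameter re-estimation step for an event-driven trial can be sketched as follows; this uses Schoenfeld's events formula with invented inputs and is not the procedure used in the Secondary Prevention of Small Subcortical Strokes study.

    ```python
    import math
    from scipy import stats

    def reestimate_patients(interim_event_rate, hr_alt=0.75,
                            alpha=0.05, power=0.80):
        """Blinded re-estimation sketch: keep the design hazard ratio,
        take the pooled event rate observed at the interim look, and
        recompute the required number of patients from Schoenfeld's
        events formula for 1:1 allocation."""
        za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
        events = 4 * (za + zb) ** 2 / math.log(hr_alt) ** 2
        return math.ceil(events / interim_event_rate)

    print(reestimate_patients(0.10))  # a lower event rate drives n upward
    ```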

  1. Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.

    PubMed

    Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L

    2012-12-01

    Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.

  2. A field test of three LQAS designs to assess the prevalence of acute malnutrition.

    PubMed

    Deitchler, Megan; Valadez, Joseph J; Egge, Kari; Fernandez, Soledad; Hennigan, Mary

    2007-08-01

    The conventional method for assessing the prevalence of Global Acute Malnutrition (GAM) in emergency settings is the 30 x 30 cluster-survey. This study describes alternative approaches: three Lot Quality Assurance Sampling (LQAS) designs to assess GAM. The LQAS designs were field-tested and their results compared with those from a 30 x 30 cluster-survey. Computer simulations confirmed that small clusters instead of a simple random sample could be used for LQAS assessments of GAM. Three LQAS designs were developed (33 x 6, 67 x 3, Sequential design) to assess GAM thresholds of 10, 15 and 20%. The designs were field-tested simultaneously with a 30 x 30 cluster-survey in Siraro, Ethiopia during June 2003. Using a nested study design, anthropometric, morbidity and vaccination data were collected on all children 6-59 months in sampled households. Hypothesis tests about GAM thresholds were conducted for each LQAS design. Point estimates were obtained for the 30 x 30 cluster-survey and the 33 x 6 and 67 x 3 LQAS designs. Hypothesis tests showed GAM as <10% for the 33 x 6 design and GAM as > or =10% for the 67 x 3 and Sequential designs. Point estimates for the 33 x 6 and 67 x 3 designs were similar to those of the 30 x 30 cluster-survey for GAM (6.7%, CI = 3.2-10.2%; 8.2%, CI = 4.3-12.1%; 7.4%, CI = 4.8-9.9%) and all other indicators. The CIs for the LQAS designs were only slightly wider than the CIs for the 30 x 30 cluster-survey; yet the LQAS designs required substantially less time to administer. The LQAS designs provide statistically appropriate alternatives to the more time-consuming 30 x 30 cluster-survey. However, additional field-testing is needed using independent samples rather than a nested study design.
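
    The decision rules behind such LQAS designs are simple binomial classifications, and their error rates can be tabulated directly. A sketch for an n = 198 (33 x 6) sample comparing a 10% threshold against a 20% alternative; the candidate decision values and prevalence pair are illustrative, not the study's chosen rule:

    ```python
    from scipy import stats

    def lqas_errors(n, d, p_low=0.10, p_high=0.20):
        """Operating characteristics of the rule 'classify GAM as >=
        threshold when more than d of n children are malnourished':
        alpha is the chance of a false alarm at true prevalence p_low,
        beta the chance of missing a true prevalence of p_high."""
        alpha = 1 - stats.binom.cdf(d, n, p_low)
        beta = stats.binom.cdf(d, n, p_high)
        return alpha, beta

    for d in range(22, 31):          # candidate decision values for n = 198
        a, b = lqas_errors(n=198, d=d)
        print(d, round(a, 3), round(b, 3))
    ```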

  3. Incorporating the sampling design in weighting adjustments for panel attrition

    PubMed Central

    Chen, Qixuan; Gelman, Andrew; Tracy, Melissa; Norris, Fran H.; Galea, Sandro

    2015-01-01

    We review weighting adjustment methods for panel attrition and suggest approaches for incorporating design variables, such as strata, clusters and baseline sample weights. Design information can typically be included in attrition analysis using multilevel models or decision tree methods such as the CHAID algorithm. We use simulation to show that these weighting approaches can effectively reduce the bias in survey estimates that would occur from omitting the effect of design factors on attrition, while keeping the resulting weights stable. We provide a step-by-step illustration of creating weighting adjustments for panel attrition in the Galveston Bay Recovery Study, a survey of residents in a community following a disaster, and provide suggestions to analysts in decision making about weighting approaches. PMID:26239405
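
    A common way to implement such adjustments is a response-propensity model that includes the design variables; the paper itself discusses multilevel models and CHAID, so the logistic sketch below is a simplified stand-in on simulated data, not the authors' procedure.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    stratum = rng.integers(0, 4, n)       # design variable: sampling stratum
    base_wt = rng.uniform(1, 3, n)        # baseline sampling weight
    age = rng.normal(45, 12, n)
    # attrition depends on a design factor and a covariate
    p_stay = 1 / (1 + np.exp(-(0.5 - 0.3 * stratum + 0.02 * (age - 45))))
    stayed = rng.random(n) < p_stay

    # response-propensity model including the design variables
    X = np.column_stack([stratum, base_wt, age])
    phat = LogisticRegression().fit(X, stayed).predict_proba(X)[:, 1]
    adj_wt = np.where(stayed, base_wt / phat, 0.0)  # attrition-adjusted weight
    print("weight CV:", adj_wt[stayed].std() / adj_wt[stayed].mean())
    ```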

  4. An anthropometric analysis of Korean male helicopter pilots for helicopter cockpit design.

    PubMed

    Lee, Wonsup; Jung, Kihyo; Jeong, Jeongrim; Park, Jangwoon; Cho, Jayoung; Kim, Heeeun; Park, Seikwon; You, Heecheon

    2013-01-01

    This study measured 21 anthropometric dimensions (ADs) of 94 Korean male helicopter pilots in their 20s to 40s and compared them with corresponding measurements of Korean male civilians and the US Army male personnel. The ADs and the sample size of the anthropometric survey were determined by a four-step process: (1) selection of ADs related to helicopter cockpit design, (2) evaluation of the importance of each AD, (3) calculation of required sample sizes for selected precision levels and (4) determination of an appropriate sample size by considering both the AD importance evaluation results and the sample size requirements. The anthropometric comparison reveals that the Korean helicopter pilots are larger (ratio of means = 1.01-1.08) and less dispersed (ratio of standard deviations = 0.71-0.93) than the Korean male civilians and that they are shorter in stature (0.99), have shorter upper limbs (0.89-0.96) and lower limbs (0.93-0.97), but are taller on sitting height, sitting eye height and acromial height (1.01-1.03), and less dispersed (0.68-0.97) than the US Army personnel. The anthropometric characteristics of Korean male helicopter pilots were compared with those of Korean male civilians and US Army male personnel. The sample size determination process and the anthropometric comparison results presented in this study are useful to design an anthropometric survey and a helicopter cockpit layout, respectively.
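
    Step 3 of the process above, computing the sample size required for a given precision, follows the usual normal-theory formula n = (z * SD / E)^2. A minimal sketch with invented stature values:

    ```python
    from scipy import stats

    def n_for_precision(sd, margin, confidence=0.95):
        """Sample size so that the mean of an anthropometric dimension is
        estimated to within +/- `margin` (same units as `sd`)."""
        z = stats.norm.ppf(0.5 + confidence / 2)
        return int((z * sd / margin) ** 2) + 1

    print(n_for_precision(sd=60, margin=12))  # e.g. stature in mm -> n = 97
    ```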

  5. Sampling design for long-term regional trends in marine rocky intertidal communities

    USGS Publications Warehouse

    Irvine, Gail V.; Shelley, Alice

    2013-01-01

    Probability-based designs reduce bias and allow inference of results to the pool of sites from which they were chosen. We developed and tested probability-based designs for monitoring marine rocky intertidal assemblages at Glacier Bay National Park and Preserve (GLBA), Alaska. A multilevel design was used that varied in scale and inference. The levels included aerial surveys, extensive sampling of 25 sites, and more intensive sampling of 6 sites. Aerial surveys of a subset of intertidal habitat indicated that the original target habitat of bedrock-dominated sites with slope ≤30° was rare. This unexpected finding illustrated one value of probability-based surveys and led to a shift in the target habitat type to include steeper, more mixed rocky habitat. Subsequently, we evaluated the statistical power of different sampling methods and sampling strategies to detect changes in the abundances of the predominant sessile intertidal taxa: barnacles Balanomorpha, the mussel Mytilus trossulus, and the rockweed Fucus distichus subsp. evanescens. There was greatest power to detect trends in Mytilus and lesser power for barnacles and Fucus. Because of its greater power, the extensive, coarse-grained sampling scheme was adopted in subsequent years over the intensive, fine-grained scheme. The sampling attributes that had the largest effects on power included sampling of “vertical” line transects (vs. horizontal line transects or quadrats) and increasing the number of sites. We also evaluated the power of several management-set parameters. Given equal sampling effort, sampling more sites fewer times had greater power. The information gained through intertidal monitoring is likely to be useful in assessing changes due to climate, including ocean acidification; invasive species; trampling effects; and oil spills.

  6. Sampled-data-based consensus and containment control of multiple harmonic oscillators: A motion-planning approach

    NASA Astrophysics Data System (ADS)

    Liu, Yongfang; Zhao, Yu; Chen, Guanrong

    2016-11-01

    This paper studies the distributed consensus and containment problems for a group of harmonic oscillators with a directed communication topology. First, for consensus without a leader, a class of distributed consensus protocols is designed by using motion planning and Pontryagin's principle. The proposed protocol only requires relative information measurements at the sampling instants, without requiring information exchange over the sampled interval. By using stability theory and the properties of stochastic matrices, it is proved that the distributed consensus problem can be solved in the motion planning framework. Second, for the case with multiple leaders, a class of distributed containment protocols is developed for the followers such that their positions and velocities can ultimately converge to the convex hull formed by those of the leaders. Compared with the existing consensus algorithms, a remarkable advantage of the proposed sampled-data-based protocols is that the sampling periods, communication topologies and control gains are all decoupled and can be separately designed, which relaxes many restrictions in controller design. Finally, some numerical examples are given to illustrate the effectiveness of the analytical results.

  7. Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod

    USGS Publications Warehouse

    Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.

    2008-01-01

    To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or limited budgets determine that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining the precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as the sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate at which units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and more practical to implement than the others. © 2008 The Society of Population Ecology and Springer.
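
    The core of such an evaluation, simulating competing designs on a synthetic aggregated population, fits in a few lines. The sketch below compares simple random and systematic quadrat sampling on a patchy gamma-Poisson field; it mimics the general approach only, not the SAMPLE program or the bladderpod data.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    # aggregated population: clustered plant counts on a 40x40 quadrat grid
    field = rng.poisson(rng.gamma(0.3, 10, size=(40, 40)))

    def srs_mean(field, n):
        """Mean count from a simple random sample of n quadrats."""
        flat = field.ravel()
        return flat[rng.choice(flat.size, n, replace=False)].mean()

    def systematic_mean(field, step):
        """Mean count from a grid-based systematic sample, random start."""
        r0, c0 = rng.integers(0, step, 2)
        return field[r0::step, c0::step].mean()

    reps = 5000
    srs = [srs_mean(field, 100) for _ in range(reps)]
    syst = [systematic_mean(field, 4) for _ in range(reps)]   # also 100 units
    print("true mean:", field.mean())
    print("SRS se:", np.std(srs), " systematic se:", np.std(syst))
    ```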

  8. Adaptive sampling in research on risk-related behaviors.

    PubMed

    Thompson, Steven K; Collins, Linda M

    2002-11-01

    This article introduces adaptive sampling designs to substance use researchers. Adaptive sampling is particularly useful when the population of interest is rare, unevenly distributed, hidden, or hard to reach. Examples of such populations are injection drug users, individuals at high risk for HIV/AIDS, and young adolescents who are nicotine dependent. In conventional sampling, the sampling design is based entirely on a priori information, and is fixed before the study begins. By contrast, in adaptive sampling, the sampling design adapts based on observations made during the survey; for example, drug users may be asked to refer other drug users to the researcher. In the present article several adaptive sampling designs are discussed. Link-tracing designs such as snowball sampling, random walk methods, and network sampling are described, along with adaptive allocation and adaptive cluster sampling. It is stressed that special estimation procedures taking the sampling design into account are needed when adaptive sampling has been used. These procedures yield estimates that are considerably better than conventional estimates. For rare and clustered populations adaptive designs can give substantial gains in efficiency over conventional designs, and for hidden populations link-tracing and other adaptive procedures may provide the only practical way to obtain a sample large enough for the study objectives.

  9. Design and validation of a wind tunnel system for odour sampling on liquid area sources.

    PubMed

    Capelli, L; Sironi, S; Del Rosso, R; Céntola, P

    2009-01-01

    The aim of this study is to describe the methods adopted for the design and the experimental validation of a wind tunnel, a sampling system suitable for the collection of gaseous samples on passive area sources, which makes it possible to simulate wind action on the surface to be monitored. The first step of the work was the study of the air velocity profiles. The second step of the work consisted of the validation of the sampling system. For this purpose, the odour concentration of some air samples collected by means of the wind tunnel was measured by dynamic olfactometry. The results of the air velocity measurements show that the wind tunnel design features enabled the achievement of a uniform and homogeneous air flow through the hood. Moreover, the laboratory tests showed a very good correspondence between the odour concentration values measured at the wind tunnel outlet and the odour concentration values predicted by the application of a specific volatilization model, based on the Prandtl boundary layer theory. The agreement between experimental and theoretical trends demonstrates that the studied wind tunnel represents a suitable sampling system for the simulation of specific odour emission rates from liquid area sources without outward flow.
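
    Once the outlet concentration has been measured by olfactometry, the specific odour emission rate follows from the hood geometry and sweep flow. A minimal helper with invented example numbers; the formula is the standard olfactometric definition, not a detail taken from this paper:

    ```python
    def specific_odour_emission_rate(c_od, q_air, area):
        """Specific odour emission rate from a wind-tunnel measurement:
        outlet odour concentration (ou/m^3) times sweep air flow (m^3/s),
        divided by the enclosed emitting area (m^2) -> ou/(m^2 s)."""
        return c_od * q_air / area

    # e.g. 300 ou/m^3 at the outlet, 0.005 m^3/s sweep flow, 0.125 m^2 hood
    print(specific_odour_emission_rate(300, 0.005, 0.125))  # 12 ou/(m^2 s)
    ```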

  10. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    NASA Astrophysics Data System (ADS)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of a high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.

  11. Design, analysis and presentation of factorial randomised controlled trials

    PubMed Central

    Montgomery, Alan A; Peters, Tim J; Little, Paul

    2003-01-01

    Background The evaluation of more than one intervention in the same randomised controlled trial can be achieved using a parallel group design. However this requires increased sample size and can be inefficient, especially if there is also interest in considering combinations of the interventions. An alternative may be a factorial trial, where for two interventions participants are allocated to receive neither intervention, one or the other, or both. Factorial trials require special considerations, however, particularly at the design and analysis stages. Discussion Using a 2 × 2 factorial trial as an example, we present a number of issues that should be considered when planning a factorial trial. The main design issue is that of sample size. Factorial trials are most often powered to detect the main effects of interventions, since adequate power to detect plausible interactions requires greatly increased sample sizes. The main analytical issues relate to the investigation of main effects and the interaction between the interventions in appropriate regression models. Presentation of results should reflect the analytical strategy with an emphasis on the principal research questions. We also give an example of how baseline and follow-up data should be presented. Lastly, we discuss the implications of the design, analytical and presentational issues covered. Summary Difficulty in interpreting the results of factorial trials when an influential interaction is observed is the cost of the potential for efficient, simultaneous consideration of two or more interventions. Factorial trials can in principle be designed to have adequate power to detect realistic interactions, and in any case they are the only design that allows such effects to be investigated. PMID:14633287
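
    The analytical strategy described, main effects first and then an interaction check, maps directly onto two regression models. A simulated sketch; the effect sizes, noise level, and n are invented for illustration:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 400
    df = pd.DataFrame({"a": rng.integers(0, 2, n),    # intervention A, 1:1
                       "b": rng.integers(0, 2, n)})   # intervention B, 1:1
    df["y"] = 10 + 1.5 * df.a + 0.8 * df.b + rng.normal(0, 3, n)

    # main-effects model: what factorial trials are usually powered for
    print(smf.ols("y ~ a + b", df).fit().params)
    # interaction model: detecting a:b reliably needs a much larger trial
    print(smf.ols("y ~ a * b", df).fit().pvalues["a:b"])
    ```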

  12. Polymethylmethacrylate (PMMA) Material Test Results for the Capillary Flow Experiments (CFE)

    NASA Technical Reports Server (NTRS)

    Lerch, Bradley A.; Thesken, John C.; Bunnell, Charles T.

    2007-01-01

    In support of the Capillary Flow Experiments (CFE) program, several polymethylmethacrylate (PMMA) flight vessels were constructed. Some vessels used a multipiece design, which was chemically welded together. Due to questions regarding the effects of the experiment fluid (silicone oil) on weld integrity, a series of tests was conducted to provide evidence of the adequacy of the current vessel design. Tensile tests were conducted on PMMA samples in the as-received condition and on samples aged in air or oil for up to 8 weeks. Both welded and unwelded samples were examined. Fracture of the joints was studied using notched tensile specimens and Brazilian disk tests. Results showed that aging had no effect on tensile properties. While the welded samples were weaker than the base parent material, the weld strength was found to be further degraded by bubbles in the weld zone. Finally, a fracture analysis using the worst-case fracture conditions of the vessel was performed, and the vessel design was found to have a factor-of-three safety margin.

  13. Adapted random sampling patterns for accelerated MRI.

    PubMed

    Knoll, Florian; Clason, Christian; Diwoky, Clemens; Stollberger, Rudolf

    2011-02-01

    Variable-density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions like polynomials of different order to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters, which is very time-consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns by making use of power spectra of existing reference data sets and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and the generalization potential is tested by using a range of reference images. Quantitative evaluation is performed for the downsampling experiments using RMS differences from the original, fully sampled data set. Our results demonstrate that the image quality of the method presented in this paper is comparable to that of an established model-based strategy when optimization of the model parameter is carried out, and superior to that of the model-based strategy with non-optimized parameters. However, no random sampling pattern showed superior performance when compared to conventional Cartesian subsampling for the considered reconstruction strategy.
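
    The paper's central idea, deriving the sampling density from measured spectra rather than a tuned polynomial, can be imitated in a few lines. The sketch below draws a k-space mask from the power spectrum of a stand-in reference image; the acceleration factor and reference are assumptions, and the authors' exact construction may differ.

    ```python
    import numpy as np

    def mask_from_reference(ref_image, accel=4, seed=0):
        """Draw a k-space sampling mask whose density follows the power
        spectrum of a reference image, so no hand-tuned polynomial
        density model is needed."""
        rng = np.random.default_rng(seed)
        spec = np.abs(np.fft.fftshift(np.fft.fft2(ref_image))) ** 2
        pdf = (spec / spec.sum()).ravel()
        n_keep = ref_image.size // accel
        idx = rng.choice(ref_image.size, size=n_keep, replace=False, p=pdf)
        mask = np.zeros(ref_image.size, bool)
        mask[idx] = True
        return mask.reshape(ref_image.shape)

    ref = np.outer(np.hanning(128), np.hanning(128))  # stand-in reference
    print("sampling fraction:", mask_from_reference(ref).mean())
    ```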

  14. Influence function based variance estimation and missing data issues in case-cohort studies.

    PubMed

    Mark, S D; Katki, H

    2001-12-01

    Recognizing that the efficiency in relative risk estimation for the Cox proportional hazards model is largely constrained by the total number of cases, Prentice (1986) proposed the case-cohort design in which covariates are measured on all cases and on a random sample of the cohort. Subsequent to Prentice, other methods of estimation and sampling have been proposed for these designs. We formalize an approach to variance estimation suggested by Barlow (1994), and derive a robust variance estimator based on the influence function. We consider the applicability of the variance estimator to all the proposed case-cohort estimators, and derive the influence function when known sampling probabilities in the estimators are replaced by observed sampling fractions. We discuss the modifications required when cases are missing covariate information. The missingness may occur by chance, and be completely at random; or may occur as part of the sampling design, and depend upon other observed covariates. We provide an adaptation of S-plus code that allows estimating influence function variances in the presence of such missing covariates. Using examples from our current case-cohort studies on esophageal and gastric cancer, we illustrate how our results are useful in solving design and analytic issues that arise in practice.

  15. CALiPER Report 20.3: Robustness of LED PAR38 Lamps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poplawski, Michael E.; Royer, Michael P.; Brown, Charles C.

    2014-12-01

    Three samples of 40 of the Series 20 PAR38 lamps underwent multi-stress testing, whereby samples were subjected to increasing levels of simultaneous thermal, humidity, electrical, and vibrational stress. The results do not explicitly predict expected lifetime or reliability, but they can be compared with one another, as well as with benchmark conventional products, to assess the relative robustness of the product designs. On average, the 32 LED lamp models tested were substantially more robust than the conventional benchmark lamps. As with other performance attributes, however, there was great variability in the robustness and design maturity of the LED lamps. Several LED lamp samples failed within the first one or two levels of the ten-level stress plan, while all three samples of some lamp models completed all ten levels. One potential area of improvement is design maturity, given that more than 25% of the lamp models demonstrated a difference in failure level for the three samples that was greater than or equal to the maximum for the benchmarks. At the same time, the fact that nearly 75% of the lamp models exhibited better design maturity than the benchmarks is noteworthy, given the relative stage of development for the technology.

  16. A new apparatus design for high temperature (up to 950°C) quasi-elastic neutron scattering in a controlled gaseous environment.

    PubMed

    al-Wahish, Amal; Armitage, D; al-Binni, U; Hill, B; Mills, R; Jalarvo, N; Santodonato, L; Herwig, K W; Mandrus, D

    2015-09-01

    A design for a sample cell system suitable for high temperature Quasi-Elastic Neutron Scattering (QENS) experiments is presented. The apparatus was developed at the Spallation Neutron Source in Oak Ridge National Lab where it is currently in use. The design provides a special sample cell environment under controlled humid or dry gas flow over a wide range of temperature up to 950 °C. Using such a cell, chemical, dynamical, and physical changes can be studied in situ under various operating conditions. While the cell combined with portable automated gas environment system is especially useful for in situ studies of microscopic dynamics under operational conditions that are similar to those of solid oxide fuel cells, it can additionally be used to study a wide variety of materials, such as high temperature proton conductors. The cell can also be used in many different neutron experiments when a suitable sample holder material is selected. The sample cell system has recently been used to reveal fast dynamic processes in quasi-elastic neutron scattering experiments, which standard probes (such as electrochemical impedance spectroscopy) could not detect. In this work, we outline the design of the sample cell system and present results demonstrating its abilities in high temperature QENS experiments.

  17. Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.

    PubMed

    Gajewski, Byron J; Mayo, Matthew S

    2006-08-15

    A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information. For example, consider information from two (or more) clinicians: the first clinician is pessimistic about the drug and the second clinician is optimistic. We tabulate the results for sample size design using the fact that the simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
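
    A minimal sketch of the conjugate updating this abstract relies on, assuming two illustrative clinician priors and an anticipated response rate (all numbers invented): the posterior of a Beta mixture is again a Beta mixture whose weights update by the Beta-Binomial marginal likelihood, and the smallest n whose anticipated data push the posterior probability past the threshold is taken as the design sample size.

    ```python
    import numpy as np
    from scipy.stats import beta, betabinom

    def posterior_mixture(x, n, priors):
        """Update a mixture-of-Betas prior after x responses in n patients.

        priors: list of (weight, a, b). The mixture is conjugate: each
        component updates as Beta(a+x, b+n-x) and its weight is rescaled
        by the Beta-Binomial marginal likelihood of the data."""
        marg = np.array([w * betabinom.pmf(x, n, a, b) for w, a, b in priors])
        w_post = marg / marg.sum()
        return [(w, a + x, b + n - x) for w, (_, a, b) in zip(w_post, priors)]

    def prob_exceeds(target, mixture):
        # P(true response rate > target) under a Beta mixture.
        return sum(w * beta.sf(target, a, b) for w, a, b in mixture)

    # Illustrative priors: a pessimistic and an optimistic clinician.
    priors = [(0.5, 2, 8),   # prior mean 0.2
              (0.5, 8, 2)]   # prior mean 0.8
    target, threshold, anticipated_rate = 0.30, 0.90, 0.45

    for n in range(10, 200):
        x = round(anticipated_rate * n)      # anticipated data at size n
        if prob_exceeds(target, posterior_mixture(x, n, priors)) >= threshold:
            print(f"smallest n meeting the design goal: {n}")
            break
    ```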

  18. ECCM Scheme against Interrupted Sampling Repeater Jammer Based on Parameter-Adjusted Waveform Design

    PubMed Central

    Wei, Zhenhua; Peng, Bo; Shen, Rui

    2018-01-01

    Interrupted sampling repeater jamming (ISRJ) is an effective way of deceiving coherent radar sensors, especially for linear frequency modulated (LFM) radar. In this paper, for a simplified scenario with a single jammer, we propose a dynamic electronic counter-countermeasure (ECCM) scheme based on jammer parameter estimation and transmitted signal design. Firstly, the LFM waveform is transmitted to estimate the main jamming parameters by investigating the discontinuity of the ISRJ’s time-frequency (TF) characteristics. Then, a parameter-adjusted intra-pulse frequency coded signal, whose ISRJ signal after matched filtering only forms a single false target, is designed adaptively according to the estimated parameters, i.e., sampling interval, sampling duration and number of repeats. Finally, for typical jamming scenes with different jamming-to-signal ratios (JSR) and duty cycles, we propose two particular ISRJ suppression approaches. Simulation results validate the effectiveness of the proposed scheme for countering ISRJ, and the trade-off relationship between the two approaches is demonstrated. PMID:29642508
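
    The deception mechanism described here can be reproduced in a few lines of NumPy: slices of the chirp are captured and retransmitted one slice later, and pulse compression turns the relayed slices into false targets beside the true peak. All waveform and jammer parameters below are illustrative, and the sketch covers only the jamming effect, not the parameter estimation or the coded-waveform countermeasure.

    ```python
    import numpy as np

    fs, T, B = 100e6, 20e-6, 10e6            # sample rate, pulse width, bandwidth
    t = np.arange(int(fs * T)) / fs
    lfm = np.exp(1j * np.pi * (B / T) * t**2)     # transmitted LFM chirp

    # Interrupted-sampling repeater: the jammer listens for tau seconds,
    # then retransmits that slice during the following slice.
    tau = 2e-6                                # sampling duration
    Ts = 4e-6                                 # sampling interval (50% duty cycle)
    slice_len, period = int(fs * tau), int(fs * Ts)
    jam = np.zeros_like(lfm)
    for start in range(0, len(lfm) - period, period):
        jam[start + slice_len: start + 2 * slice_len] = lfm[start: start + slice_len]

    rx = lfm + 3.0 * jam                      # true echo plus stronger jamming

    # Pulse compression (matched filter); the relayed ISRJ slices compress
    # into a train of false targets next to the true peak.
    mf = np.correlate(rx, lfm, mode="full")
    peak = np.abs(mf).max()
    lobes = np.sort(np.abs(mf))[-5:]          # the few largest lobes
    print(f"main peak {peak:.0f}; top lobes: {np.round(lobes, 0)}")
    ```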

  19. A Bayesian model for estimating population means using a link-tracing sampling design.

    PubMed

    St Clair, Katherine; O'Connell, Daniel

    2012-03-01

    Link-tracing sampling designs can be used to study human populations that contain "hidden" groups who tend to be linked together by a common social trait. These links can be used to increase the sampling intensity of a hidden domain by tracing links from individuals selected in an initial wave of sampling to additional domain members. Chow and Thompson (2003, Survey Methodology 29, 197-205) derived a Bayesian model to estimate the size or proportion of individuals in the hidden population for certain link-tracing designs. We propose an addition to their model that will allow for the modeling of a quantitative response. We assess properties of our model using a constructed population and a real population of at-risk individuals, both of which contain two domains of hidden and nonhidden individuals. Our results show that our model can produce good point and interval estimates of the population mean and domain means when our population assumptions are satisfied. © 2011, The International Biometric Society.

  20. Development study of the X-ray scattering properties of a group of optically polished flat samples

    NASA Technical Reports Server (NTRS)

    Froechtenigt, J. F.

    1973-01-01

    A group of twelve optically polished flat samples was used to study the scattering of X-rays. The X-ray beam reflected from the twelve optical flat samples was analyzed by means of a long vacuum system of special design for these tests. The scattering measurements were made at 8.34 Å and a 0.92 deg angle of incidence. The results for ten of the samples are comparable, the two exceptions being the fire-polished samples.

  1. Practical characteristics of adaptive design in phase 2 and 3 clinical trials.

    PubMed

    Sato, A; Shimura, M; Gosho, M

    2018-04-01

    Adaptive design methods are expected to be ethical, reflect real medical practice, increase the likelihood of research and development success and reduce the allocation of patients into ineffective treatment groups by the early termination of clinical trials. However, the comprehensive details regarding which types of clinical trials will include adaptive designs remain unclear. We examined the practical characteristics of adaptive design used in clinical trials. We conducted a literature search of adaptive design clinical trials published from 2012 to 2015 using PubMed, EMBASE, and the Cochrane Central Register of Controlled Trials, with common search terms related to adaptive design. We systematically assessed the types and characteristics of adaptive designs and disease areas employed in the adaptive design trials. Our survey identified 245 adaptive design clinical trials. The number of trials by the publication year increased from 2012 to 2013 and did not greatly change afterwards. The most frequently used adaptive design was group sequential design (n = 222, 90.6%), especially for neoplasm or cardiovascular disease trials. Among the other types of adaptive design, adaptive dose/treatment group selection (n = 21, 8.6%) and adaptive sample-size adjustment (n = 19, 7.8%) were frequently used. The adaptive randomization (n = 8, 3.3%) and adaptive seamless design (n = 6, 2.4%) were less frequent. Adaptive dose/treatment group selection and adaptive sample-size adjustment were frequently used (up to 23%) in "certain infectious and parasitic diseases," "diseases of nervous system," and "mental and behavioural disorders" in comparison with "neoplasms" (<6.6%). For "mental and behavioural disorders," adaptive randomization was used in two trials of eight trials in total (25%). Group sequential design and adaptive sample-size adjustment were used frequently in phase 3 trials or in trials where study phase was not specified, whereas the other types of adaptive designs were used more in phase 2 trials. Approximately 82% (202 of 245 trials) resulted in early termination at the interim analysis. Among the 202 trials, 132 (54% of 245 trials) had fewer randomized patients than initially planned. This result supports the motive to use adaptive design to make study durations shorter and include a smaller number of subjects. We found that adaptive designs have been applied to clinical trials in various therapeutic areas and interventions. The applications were frequently reported in neoplasm or cardiovascular clinical trials. The adaptive dose/treatment group selection and sample-size adjustment are increasingly common, and these adaptations generally follow the Food and Drug Administration's (FDA's) recommendations. © 2017 John Wiley & Sons Ltd.

  2. Information content of household-stratified epidemics.

    PubMed

    Kinyanjui, T M; Pellis, L; House, T

    2016-09-01

    Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances on household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final size data has been the traditional source, increasingly time-series infection data from households are becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
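
    A minimal sketch of the information measure used in this comparison, with stand-in posterior draws rather than output of an actual household epidemic model: the Shannon (differential) entropy of each design's posterior is estimated from a histogram, and the design with the lower entropy is the more informative one.

    ```python
    import numpy as np

    def posterior_entropy(samples, bins=50):
        """Shannon (differential) entropy in nats of a posterior from
        MCMC/ABC draws, via a simple histogram density estimate."""
        p, edges = np.histogram(samples, bins=bins, density=True)
        widths = np.diff(edges)
        mass = p * widths
        nz = mass > 0
        return -np.sum(mass[nz] * np.log(p[nz]))

    rng = np.random.default_rng(0)
    # Invented stand-ins for posteriors of a transmission parameter under
    # two designs: a tighter posterior means lower entropy and hence a
    # more informative design.
    cohort_design = rng.normal(0.3, 0.05, 20_000)
    cross_sectional = rng.normal(0.3, 0.12, 20_000)
    print(f"cohort: {posterior_entropy(cohort_design):.2f} nats, "
          f"cross-sectional: {posterior_entropy(cross_sectional):.2f} nats")
    ```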

  3. ADAPTIVE MATCHING IN RANDOMIZED TRIALS AND OBSERVATIONAL STUDIES

    PubMed Central

    van der Laan, Mark J.; Balzer, Laura B.; Petersen, Maya L.

    2014-01-01

    In many randomized and observational studies the allocation of treatment among a sample of n independent and identically distributed units is a function of the covariates of all sampled units. As a result, the treatment labels among the units are possibly dependent, complicating estimation and posing challenges for statistical inference. For example, cluster randomized trials frequently sample communities from some target population, construct matched pairs of communities from those included in the sample based on some metric of similarity in baseline community characteristics, and then randomly allocate a treatment and a control intervention within each matched pair. In this case, the observed data can neither be represented as the realization of n independent random variables, nor, contrary to current practice, as the realization of n/2 independent random variables (treating the matched pair as the independent sampling unit). In this paper we study estimation of the average causal effect of a treatment under experimental designs in which treatment allocation potentially depends on the pre-intervention covariates of all units included in the sample. We define efficient targeted minimum loss-based estimators for this general design, present a theorem that establishes the desired asymptotic normality of these estimators and allows for asymptotically valid statistical inference, and discuss implementation of these estimators. We further investigate the relative asymptotic efficiency of this design compared with a design in which unit-specific treatment assignment depends only on the units’ covariates. Our findings have practical implications for the optimal design and analysis of pair matched cluster randomized trials, as well as for observational studies in which treatment decisions may depend on characteristics of the entire sample. PMID:25097298
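
    A minimal sketch of the pair-matched allocation described above, with an invented baseline covariate and a greedy matching rule standing in for whatever similarity metric a real trial would use. It shows concretely why treatment labels become dependent: each assignment within a pair determines its partner's.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical baseline covariate for 10 sampled communities.
    baseline = rng.normal(size=10)

    # Greedy pair matching on covariate distance: repeatedly pair the two
    # closest remaining communities. (Optimal matching would solve a
    # minimum-weight matching problem instead.)
    remaining = list(range(len(baseline)))
    pairs = []
    while remaining:
        i = remaining.pop(0)
        j = min(remaining, key=lambda k: abs(baseline[k] - baseline[i]))
        remaining.remove(j)
        pairs.append((i, j))

    # Randomize treatment within each matched pair. The assignment of one
    # pair member fully determines the other's, so labels are dependent.
    assignment = {}
    for i, j in pairs:
        treated = rng.choice([i, j])
        assignment[i] = int(treated == i)
        assignment[j] = int(treated == j)
    print(pairs, assignment)
    ```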

  4. Design of Multishell Sampling Schemes with Uniform Coverage in Diffusion MRI

    PubMed Central

    Caruyer, Emmanuel; Lenglet, Christophe; Sapiro, Guillermo; Deriche, Rachid

    2017-01-01

    Purpose: In diffusion MRI, a technique known as diffusion spectrum imaging reconstructs the propagator with a discrete Fourier transform, from a Cartesian sampling of the diffusion signal. Alternatively, it is possible to directly reconstruct the orientation distribution function in q-ball imaging, providing so-called high angular resolution diffusion imaging. In between these two techniques, acquisitions on several spheres in q-space offer an interesting trade-off between the angular resolution and the radial information gathered in diffusion MRI. A careful design is central in the success of multishell acquisition and reconstruction techniques. Methods: The design of acquisition in multishell is still an open and active field of research, however. In this work, we provide a general method to design multishell acquisition with uniform angular coverage. This method is based on a generalization of electrostatic repulsion to multishell. Results: We evaluate the impact of our method using simulations, on the angular resolution in one and two bundles of fiber configurations. Compared to more commonly used radial sampling, we show that our method improves the angular resolution, as well as fiber crossing discrimination. Discussion: We propose a novel method to design sampling schemes with optimal angular coverage and show the positive impact on angular resolution in diffusion MRI. PMID:23625329
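
    A single-shell sketch of the electrostatic-repulsion idea this method generalizes (the paper's contribution, not reproduced here, is coupling such costs across shells and handling antipodal symmetry): points repel along a Coulomb-type force and are re-projected onto the sphere at each step. Step size and iteration count are arbitrary choices for the example.

    ```python
    import numpy as np

    def repulse_on_sphere(n_pts, iters=2000, step=1e-3, seed=0):
        """Spread n_pts directions on the unit sphere by electrostatic
        repulsion: move each point down the gradient of the Coulomb
        energy sum_{i<j} 1/||x_i - x_j||, re-projecting onto the sphere."""
        rng = np.random.default_rng(seed)
        x = rng.normal(size=(n_pts, 3))
        x /= np.linalg.norm(x, axis=1, keepdims=True)
        for _ in range(iters):
            diff = x[:, None, :] - x[None, :, :]            # pairwise differences
            d = np.linalg.norm(diff, axis=-1)
            np.fill_diagonal(d, np.inf)                     # no self-force
            force = (diff / d[..., None] ** 3).sum(axis=1)  # Coulomb repulsion
            x += step * force
            x /= np.linalg.norm(x, axis=1, keepdims=True)   # back onto the sphere
        return x

    pts = repulse_on_sphere(30)
    # Minimum pairwise angle is a simple uniformity diagnostic.
    cos = np.clip(pts @ pts.T, -1, 1)
    np.fill_diagonal(cos, -1)
    print(f"min pairwise angle: {np.degrees(np.arccos(cos.max())):.1f} deg")
    ```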

  5. Statistical considerations in monitoring birds over large areas

    USGS Publications Warehouse

    Johnson, D.H.

    2000-01-01

    The proper design of a monitoring effort depends primarily on the objectives desired, constrained by the resources available to conduct the work. Typically, managers have numerous objectives, such as determining abundance of the species, detecting changes in population size, evaluating responses to management activities, and assessing habitat associations. A design that is optimal for one objective will likely not be optimal for others. Careful consideration of the importance of the competing objectives may lead to a design that adequately addresses the priority concerns, although it may not be optimal for any individual objective. Poor design or inadequate sample sizes may result in such weak conclusions that the effort is wasted. Statistical expertise can be used at several stages, such as estimating power of certain hypothesis tests, but is perhaps most useful in fundamental considerations of describing objectives and designing sampling plans.

  6. Recent progresses in outcome-dependent sampling with failure time data.

    PubMed

    Ding, Jieli; Lu, Tsui-Shan; Cai, Jianwen; Zhou, Haibo

    2017-01-01

    An outcome-dependent sampling (ODS) design is a retrospective sampling scheme in which one observes the primary exposure variables with a probability that depends on the observed value of the outcome variable. When the outcome of interest is failure time, the observed data are often censored. By allowing the selection of the supplemental samples to depend on whether the event of interest has occurred, and by oversampling subjects from the most informative regions, an ODS design for time-to-event data can reduce the cost of the study and improve its efficiency. We review recent progress and advances in research on ODS designs with failure time data. This includes research on ODS-related designs such as the case-cohort design, generalized case-cohort design, stratified case-cohort design, general failure-time ODS design, length-biased sampling design and interval sampling design.

  7. Recent progresses in outcome-dependent sampling with failure time data

    PubMed Central

    Ding, Jieli; Lu, Tsui-Shan; Cai, Jianwen; Zhou, Haibo

    2016-01-01

    An outcome-dependent sampling (ODS) design is a retrospective sampling scheme in which one observes the primary exposure variables with a probability that depends on the observed value of the outcome variable. When the outcome of interest is failure time, the observed data are often censored. By allowing the selection of the supplemental samples to depend on whether the event of interest has occurred, and by oversampling subjects from the most informative regions, an ODS design for time-to-event data can reduce the cost of the study and improve its efficiency. We review recent progress and advances in research on ODS designs with failure time data. This includes research on ODS-related designs such as the case–cohort design, generalized case–cohort design, stratified case–cohort design, general failure-time ODS design, length-biased sampling design and interval sampling design. PMID:26759313

  8. Diverse Protocols for Correlative Super-Resolution Fluorescence Imaging and Electron Microscopy of Cells and Tissue

    DTIC Science & Technology

    2016-05-25

    tissue is critical to biology. Many factors determine optimal experimental design, including attainable localization precision, ultrastructural ... both imaging modalities. Examples include: weak tissue preservation protocols resulting in poor ultrastructure, e.g. mitochondrial cristae membranes ... tension effects during sample drying that may result in artifacts. Samples dried in the presence of polyvinyl alcohol do not have the haziness ...

  9. Atomic resolution images of graphite in air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigg, D.A.; Shedd, G.M.; Griffis, D.

    One sample commonly used to demonstrate atomic-resolution operation in STM is highly oriented pyrolytic graphite (HOPG). This sample has been imaged with many different STMs, with similar results. Atomic resolution images of HOPG have now been obtained using an STM designed and built at the Precision Engineering Center. This paper discusses the theoretical predictions and experimental results obtained in imaging of HOPG.

  10. Evaluation of NASA speech encoder

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Techniques developed by NASA for spaceflight instrumentation were used in the design of a quantizer for speech encoding. The quantizer's operation was tested in computer simulation with synthesized and real speech signals. Results were evaluated by a phonetician. Topics discussed include the relationship between the number of quantizer levels and the required sampling rate; reconstruction of signals; digital filtering; and speech recording, sampling, storage, and processing results.
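
    A toy sketch of one topic the report covers, the dependence of reconstruction quality on the number of quantizer levels. The test signal, rate, and level counts are invented; the familiar roughly-6-dB-per-bit behavior appears in the printed SNRs.

    ```python
    import numpy as np

    fs = 8000                                 # sampling rate (Hz), illustrative
    t = np.arange(0, 0.02, 1 / fs)
    speech = 0.6 * np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 700 * t)

    def quantize(x, n_levels):
        """Uniform quantizer over the signal's full range, reconstructing
        each sample at the midpoint of its quantization cell."""
        lo, hi = x.min(), x.max()
        step = (hi - lo) / n_levels
        idx = np.clip(np.floor((x - lo) / step), 0, n_levels - 1)
        return lo + (idx + 0.5) * step

    for levels in (8, 32, 256):
        err = speech - quantize(speech, levels)
        snr = 10 * np.log10(np.mean(speech**2) / np.mean(err**2))
        print(f"{levels:4d} levels -> SNR {snr:5.1f} dB")   # ~6 dB per bit
    ```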

  11. The effect of Fisher information matrix approximation methods in population optimal design calculations.

    PubMed

    Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C

    2016-12-01

    With the increasing popularity of optimal design in drug development it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs based on simulation/estimations was investigated by computing bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the Full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block diagonal FIM optimal designs when assuming true parameter values. However, the FO approximated block-reduced FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO Full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
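
    As a much-simplified, fixed-effects analogue of the optimization discussed above (it illustrates D-optimal selection of sampling points only, not the FO/FOCE population FIM the study compares), the sketch below computes the Fisher information matrix J'J/sigma^2 for an exponential mean model and picks the candidate schedule that maximizes its determinant. All model values are invented.

    ```python
    import numpy as np
    from itertools import combinations

    def fim(times, A=100.0, k=0.3, sigma=2.0):
        """Fisher information matrix for y(t) = A * exp(-k t) + eps,
        eps ~ N(0, sigma^2), parameters theta = (A, k)."""
        t = np.asarray(times, dtype=float)
        # Jacobian of the mean function w.r.t. (A, k) at each time point.
        J = np.column_stack([np.exp(-k * t), -A * t * np.exp(-k * t)])
        return J.T @ J / sigma**2

    # D-optimal choice of 2 sampling times from a candidate grid:
    # maximize det(FIM), i.e. minimize the volume of the confidence
    # ellipsoid for (A, k).
    grid = np.arange(0.5, 12.5, 0.5)
    best = max(combinations(grid, 2), key=lambda ts: np.linalg.det(fim(ts)))
    print(f"D-optimal 2-point schedule: t = {best}")
    ```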

  12. MO-D-213-07: RadShield: Semi-Automated Calculation of Air Kerma Rate and Barrier Thickness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLorenzo, M; Wu, D; Rutel, I

    2015-06-15

    Purpose: To develop the first Java-based semi-automated calculation program intended to aid professional radiation shielding design. Air-kerma rate and barrier thickness calculations are performed by implementing NCRP Report 147 formalism into a Graphical User Interface (GUI). The ultimate aim of this newly created software package is to reduce errors and improve radiographic and fluoroscopic room designs over manual approaches. Methods: Floor plans are first imported as images into the RadShield software program. These plans serve as templates for drawing barriers, occupied regions and x-ray tube locations. We have implemented sub-GUIs that allow regions and equipment to be assigned occupancy factors, design goals, numbers of patients, primary beam directions, source-to-patient distances, and workload distributions. Once the user enters the above parameters, the program automatically calculates air-kerma rate at sampled points beyond all barriers. For each sample point, a corresponding minimum barrier thickness is calculated to meet the design goal. RadShield allows control over preshielding, sample point location and material types. Results: A functional GUI package was developed and tested. Examination of sample walls and source distributions yields a maximum percent difference of less than 0.1% between hand-calculated air-kerma rates and RadShield. Conclusion: The initial results demonstrated that RadShield calculates air-kerma rates and required barrier thicknesses with reliable accuracy and can be used to make radiation shielding design more efficient and accurate. This newly developed approach differs from conventional calculation methods in that it finds air-kerma rates and thickness requirements for many points outside the barriers, stores the information and selects the largest value needed to comply with NCRP Report 147 design goals. Floor plans, parameters, designs and reports can be saved and accessed later for modification and recalculation. We have confirmed that this software accurately calculates air-kerma rates and required barrier thicknesses for diagnostic radiography and fluoroscopic rooms.
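
    A hedged sketch of the NCRP Report 147-style arithmetic such a program automates, assuming the usual two steps: the required transmission B = P d^2 / (K^1 N T) at a sample point, followed by inversion of the Archer transmission model for thickness. The alpha, beta, gamma fitting constants below are placeholders; real values depend on barrier material and beam quality and come from the report's tables.

    ```python
    import math

    def required_transmission(P, d, K1, N, T):
        """NCRP 147-style required barrier transmission.
        P: design goal (mGy/wk), d: distance (m), K1: air kerma per
        patient at 1 m (mGy), N: patients/wk, T: occupancy factor."""
        return P * d**2 / (K1 * N * T)

    def archer_thickness(B, alpha, beta, gamma):
        """Invert the Archer transmission model
        B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]**(-1/gamma)
        for the barrier thickness x (mm)."""
        r = beta / alpha
        return math.log((B**-gamma + r) / (1 + r)) / (alpha * gamma)

    # Placeholder fitting constants (alpha, beta, gamma depend on barrier
    # material and beam quality; real values come from NCRP 147 tables).
    alpha, beta, gamma = 2.35, 15.9, 0.95    # illustrative only, mm^-1

    B = required_transmission(P=0.02, d=3.0, K1=2.3, N=120, T=1.0)
    print(f"required transmission B = {B:.2e}")
    print(f"barrier thickness = {archer_thickness(B, alpha, beta, gamma):.2f} mm")
    ```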

  13. ON FARM SAMPLING FOR SALMONELLA: IMPACT OF METHOD AND DESIGN ON RESULTS

    USDA-ARS?s Scientific Manuscript database

    Current sampling for the National Antimicrobial Resistance Monitoring System (NARMS) is comprised primarily of receipt of Salmonella isolates from the USDA FSIS as part of their regulatory compliance testing. These isolates are received from all commodities and product classes. Isolates are charac...

  14. Survey of rural, private wells. Statistical design

    USGS Publications Warehouse

    Mehnert, Edward; Schock, Susan C.; ,

    1991-01-01

    Half of Illinois' 38 million acres were planted in corn and soybeans in 1988. On the 19 million acres planted in corn and soybeans, approximately 1 million tons of nitrogen fertilizer and 50 million pounds of pesticides were applied. Because groundwater is the water supply for over 90 percent of rural Illinois, the occurrence of agricultural chemicals in groundwater in Illinois is of interest to the agricultural community, the public, and regulatory agencies. The occurrence of agricultural chemicals in groundwater is well documented. However, the extent of this contamination still needs to be defined. This can be done by randomly sampling wells across a geographic area. Key elements of a random, water-well sampling program for regional groundwater quality include the overall statistical design of the program, definition of the sample population, selection of wells to be sampled, and analysis of survey results. These elements must be consistent with the purpose for conducting the program; otherwise, the program will not provide the desired information. The need to carefully design and conduct a sampling program becomes readily apparent when one considers the high cost of collecting and analyzing a sample. For a random sampling program conducted in Illinois, the key elements, as well as the limitations imposed by available information, are described.

  15. Visual Sample Plan Version 7.0 User's Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzke, Brett D.; Newburn, Lisa LN; Hathaway, John E.

    2014-03-01

    This user's guide describes Visual Sample Plan (VSP) Version 7.0 and provides instructions for using the software. VSP selects the appropriate number and location of environmental samples to ensure that the results of statistical tests performed to provide input to risk decisions have the required confidence and performance. VSP Version 7.0 provides sample-size equations or algorithms needed by specific statistical tests appropriate for specific environmental sampling objectives. It also provides data quality assessment and statistical analysis functions to support evaluation of the data and determine whether the data support decisions regarding sites suspected of contamination. The easy-to-use program is highly visual and graphic. VSP runs on personal computers with Microsoft Windows operating systems (XP, Vista, Windows 7, and Windows 8). Designed primarily for project managers and users without expertise in statistics, VSP is applicable to two- and three-dimensional populations to be sampled (e.g., rooms and buildings, surface soil, a defined layer of subsurface soil, water bodies, and other similar applications) for studies of environmental quality. VSP is also applicable for designing sampling plans for assessing chem/rad/bio threat and hazard identification within rooms and buildings, and for designing geophysical surveys for unexploded ordnance (UXO) identification.
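
    One of the standard sample-size equations of the kind VSP wraps in its interface, sketched under a normal approximation (some published variants add a small correction term, omitted here): the one-sample test of a mean against an action level.

    ```python
    import math

    def one_sample_n(sigma, delta, alpha=0.05, beta=0.10):
        """Normal-approximation sample size for a one-sample test of a
        mean against an action level: detect a true difference delta
        with Type I error alpha and Type II error beta."""
        z = {0.025: 1.9600, 0.05: 1.6449, 0.10: 1.2816}
        n = (z[alpha] + z[beta]) ** 2 * sigma**2 / delta**2
        return math.ceil(n)

    # E.g. a soil contaminant: sd 3 ppm, want to detect a 2 ppm exceedance.
    print(one_sample_n(sigma=3.0, delta=2.0))     # -> 20
    ```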

  16. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037

  17. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
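
    A minimal implementation of the classical search-curve FAST these two records start from: each parameter is driven along its own frequency, and the first-order index is the spectral power at that frequency and its harmonics divided by the total output variance. The frequencies below are ad hoc; published FAST uses frequency sets designed to be interference-free up to the chosen harmonic order.

    ```python
    import numpy as np

    def fast_first_order(model, freqs, n_samples=2**12, n_harmonics=4):
        """Classical search-curve FAST: drive each parameter along
        x_i(s) = 0.5 + arcsin(sin(w_i * s)) / pi and read first-order
        sensitivities off the output's Fourier spectrum."""
        s = 2 * np.pi * np.arange(n_samples) / n_samples
        X = 0.5 + np.arcsin(np.sin(np.outer(s, freqs))) / np.pi  # in (0, 1)
        y = model(X)
        spec = np.abs(np.fft.rfft(y)) ** 2 / n_samples**2
        total_var = 2 * spec[1:].sum()
        return [2 * sum(spec[h * w] for h in range(1, n_harmonics + 1)) / total_var
                for w in freqs]

    # Illustrative additive test model: x0 should dominate, x2 be inert.
    def model(X):
        return 4 * X[:, 0] + 2 * X[:, 1] + 0 * X[:, 2]

    # Ad-hoc distinct frequencies; real FAST uses tables chosen to avoid
    # interference up to the harmonic order considered.
    print(fast_first_order(model, freqs=[11, 35, 73]))
    ```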

  18. A fully automated colorimetric sensing device using smartphone for biomolecular quantification

    NASA Astrophysics Data System (ADS)

    Dutta, Sibasish; Nath, Pabitra

    2017-03-01

    In the present work, the use of a smartphone for colorimetric quantification of biomolecules has been demonstrated. As a proof of concept, BSA protein and carbohydrate have been used as biomolecular samples. BSA protein and carbohydrate at different concentrations have been treated with Lowry's reagent and Anthrone's reagent, respectively. The change in color of the reagent-treated samples at different concentrations has been recorded with the camera of a smartphone in combination with a custom-designed optomechanical hardware attachment. This change in color has been correlated with the color channels of two different color models, namely RGB (Red Green Blue) and HSV (Hue Saturation Value). In addition, the change in color intensity has also been correlated with the grayscale value for each imaged sample. A custom-designed Android app has been developed to quantify the biomolecular concentration and display the result on the phone itself. The obtained results have been compared with those of a standard spectrophotometer usually used for this purpose, and highly reliable data have been obtained with the designed sensor. The device is robust, portable and low cost as compared to its commercially available counterparts. The data obtained from the sensor can be transmitted anywhere in the world through the existing cellular network. It is envisioned that the designed sensing device would find a wide range of applications in the field of analytical and bioanalytical sensing research.
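
    A sketch of the channel-extraction and calibration steps described, with synthetic stand-ins for the imaged samples (the reagent chemistry, region-of-interest selection, and hardware are outside its scope): channel values come from the standard-library colorsys module plus a BT.601 luma, and a linear fit maps channel value to concentration.

    ```python
    import colorsys
    import numpy as np

    def channel_features(rgb_roi):
        """Mean RGB, HSV and grayscale value of an image region.
        rgb_roi: (h, w, 3) array with values in [0, 1]."""
        r, g, b = rgb_roi.reshape(-1, 3).mean(axis=0)
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        gray = 0.299 * r + 0.587 * g + 0.114 * b    # ITU-R BT.601 luma
        return {"R": r, "G": g, "B": b, "H": h, "S": s, "V": v, "gray": gray}

    roi = np.full((4, 4, 3), [0.9, 0.7, 0.55])      # invented sample patch
    print(channel_features(roi))

    # Synthetic stand-in for imaged Lowry-treated samples: the color
    # deepens (here, the blue channel drops) with protein concentration.
    rng = np.random.default_rng(0)
    concentrations = np.array([0.0, 0.25, 0.5, 1.0, 2.0])      # mg/mL
    blue = 0.8 - 0.25 * concentrations + rng.normal(0, 0.01, 5)

    # Linear calibration curve: channel value -> concentration.
    slope, intercept = np.polyfit(blue, concentrations, 1)
    unknown_blue = 0.55
    print(f"estimated concentration: {slope * unknown_blue + intercept:.2f} mg/mL")
    ```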

  19. Monitoring multiple species: Estimating state variables and exploring the efficacy of a monitoring program

    USGS Publications Warehouse

    Mattfeldt, S.D.; Bailey, L.L.; Grant, E.H.C.

    2009-01-01

    Monitoring programs have the potential to identify population declines and differentiate among the possible cause(s) of these declines. Recent criticisms regarding the design of monitoring programs have highlighted a failure to clearly state objectives and to address detectability and spatial sampling issues. Here, we incorporate these criticisms to design an efficient monitoring program whose goals are to determine environmental factors which influence the current distribution and measure change in distributions over time for a suite of amphibians. In designing the study we (1) specified a priori factors that may relate to occupancy, extinction, and colonization probabilities and (2) used the data collected (incorporating detectability) to address our scientific questions and adjust our sampling protocols. Our results highlight the role of wetland hydroperiod and other local covariates in the probability of amphibian occupancy. There was a change in overall occupancy probabilities for most species over the first three years of monitoring. Most colonization and extinction estimates were constant over time (years) and space (among wetlands), with one notable exception: local extinction probabilities for Rana clamitans were lower for wetlands with longer hydroperiods. We used information from the target system to generate scenarios of population change and gauge the ability of the current sampling to meet monitoring goals. Our results highlight the limitations of the current sampling design, emphasizing the need for long-term efforts, with periodic re-evaluation of the program in a framework that can inform management decisions.

  20. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

    To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of literature published in 2005. The frequency of reported sample size calculations and the sample sizes used were extracted from the published literature. A manual search of five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that sample size was calculated before initiating the study. Another study reported consideration of sample size without calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies considered sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
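
    The kind of a-priori calculation this survey found missing can be done in a few lines. The sketch below uses a Buderer-style normal-approximation formula, sizing sensitivity and specificity to a desired confidence-interval half-width at an assumed prevalence; all input values are invented.

    ```python
    import math

    def diag_accuracy_n(se, sp, prevalence, half_width=0.05, alpha=0.05):
        """Buderer-style sample sizes for a diagnostic accuracy study:
        enough subjects that sensitivity and specificity are each
        estimated to within +/- half_width (normal approximation)."""
        z = 1.959963984540054                    # z_{1 - alpha/2} for 5%
        n_se = z**2 * se * (1 - se) / half_width**2 / prevalence
        n_sp = z**2 * sp * (1 - sp) / half_width**2 / (1 - prevalence)
        return math.ceil(n_se), math.ceil(n_sp)

    # Anticipated accuracy of a hypothetical test at 50% prevalence
    # (close to the median prevalence reported in the survey).
    n_for_se, n_for_sp = diag_accuracy_n(se=0.85, sp=0.90, prevalence=0.5)
    print(f"need {max(n_for_se, n_for_sp)} subjects "
          f"(n for Se: {n_for_se}, n for Sp: {n_for_sp})")
    ```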

  1. Positive outcome expectancy mediates the relationship between social influence and Internet addiction among senior high-school students.

    PubMed

    Lin, Min-Pei; Wu, Jo Yung-Wei; Chen, Chao-Jui; You, Jianing

    2018-06-28

    Background and aims: Based on Bandura's social cognitive theory and the theory of triadic influence (TTI) framework, this study was designed to examine the mediating role of positive outcome expectancy of Internet use in the relationship between social influence and Internet addiction (IA) in a large representative sample of senior high-school students in Taiwan. Methods: Using a cross-sectional design, 1,922 participants were recruited from senior high schools throughout Taiwan using both stratified and cluster sampling, and a comprehensive survey was administered. Results: Structural equation modeling and bootstrap analyses showed that IA severity was significantly and positively predicted by social influence, and that this effect was fully mediated through positive outcome expectancy of Internet use. Discussion and conclusions: The results not only support Bandura's social cognitive theory and the TTI framework, but can also serve as a reference to help educational agencies and mental health organizations design programs and create policies that will help prevent IA among adolescents.
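
    A stripped-down sketch of the mediation test at the core of this analysis, using two ordinary least-squares fits and a percentile bootstrap of the indirect effect a*b rather than the full structural equation model; the data are simulated to mimic the hypothesized full mediation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data mimicking the hypothesized chain:
    # social influence -> positive outcome expectancy -> Internet addiction.
    n = 500
    social = rng.normal(size=n)
    expectancy = 0.5 * social + rng.normal(size=n)                     # path a
    addiction = 0.4 * expectancy + 0.0 * social + rng.normal(size=n)   # path b, null c'

    def indirect_effect(x, m, y):
        """a*b from two OLS fits: m ~ x, and y ~ m + x."""
        a = np.polyfit(x, m, 1)[0]
        X = np.column_stack([np.ones_like(x), m, x])
        b = np.linalg.lstsq(X, y, rcond=None)[0][1]
        return a * b

    # Percentile bootstrap CI for the indirect effect; full mediation is
    # supported if the CI excludes zero while the direct path is null.
    boots = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        boots.append(indirect_effect(social[idx], expectancy[idx], addiction[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"indirect effect a*b: {indirect_effect(social, expectancy, addiction):.3f}"
          f"  95% CI [{lo:.3f}, {hi:.3f}]")
    ```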

  2. Impact Test and Simulation of Energy Absorbing Concepts for Earth Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Billings, Marcus D.; Fasanella, Edwin L.; Kellas, Sotiris

    2001-01-01

    Nonlinear dynamic finite element simulations have been performed to aid in the design of an energy absorbing concept for a highly reliable passive Earth Entry Vehicle (EEV) that will directly impact the Earth without a parachute. EEVs are designed to return materials from asteroids, comets, or planets for laboratory analysis on Earth. The EEV concept uses an energy absorbing cellular structure designed to contain and limit the acceleration of space exploration samples during Earth impact. The spherical shaped cellular structure is composed of solid hexagonal and pentagonal foam-filled cells with hybrid graphite-epoxy/Kevlar cell walls. Space samples fit inside a smaller sphere at the center of the EEV's cellular structure. Comparisons of analytical predictions using MSC.Dytran with test results obtained from impact tests performed at NASA Langley Research Center were made for three impact velocities ranging from 32 to 40 m/s. Acceleration and deformation results compared well with the test results. These finite element models will be useful for parametric studies of off-nominal impact conditions.

  3. Application of Student Book Based On Integrated Learning Model Of Networked Type With Heart Electrical Activity Theme For Junior High School

    NASA Astrophysics Data System (ADS)

    Gusnedi, G.; Ratnawulan, R.; Triana, L.

    2018-04-01

    The purpose of this study is to determine the effect of using integrated science (IPA) student books based on the networked type of integrated learning model on students' knowledge competence, as reflected in improved learning outcomes. The experimental design used is a one-group pretest-posttest design, to compare results before and after treatment. The sample consists of a single class, divided into two initial-ability categories so that gains in knowledge competence can be observed. The sample was drawn from grade VIII students of SMPN 2 Sawahlunto, Indonesia. The results of this study indicate that most students improved their knowledge competence.

  4. Developing effective sampling designs for monitoring natural resources in Alaskan national parks: an example using simulations and vegetation data

    USGS Publications Warehouse

    Thompson, William L.; Miller, Amy E.; Mortenson, Dorothy C.; Woodward, Andrea

    2011-01-01

    Monitoring natural resources in Alaskan national parks is challenging because of their remoteness, limited accessibility, and high sampling costs. We describe an iterative, three-phased process for developing sampling designs based on our efforts to establish a vegetation monitoring program in southwest Alaska. In the first phase, we defined a sampling frame based on land ownership and specific vegetated habitats within the park boundaries and used Path Distance analysis tools to create a GIS layer that delineated portions of each park that could be feasibly accessed for ground sampling. In the second phase, we used simulations based on landcover maps to identify size and configuration of the ground sampling units (single plots or grids of plots) and to refine areas to be potentially sampled. In the third phase, we used a second set of simulations to estimate sample size and sampling frequency required to have a reasonable chance of detecting a minimum trend in vegetation cover for a specified time period and level of statistical confidence. Results of the first set of simulations indicated that a spatially balanced random sample of single plots from the most common landcover types yielded the most efficient sampling scheme. Results of the second set of simulations were compared with field data and indicated that we should be able to detect at least a 25% change in vegetation attributes over 31 years by sampling 8 or more plots per year every five years in focal landcover types. This approach would be especially useful in situations where ground sampling is restricted by access.
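
    A compact version of the third-phase simulation logic, with invented cover values: simulate plot means under a specified proportional decline, test the regression slope, and take the rejection rate over replicates as power. The revisit schedule and plot count mirror the numbers quoted above; the noise level and alpha are assumptions.

    ```python
    import numpy as np
    from scipy import stats

    def trend_power(n_plots=8, years=np.arange(0, 31, 5), total_change=-0.25,
                    mean_cover=40.0, sd=8.0, alpha=0.1, n_sims=2000, seed=0):
        """Monte Carlo power to detect a proportional change in mean cover
        over the monitoring horizon, sampling n_plots every revisit year
        and testing the slope of a linear regression on the plot means."""
        rng = np.random.default_rng(seed)
        slope_true = mean_cover * total_change / years.max()
        hits = 0
        for _ in range(n_sims):
            y_bar = [rng.normal(mean_cover + slope_true * t, sd, n_plots).mean()
                     for t in years]
            res = stats.linregress(years, y_bar)
            hits += res.pvalue < alpha
        return hits / n_sims

    print(f"power: {trend_power():.2f}")
    ```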

  5. Improved chip design for integrated solid-phase microextraction in on-line proteomic sample preparation.

    PubMed

    Bergkvist, Jonas; Ekström, Simon; Wallman, Lars; Löfgren, Mikael; Marko-Varga, György; Nilsson, Johan; Laurell, Thomas

    2002-04-01

    A recently introduced silicon microextraction chip (SMEC), used for on-line proteomic sample preparation, has proved to facilitate the process of protein identification by sample clean up and enrichment of peptides. It is demonstrated that a novel grid-SMEC design improves the operating characteristics for solid-phase microextraction, by reducing dispersion effects and thereby improving the sample preparation conditions. The structures investigated in this paper are treated both numerically and experimentally. The numerical approach is based on finite element analysis of the microfluidic flow in the microchip. The analysis is accomplished by use of the computational fluid dynamics-module FLOTRAN in the ANSYS software package. The modeling and analysis of the previously reported weir-SMEC design indicates some severe drawbacks, that can be reduced by changing the microextraction chip geometry to the grid-SMEC design. The overall analytical performance was thereby improved and also verified by experimental work. Matrix-assisted laser desorption/ionization mass spectra of model peptides extracted from both the weir-SMEC and the new grid-SMEC support the numerical analysis results. Further use of numerical modeling and analysis of the SMEC structures is also discussed and suggested in this work.

  6. Prediction-based sampled-data H∞ controller design for attitude stabilisation of a rigid spacecraft with disturbances

    NASA Astrophysics Data System (ADS)

    Zhu, Baolong; Zhang, Zhiping; Zhou, Ding; Ma, Jie; Li, Shunli

    2017-08-01

    This paper investigates the H∞ control problem of the attitude stabilisation of a rigid spacecraft with external disturbances using prediction-based sampled-data control strategy. Aiming to achieve a 'virtual' closed-loop system, a type of parameterised sampled-data controller is designed by introducing a prediction mechanism. The resultant closed-loop system is equivalent to a hybrid system featured by a continuous-time and an impulsive differential system. By using a time-varying Lyapunov functional, a generalised bounded real lemma (GBRL) is first established for a kind of impulsive differential system. Based on this GBRL and Lyapunov functional approach, a sufficient condition is derived to guarantee the closed-loop system to be asymptotically stable and to achieve a prescribed H∞ performance. In addition, the controller parameter tuning is cast into a convex optimisation problem. Simulation and comparative results are provided to illustrate the effectiveness of the developed control scheme.

  7. Research on Rigid Body Motion Tracing in Space based on NX MCD

    NASA Astrophysics Data System (ADS)

    Wang, Junjie; Dai, Chunxiang; Shi, Karen; Qin, Rongkang

    2018-03-01

    MCD (Mechatronics Concept Designer) is a module of the Siemens industrial design software UG (Unigraphics NX) in which users can define rigid bodies and kinematic joints so that objects move in simulation according to an existing plan. At this stage, users may want an intuitive view of the path traced by selected points on a moving object. In response to this requirement, this paper computes the pose from the transformation matrix available from the solver engine, and then fits the sampled points with a B-spline curve. In addition, the traditional equal-interval sampling strategy is optimized by taking the actual constraints of the rigid bodies into account. The results show that this method satisfies the demand and makes up for the deficiencies of the traditional sampling method. Users can still edit and model on the resulting 3D curve. The expected result has been achieved.
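
    The fitting step can be sketched with SciPy's standard B-spline routines. Since the MCD solver API is proprietary and not shown here, the 4x4 pose matrices below are synthetic stand-ins, with the tracked point's position taken from the translation column of each pose.

    ```python
    import numpy as np
    from scipy.interpolate import splprep, splev

    # Synthetic stand-in for poses sampled from the MCD solver engine:
    # 4x4 homogeneous transforms along a helical motion.
    s = np.linspace(0, 4 * np.pi, 40)
    poses = np.stack([np.eye(4)] * len(s))
    poses[:, 0, 3] = np.cos(s)          # x
    poses[:, 1, 3] = np.sin(s)          # y
    poses[:, 2, 3] = 0.1 * s            # z

    # The tracked point's position is the translation column; fitting a
    # cubic B-spline through the samples yields an editable 3D curve.
    pts = poses[:, :3, 3]
    tck, u = splprep(pts.T, s=0.0, k=3)     # interpolating cubic B-spline
    dense = np.array(splev(np.linspace(0, 1, 400), tck)).T

    print(f"{len(pts)} samples -> {dense.shape[0]} points on the fitted curve")
    ```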

  8. Optimal auxiliary-covariate-based two-phase sampling design for semiparametric efficient estimation of a mean or mean difference, with application to clinical trials.

    PubMed

    Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea

    2014-03-15

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.

  9. Optimal Auxiliary-Covariate Based Two-Phase Sampling Design for Semiparametric Efficient Estimation of a Mean or Mean Difference, with Application to Clinical Trials

    PubMed Central

    Gilbert, Peter B.; Yu, Xuesong; Rotnitzky, Andrea

    2014-01-01

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semi-parametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. Simulations are performed to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. Proofs and R code are provided. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean “importance-weighted” breadth (Y) of the T cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y, and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y∣W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. PMID:24123289
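
    A simplified sketch of the design idea in these two records: phase-two selection probabilities proportional to the cost-standardized conditional SD of Y given W (a Neyman-type allocation; equal costs are assumed here), followed by an inverse-probability-weighted mean. The strata and variances are invented, and the estimator shown is a plain Hajek-style IPW mean, not the semiparametric efficient estimator of the papers.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Phase one: cheap auxiliary W measured on everyone; expensive Y only
    # on the phase-two subsample. Here sd(Y | W) grows with W.
    n, budget_frac = 5000, 0.20
    W = rng.integers(0, 3, n)                       # 3 auxiliary strata
    sd_y_given_w = np.array([1.0, 2.0, 4.0])[W]
    Y = 10 + 2 * W + rng.normal(0, sd_y_given_w)

    # Neyman-type optimal selection probabilities: pi(W) proportional to
    # the conditional SD of Y given W, scaled to spend the same expected
    # budget as simple random sampling.
    pi = sd_y_given_w / sd_y_given_w.mean() * budget_frac
    pi = np.clip(pi, 0, 1)
    sampled = rng.random(n) < pi

    # Inverse-probability-weighted (Hajek-style) mean of Y.
    ipw_mean = np.sum(Y[sampled] / pi[sampled]) / np.sum(1 / pi[sampled])
    srs = rng.random(n) < budget_frac               # comparator design
    print(f"IPW mean (optimal design): {ipw_mean:.3f}")
    print(f"SRS mean (same budget):    {Y[srs].mean():.3f}   truth: {Y.mean():.3f}")
    ```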

  10. Sampling pig farms at the abattoir in a cross-sectional study - Evaluation of a sampling method.

    PubMed

    Birkegård, Anna Camilla; Halasa, Tariq; Toft, Nils

    2017-09-15

    A cross-sectional study design is relatively inexpensive, fast and easy to conduct when compared to other study designs. Careful planning is essential to obtaining a representative sample of the population, and the recommended approach is to use simple random sampling from an exhaustive list of units in the target population. This approach is rarely feasible in practice, and other sampling procedures must often be adopted. For example, when slaughter pigs are the target population, sampling the pigs on the slaughter line may be an alternative to on-site sampling at a list of farms. However, it is difficult to sample a large number of farms from an exact predefined list, due to the logistics and workflow of an abattoir. Therefore, it is necessary to have a systematic sampling procedure and to evaluate the obtained sample with respect to the study objective. We propose a method for 1) planning, 2) conducting, and 3) evaluating the representativeness and reproducibility of a cross-sectional study when simple random sampling is not possible. We used an example of a cross-sectional study with the aim of quantifying the association between antimicrobial resistance and antimicrobial consumption in Danish slaughter pigs. It was not possible to visit farms within the designated timeframe. Therefore, it was decided to use convenience sampling at the abattoir. Our approach was carried out in three steps: 1) planning: using data from meat inspection to plan at which abattoirs and how many farms to sample; 2) conducting: sampling was carried out at five abattoirs; 3) evaluation: representativeness was evaluated by comparing sampled and non-sampled farms, and the reproducibility of the study was assessed through simulated sampling based on meat inspection data from the period where the actual data collection was carried out. In the cross-sectional study, samples were taken from 681 Danish pig farms during five weeks from February to March 2015. The evaluation showed that the sampling procedure was reproducible with results comparable to the collected sample. However, the sampling procedure favoured sampling of large farms. Furthermore, both under-sampled and over-sampled areas were found using scan statistics. In conclusion, sampling conducted at abattoirs can provide a spatially representative sample. Hence it is a possible cost-effective alternative to simple random sampling. However, it is important to assess the properties of the resulting sample so that any potential selection bias can be addressed when reporting the findings. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Detection of West Nile virus and tick-borne encephalitis virus in birds in Slovakia, using a universal primer set.

    PubMed

    Csank, Tomáš; Bhide, Katarína; Bencúrová, Elena; Dolinská, Saskia; Drzewnioková, Petra; Major, Peter; Korytár, Ľuboš; Bocková, Eva; Bhide, Mangesh; Pistl, Juraj

    2016-06-01

    West Nile virus (WNV) is a mosquito-borne neurotropic pathogen that presents a major public health concern. Information on WNV prevalence and circulation in Slovakia is insufficient. Oral and cloacal swabs and bird brain samples were tested for flavivirus RNA by RT-PCR using newly designed generic primers. The species designation was confirmed by sequencing. WNV was detected in swab and brain samples, whereas one brain sample was positive for tick-borne encephalitis virus (TBEV). The WNV sequences clustered with lineages 1 and 2. These results confirm the circulation of WNV in birds in Slovakia and emphasize the risk of infection of humans and horses.

  12. Seasonal rationalization of river water quality sampling locations: a comparative study of the modified Sanders and multivariate statistical approaches.

    PubMed

    Varekar, Vikas; Karmakar, Subhankar; Jha, Ramakar

    2016-02-01

    The design of surface water quality sampling locations is a crucial decision-making process for rationalization of a monitoring network. The quantity, quality, and types of available data (watershed characteristics and water quality data) may affect the selection of an appropriate design methodology. The modified Sanders approach and multivariate statistical techniques [particularly factor analysis (FA)/principal component analysis (PCA)] are well-accepted and widely used techniques for the design of sampling locations. However, their performance may vary significantly with the quantity, quality, and types of available data. In this paper, an attempt has been made to evaluate the performance of these techniques by accounting for the effect of seasonal variation, under a situation of limited water quality data but extensive watershed characteristics information, as continuous and consistent river water quality data are usually difficult to obtain, whereas watershed information may be made available through application of geospatial techniques. A case study of the Kali River, Western Uttar Pradesh, India, is selected for the analysis. The monitoring was carried out at 16 sampling locations. The discrete and diffuse pollution loads at different sampling sites were estimated and accounted for using the modified Sanders approach, whereas the monitored physical and chemical water quality parameters were utilized as inputs for FA/PCA. The optimum numbers of sampling locations designed by the modified Sanders approach are eight for the monsoon season and seven for the non-monsoon season, while those for FA/PCA are eleven and nine, respectively. Both techniques produced little variation in the number and location of the designed sampling sites, indicating stable results. A geospatial analysis has also been carried out to check the significance of the designed sampling locations with respect to river basin characteristics and land use of the study area. Both methods are equally efficient; however, the modified Sanders approach outperforms FA/PCA when limited water quality but extensive watershed information is available. The available water quality dataset is limited, and the FA/PCA-based approach fails to identify monitoring locations with higher variation, as these multivariate statistical approaches are data-driven. The priority/hierarchy and number of sampling sites designed by the modified Sanders approach are well justified by the land use practices and observed river basin characteristics of the study area.

  13. Integrating public perspectives in sample return planning.

    PubMed

    Race, M S; MacGregor, D G

    2000-01-01

    Planning for extraterrestrial sample returns--whether from Mars or other solar system bodies--must be done in a way that integrates planetary protection concerns with the usual mission technical and scientific considerations. Understanding and addressing legitimate societal concerns about the possible risks of sample return will be a critical part of the public decision making process ahead. This paper presents the results of two studies, one with lay audiences and the other with expert microbiologists, designed to gather information on attitudes and concerns about sample return risks and planetary protection. Focus group interviews with lay subjects, using generic information about Mars sample return and a preliminary environmental impact assessment, were designed to obtain an indication of how the factual content is perceived and understood by the public. A research survey of microbiologists gathered information on experts' views and attitudes about sample return, risk management approaches and space exploration risks. These findings, combined with earlier research results on risk perception, will be useful in identifying levels of concern and potential conflicts in understanding between experts and the public about sample return risks. The information will be helpful in guiding development of the environmental impact statement and also has applicability to proposals for sample return from other solar system bodies where scientific uncertainty about extraterrestrial life may persist at the time of mission planning. © 2001 COSPAR. Published by Elsevier Science Ltd. All rights reserved.

  14. Planetary Sample Caching System Design Options

    NASA Technical Reports Server (NTRS)

    Collins, Curtis; Younse, Paulo; Backes, Paul

    2009-01-01

    Potential Mars Sample Return missions would aspire to collect small core and regolith samples using a rover with a sample acquisition tool and sample caching system. Samples would need to be stored in individual sealed tubes in a canister that could be transferred to a Mars ascent vehicle and returned to Earth. A sample handling, encapsulation and containerization system (SHEC) has been developed as part of an integrated system for acquiring and storing core samples for application to future potential MSR and other potential sample return missions. Requirements and design options for the SHEC system were studied and a recommended design concept developed. Two families of solutions were explored: 1) transfer of a raw sample from the tool to the SHEC subsystem and 2) transfer of a tube containing the sample to the SHEC subsystem. The recommended design utilizes sample tool bit change-out as the mechanism for transferring tubes to, and samples in tubes from, the tool. The SHEC subsystem design, called the Bit Changeout Caching (BiCC) design, is intended for operations on a MER-class rover.

  15. Multi-species attributes as the condition for adaptive sampling of rare species using two-stage sequential sampling with an auxiliary variable

    USGS Publications Warehouse

    Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.

    2011-01-01

    Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with the presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations to approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and the probability of sampling a unit occupied by the rare species. Efficiency measures the precision of the population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species were approximately 1.5 times higher for TSSAV compared to SRS, and efficiency was as high as 2 (i.e., variance from TSSAV was half that of SRS). We have found that design performance, especially for adaptive designs, is often case-specific, and efficiency of adaptive designs is especially sensitive to spatial distribution. Simulations tailored to the application of interest are therefore highly useful for evaluating designs in preparation for sampling rare and clustered populations.
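
    A worked illustration of the two performance measures, with made-up numbers rather than the paper's simulation output:

        # efficiency: variance of the SRS estimator over variance of the TSSAV estimator
        var_srs, var_tssav = 4.0, 2.0
        efficiency = var_srs / var_tssav          # 2.0 -> TSSAV variance is half of SRS

        # odds of a sampled unit being occupied by the rare species
        odds = lambda p: p / (1 - p)
        p_srs, p_tssav = 0.20, 0.27               # hypothetical encounter probabilities
        odds_ratio = odds(p_tssav) / odds(p_srs)  # ~1.48, i.e. roughly 1.5x higher odds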

  16. Custom Array Comparative Genomic Hybridization: the Importance of DNA Quality, an Expert Eye, and Variant Validation

    PubMed Central

    Lantieri, Francesca; Malacarne, Michela; Gimelli, Stefania; Santamaria, Giuseppe; Coviello, Domenico; Ceccherini, Isabella

    2017-01-01

    The presence of false positive and false negative results in Array Comparative Genomic Hybridization (aCGH) designs is poorly addressed in literature reports. We took advantage of a custom aCGH recently carried out to analyze its design performance, the use of several Agilent aberration detection algorithms, and the presence of false results. Our study confirms that a high-density design does not generate more noise than standard designs and can reach good resolution. We noticed a non-negligible number of false negative and false positive results among the imbalance calls performed by the Agilent software. The Aberration Detection Method 2 (ADM-2) algorithm with a threshold of 6 performed quite well, and the array design proved to be reliable, provided that some additional filters are applied, such as considering only intervals with an average absolute log2 ratio above 0.3. We also propose an additional filter that takes into account the proportion of probes with log2 ratios exceeding values suggestive of gain or loss. In addition, the quality of samples was confirmed to be a crucial parameter. Finally, this work underscores the importance of evaluating sample profiles by eye and the necessity of validating the imbalances detected. PMID:28287439
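
    A sketch of the post-call filtering described above. The 0.3 cutoff on the average absolute log2 ratio comes from the abstract; the probe-level threshold and required fraction are placeholders standing in for the authors' proposed probe-proportion filter:

        import numpy as np

        def keep_interval(log2ratios, mean_cutoff=0.3, probe_cutoff=0.25, min_frac=0.8):
            """Return True if a called interval survives both filters."""
            lr = np.asarray(log2ratios, dtype=float)
            mean_ok = abs(lr.mean()) > mean_cutoff                    # average |log2 ratio| filter
            frac_ok = np.mean(np.abs(lr) > probe_cutoff) >= min_frac  # probe-proportion filter
            return bool(mean_ok and frac_ok)

        print(keep_interval([0.4, 0.35, 0.5, 0.1]))  # False: only 3/4 probes exceed 0.25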

  17. Addressing the "Replication Crisis": Using Original Studies to Design Replication Studies with Appropriate Statistical Power.

    PubMed

    Anderson, Samantha F; Maxwell, Scott E

    2017-01-01

    Psychology is undergoing a replication crisis. The discussion surrounding this crisis has centered on mistrust of previous findings. Researchers planning replication studies often use the original study sample effect size as the basis for sample size planning. However, this strategy ignores uncertainty and publication bias in estimated effect sizes, resulting in overly optimistic calculations. A psychologist who intends to obtain power of .80 in the replication study, and performs calculations accordingly, may have an actual power lower than .80. We performed simulations to reveal the magnitude of the difference between actual and intended power based on common sample size planning strategies and assessed the performance of methods that aim to correct for effect size uncertainty and/or bias. Our results imply that even if original studies reflect actual phenomena and were conducted in the absence of questionable research practices, popular approaches to designing replication studies may result in a low success rate, especially if the original study is underpowered. Methods correcting for bias and/or uncertainty generally had higher actual power, but were not a panacea for an underpowered original study. Thus, it becomes imperative that 1) original studies are adequately powered and 2) replication studies are designed with methods that are more likely to yield the intended level of power.
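
    A minimal simulation of the uncertainty problem alone (publication bias, which the authors also consider, would push actual power lower still). It uses normal-approximation power formulas for a two-sample t-test and hypothetical numbers:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        d_true, n_orig = 0.3, 50          # true effect; per-group n of the original study

        def n_for_power(d, power=0.80, alpha=0.05):
            za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
            return int(np.ceil(2 * ((za + zb) / d) ** 2))   # per-group n

        def power_at(d, n, alpha=0.05):
            za = stats.norm.ppf(1 - alpha / 2)
            return 1 - stats.norm.cdf(za - d * np.sqrt(n / 2))

        actual = []
        for _ in range(10_000):
            d_hat = rng.normal(d_true, np.sqrt(2 / n_orig))  # noisy original estimate
            actual.append(power_at(d_true, n_for_power(max(d_hat, 0.05))))
        print(np.mean(actual))            # mean actual power falls below the intended 0.80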

  18. Texas Adolescent Tobacco and Marketing Surveillance System’s Design

    PubMed Central

    Pérez, Adriana; Harrell, Melissa B.; Malkani, Raja I.; Jackson, Christian D.; Delk, Joanne; Allotey, Prince A.; Matthews, Krystin J.; Martinez, Pablo; Perry, Cheryl L.

    2017-01-01

    Objectives: To provide a full methodological description of the design of the wave I and II (6-month follow-up) surveys of the Texas Adolescent Tobacco and Marketing Surveillance System (TATAMS), a longitudinal surveillance study of 6th, 8th, and 10th grade students who attended schools in Bexar, Dallas, Tarrant, Harris, or Travis counties, where the largest cities in Texas (San Antonio, Dallas, Fort Worth, Houston, and Austin, respectively) are located. Methods: TATAMS used a complex probability design, yielding representative estimates of these students in these counties during the 2014-2015 academic year. The weighted prevalence of the use of tobacco products, drugs, and alcohol in wave I, and the percent of (i) bias, (ii) relative bias, and (iii) relative bias ratio between waves I and II, are estimated. Results: The wave I sample included 79 schools and 3,907 students. The prevalence of current cigarette, e-cigarette, and hookah use at wave I was 3.5%, 7.4%, and 2.5%, respectively. Small biases, mostly less than 3.5%, were observed for nonrespondents in wave II. Conclusions: Even with adaptations to the sampling methodology, the resulting sample adequately represents the target population. Results from TATAMS will have important implications for future tobacco policy in Texas and federal regulation. PMID:29098172

  19. Sequential Designs Based on Bayesian Uncertainty Quantification in Sparse Representation Surrogate Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ray-Bing; Wang, Weichung; Jeff Wu, C. F.

    A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. Numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
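
    A toy version of the posterior-sampling step, using a conjugate normal model in closed form where the paper runs MCMC; the basis, prior, and noise settings are all assumptions:

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 40)
        y = np.sin(6 * x) + 0.1 * rng.standard_normal(x.size)   # toy response
        Phi = np.cos(np.outer(x, np.arange(25)))                 # overcomplete-ish basis

        tau2, sigma2 = 1.0, 0.01                  # prior and noise variances (assumed)
        cov = np.linalg.inv(Phi.T @ Phi / sigma2 + np.eye(Phi.shape[1]) / tau2)
        mean = cov @ Phi.T @ y / sigma2           # posterior over basis coefficients

        draws = rng.multivariate_normal(mean, cov, size=2000)    # posterior samples
        pred = draws @ Phi.T                                     # predictive draws at x
        lo, hi = np.percentile(pred, [2.5, 97.5], axis=0)        # 95% credible band

    In a sequential design, an infill criterion computed from draws such as these (for example, the location of the widest credible band) would pick the next design point.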

  20. Remote sensing-aided systems for snow qualification, evapotranspiration estimation, and their application in hydrologic models

    NASA Technical Reports Server (NTRS)

    Korram, S.

    1977-01-01

    The design of general remote sensing-aided methodologies was studied to provide estimates of several important inputs to water yield forecast models. These input parameters are snow area extent, snow water content, and evapotranspiration. The study area is the Feather River Watershed (780,000 hectares) in Northern California. The general approach involved a stepwise sequence of identification of the required information, sample design, measurement/estimation, and evaluation of results. All the relevant and available information types needed in the estimation process were defined. These include Landsat, meteorological satellite, and aircraft imagery; topographic and geologic data; ground truth data; and climatic data from ground stations. A cost-effective multistage sampling approach was employed in the quantification of all the required parameters. Physical and statistical models for both snow quantification and evapotranspiration estimation were developed. These models use information obtained from aerial and ground data through an appropriate statistical sampling design.

  1. Sequential Designs Based on Bayesian Uncertainty Quantification in Sparse Representation Surrogate Modeling

    DOE PAGES

    Chen, Ray-Bing; Wang, Weichung; Jeff Wu, C. F.

    2017-04-12

    A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. Numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.

  2. Longitudinal design considerations to optimize power to detect variances and covariances among rates of change: Simulation results based on actual longitudinal studies

    PubMed Central

    Rast, Philippe; Hofer, Scott M.

    2014-01-01

    We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate the interplay among the number and spacing of occasions, total duration of the study, effect size, and error variance on power and required sample size. The relation of growth rate reliability (GRR) and effect size to the sample size required to attain power ≥ .80 was non-linear, with the needed sample size decreasing rapidly as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited due to confounds between study length and number of waves, confounds of error variance with GRR, and parameter values largely outside the bounds of actual study values. Power to detect change is generally low in the early phases (i.e., first years) of longitudinal studies but can increase substantially if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies. PMID:24219544
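
    One common formulation of growth rate reliability for equally spaced occasions (the variance components below are hypothetical, and the exact definition used in the cited papers may differ in detail):

        import numpy as np

        t = np.arange(0, 10, 2)                  # five occasions over 8 years
        sst = np.sum((t - t.mean()) ** 2)        # spread of measurement occasions
        var_slope, var_err = 0.05, 1.0           # hypothetical variance components
        grr = var_slope / (var_slope + var_err / sst)
        print(grr)                               # ~0.67; more/longer waves push GRR toward 1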

  3. Adaptive sampling in behavioral surveys.

    PubMed

    Thompson, S K

    1997-01-01

    Studies of populations such as drug users encounter difficulties because the members of the populations are rare, hidden, or hard to reach. Conventionally designed large-scale surveys detect relatively few members of the populations so that estimates of population characteristics have high uncertainty. Ethnographic studies, on the other hand, reach suitable numbers of individuals only through the use of link-tracing, chain referral, or snowball sampling procedures that often leave the investigators unable to make inferences from their sample to the hidden population as a whole. In adaptive sampling, the procedure for selecting people or other units to be in the sample depends on variables of interest observed during the survey, so the design adapts to the population as encountered. For example, when self-reported drug use is found among members of the sample, sampling effort may be increased in nearby areas. Types of adaptive sampling designs include ordinary sequential sampling, adaptive allocation in stratified sampling, adaptive cluster sampling, and optimal model-based designs. Graph sampling refers to situations with nodes (for example, people) connected by edges (such as social links or geographic proximity). An initial sample of nodes or edges is selected and edges are subsequently followed to bring other nodes into the sample. Graph sampling designs include network sampling, snowball sampling, link-tracing, chain referral, and adaptive cluster sampling. A graph sampling design is adaptive if the decision to include linked nodes depends on variables of interest observed on nodes already in the sample. Adjustment methods for nonsampling errors such as imperfect detection of drug users in the sample apply to adaptive as well as conventional designs.
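
    A toy version of the adaptive cluster sampling idea described above, on a sparse grid population: whenever a sampled cell satisfies the condition (here, any reported occurrence of the trait), its four neighbours are added, and the cluster grows until no new qualifying cells appear:

        import numpy as np

        rng = np.random.default_rng(3)
        grid = (rng.random((20, 20)) < 0.05).astype(int)   # rare, hypothetical trait

        def adaptive_cluster_sample(grid, n_init=30, condition=lambda v: v > 0):
            rows, cols = grid.shape
            cells = {divmod(int(i), cols) for i in
                     rng.choice(rows * cols, size=n_init, replace=False)}
            frontier = {c for c in cells if condition(grid[c])}
            while frontier:
                r, c = frontier.pop()
                for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= nb[0] < rows and 0 <= nb[1] < cols and nb not in cells:
                        cells.add(nb)
                        if condition(grid[nb]):
                            frontier.add(nb)
            return cells

        print(len(adaptive_cluster_sample(grid)))   # final sample size varies by realization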

  4. Design and basic properties of ternary gypsum-based mortars

    NASA Astrophysics Data System (ADS)

    Doleželová, M.; Vimmrová, A.

    2017-10-01

    Ternary mortars prepared from gypsum, hydrated lime, and three types of pozzolan were designed and tested. Crushed ceramic, silica fume, and granulated blast furnace slag were used as pozzolan admixtures. The amount of pozzolan in the mixtures was determined according to the molar weight of amorphous SiO2 in the material. The samples were stored under water. Basic physical and mechanical properties were measured and compared with those of the material without pozzolan. The best results in the water environment were achieved by the samples with silica fume.

  5. 40 CFR 1065.1105 - Sampling system design.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Sampling system design. 1065.1105... Compounds § 1065.1105 Sampling system design. (a) General. We recommend that you design your SVOC batch... practical, adjust sampling times based on the emission rate of target analytes from the engine to obtain...

  6. Design of a multi-spectral imager built using the compressive sensing single-pixel camera architecture

    NASA Astrophysics Data System (ADS)

    McMackin, Lenore; Herman, Matthew A.; Weston, Tyler

    2016-02-01

    We present the design of a multi-spectral imager built using the architecture of the single-pixel camera. The architecture is enabled by the novel sampling theory of compressive sensing implemented optically using the Texas Instruments DLP™ micro-mirror array. The array not only implements spatial modulation necessary for compressive imaging but also provides unique diffractive spectral features that result in a multi-spectral, high-spatial resolution imager design. The new camera design provides multi-spectral imagery in a wavelength range that extends from the visible to the shortwave infrared without reduction in spatial resolution. In addition to the compressive imaging spectrometer design, we present a diffractive model of the architecture that allows us to predict a variety of detailed functional spatial and spectral design features. We present modeling results, architectural design and experimental results that prove the concept.

  7. Miniaturized sample preparation needle: a versatile design for the rapid analysis of smoking-related compounds in hair and air samples.

    PubMed

    Saito, Yoshihiro; Ueta, Ikuo; Ogawa, Mitsuhiro; Hayashida, Makiko; Jinno, Kiyokatsu

    2007-05-09

    A miniaturized needle extraction device has been developed as a versatile sample preparation tool for the rapid and simple analysis of smoking-related compounds in smokers' hair samples and environmental tobacco smoke. Packed with polymeric particles, the resulting particle-packed needle was employed as a miniaturized sample preparation device for the analysis of typical volatile organic compounds in tobacco smoke. By introducing a bundle of polymer-coated filaments as the extraction medium, the needle was further applied as a novel sample preparation device combining simultaneous derivatization and extraction of volatile aldehydes. Formaldehyde (FA) and acetaldehyde (AA) in smokers' breath during smoking were successfully derivatized with two derivatization reagents in the polymer-coated fiber-packed needle device, followed by separation and determination by gas chromatography (GC). Hair samples were also packed into the needle, allowing direct extraction of nicotine from the hair in a conventional GC injector. After optimization of the main experimental parameters for each technique, successful determination of several smoking-related compounds with these needle extraction methods was demonstrated.

  8. Unbiased Estimates of Variance Components with Bootstrap Procedures

    ERIC Educational Resources Information Center

    Brennan, Robert L.

    2007-01-01

    This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…

  9. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    USGS Publications Warehouse

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.

  10. An efficient adaptive sampling strategy for global surrogate modeling with applications in multiphase flow simulation

    NASA Astrophysics Data System (ADS)

    Mo, S.; Lu, D.; Shi, X.; Zhang, G.; Ye, M.; Wu, J.

    2016-12-01

    Surrogate models have shown remarkable computational efficiency in hydrological simulations involving design space exploration, sensitivity analysis, uncertainty quantification, etc. The central task in constructing a global surrogate model is to achieve a prescribed approximation accuracy with as few original model executions as possible, which requires a good design strategy to optimize the distribution of data points in the parameter domains and an effective stopping criterion to automatically terminate the design process when the desired approximation accuracy is achieved. This study proposes a novel adaptive sampling strategy, which starts from a small number of initial samples and adaptively selects additional samples by balancing collection in unexplored regions and refinement in interesting areas. We define an efficient and effective evaluation metric based on Taylor expansion to select the most promising potential samples from candidate points, and propose a robust stopping criterion based on the approximation accuracy at new points to guarantee the achievement of the desired accuracy. The numerical results of several benchmark analytical functions indicate that the proposed approach is more computationally efficient and robust than the widely used maximin distance design and two other well-known adaptive sampling strategies. The application to two complicated multiphase flow problems further demonstrates the efficiency and effectiveness of our method in constructing global surrogate models for high-dimensional and highly nonlinear problems. Acknowledgements: This work was financially supported by the National Nature Science Foundation of China grants No. 41030746 and 41172206.
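
    A skeleton of such an adaptive loop under stated assumptions: an RBF surrogate stands in for the authors' model, a plain distance score replaces their Taylor-expansion metric, and the stopping rule checks accuracy at newly evaluated points as the abstract describes:

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        f = lambda X: np.sin(8 * X[:, 0]) * X[:, 0]   # stand-in for an expensive model
        rng = np.random.default_rng(0)

        X = rng.random((5, 1))                        # small initial design
        y = f(X)
        tol = 1e-3
        for _ in range(60):                           # evaluation budget
            surrogate = RBFInterpolator(X, y)
            cand = rng.random((500, 1))               # candidate pool
            # exploration score: distance to the nearest existing sample
            score = np.min(np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=-1), axis=1)
            x_new = cand[[np.argmax(score)]]
            y_new = f(x_new)
            err = abs(surrogate(x_new)[0] - y_new[0]) # accuracy at the new point
            X, y = np.vstack([X, x_new]), np.concatenate([y, y_new])
            if err < tol:
                break
        print(len(X), "model runs used")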

  11. Analytical Design of Evolvable Software for High-Assurance Computing

    DTIC Science & Technology

    2001-02-14

    Mathematical expression for the Total Sum of Squares, which measures the variability that results when all values are treated as a combined sample coming from… primarily interested in background on software design and high-assurance computing, research in software architecture generation or evaluation… respectively. Those readers solely interested in the validation of a software design approach should at the minimum read Chapter 6 followed by Chapter…

  12. Design and calibration of a vacuum compatible scanning tunneling microscope

    NASA Technical Reports Server (NTRS)

    Abel, Phillip B.

    1990-01-01

    A vacuum compatible scanning tunneling microscope was designed and built, capable of imaging solid surfaces with atomic resolution. The single piezoelectric tube design is compact and makes use of sample mounting stubs standard to a commercially available surface analysis system. Image collection and display are computer controlled, allowing storage of images for further analysis. Calibration results from atomic-scale images are presented.

  13. National accident sampling system sample design, phases 2 and 3 : executive summary

    DOT National Transportation Integrated Search

    1979-11-01

    This report describes the Phase 2 and 3 sample design for the : National Accident Sampling System (NASS). It recommends a procedure : for the first-stage selection of Primary Sampling Units (PSU's) and : the second-stage design for the selection of a...

  14. Results of a prototype surface water network design for pesticides developed for the San Joaquin River Basin, California

    USGS Publications Warehouse

    Domagalski, Joseph L.

    1997-01-01

    A nested surface water monitoring network was designed and tested to measure variability in pesticide concentrations in the San Joaquin River and selected tributaries during the irrigation season. The network design and sampling frequency necessary for determining the variability and distribution of pesticide concentrations were tested in a prototype study. The San Joaquin River Basin, California, was sampled from April to August 1992, a period of the irrigation season with no rainfall. Orestimba Creek, which drains part of the western San Joaquin Valley, was sampled three times per week for 6 weeks, followed by once-per-week sampling for 6 weeks, and then three-times-per-week sampling for another 6 weeks. A site on the San Joaquin River near the mouth of the basin and an irrigation drain in the eastern San Joaquin Valley were sampled weekly during the entire sampling period. Pesticides were detected most often in samples collected from Orestimba Creek, suggesting that the western valley was the principal source of pesticides to the San Joaquin River during the irrigation season. Irrigation drainage water was the source of pesticides to Orestimba Creek. Pesticide concentrations in Orestimba Creek showed greater temporal variability when sampled three times per week than when sampled once a week, owing to variations in field management and irrigation. The implication for the San Joaquin River Basin (an irrigation-dominated agricultural setting) is that frequent sampling of tributary sites is necessary to describe the variability in pesticides transported to the San Joaquin River.

  15. Sampling design for an integrated socioeconomic and ecological survey by using satellite remote sensing and ordination

    PubMed Central

    Binford, Michael W.; Lee, Tae Jeong; Townsend, Robert M.

    2004-01-01

    Environmental variability is an important risk factor in rural agricultural communities. Testing models requires empirical sampling that generates data that are representative in both economic and ecological domains. Detrended correspondence analysis of satellite remote sensing data was used to design an effective low-cost sampling protocol for a field study to create an integrated socioeconomic and ecological database when no prior information on the ecology of the survey area existed. We stratified the sample for the selection of tambons from various preselected provinces in Thailand based on factor analysis of spectral land-cover classes derived from satellite data. We then conducted the survey in the sampled villages of the chosen tambons. The resulting data capture interesting variations in soil productivity and in the timing of good and bad years, which a purely random sample would likely have missed. Thus, this database will allow tests of hypotheses concerning the effect of credit on productivity, the sharing of idiosyncratic risks, and the economic influence of environmental variability. PMID:15254298

  16. Spatial Variation in Soil Properties among North American Ecosystems and Guidelines for Sampling Designs

    PubMed Central

    Loescher, Henry; Ayres, Edward; Duffy, Paul; Luo, Hongyan; Brunke, Max

    2014-01-01

    Soils are highly variable at many spatial scales, which makes designing studies to accurately estimate the mean value of soil properties across space challenging. The spatial correlation structure is critical for developing robust sampling strategies (e.g., sample size and sample spacing). Current guidelines for designing studies recommend conducting preliminary investigation(s) to characterize this structure, but these are rarely followed, and sampling designs are often defined by logistics rather than quantitative considerations. The spatial variability of soils was assessed across ∼1 ha at 60 sites. Sites were chosen to represent key US ecosystems as part of a scaling strategy deployed by the National Ecological Observatory Network. We measured soil temperature (Ts) and water content (SWC) because these properties mediate biological/biogeochemical processes below- and above-ground, and quantified spatial variability using semivariograms to estimate spatial correlation. We developed quantitative guidelines to inform sample size and sample spacing for future soil studies; e.g., 20 samples were sufficient to measure Ts to within 10% of the mean with 90% confidence at every temperate and sub-tropical site during the growing season, whereas an order of magnitude more samples were needed to meet this accuracy at some high-latitude sites. SWC was significantly more variable than Ts at most sites, resulting in at least 10× more SWC samples needed to meet the same accuracy requirement. Previous studies investigated the relationship between the mean and variability (i.e., sill) of SWC across space at individual sites across time and have often (but not always) observed the variance or standard deviation peaking at intermediate values of SWC and decreasing at low and high SWC. Finally, we quantified how far apart samples must be spaced to be statistically independent. Semivariance structures from 10 of the 12 dominant soil orders across the US were estimated, advancing our continental-scale understanding of soil behavior. PMID:24465377
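
    The standard sample size calculation behind statements like "20 samples to be within 10% of the mean with 90% confidence", shown with a hypothetical coefficient of variation:

        from math import ceil
        from scipy import stats

        cv = 0.25        # hypothetical CV of soil temperature at a site
        rel_err = 0.10   # want the sample mean within 10% of the true mean
        conf = 0.90
        z = stats.norm.ppf(1 - (1 - conf) / 2)        # ~1.645
        n = ceil((z * cv / rel_err) ** 2)             # ~17 -> on the order of 20 samples
        print(n)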

  17. Rethinking non-inferiority: a practical trial design for optimising treatment duration.

    PubMed

    Quartagno, Matteo; Walker, A Sarah; Carpenter, James R; Phillips, Patrick PJ; Parmar, Mahesh KB

    2018-06-01

    Background: Trials to identify the minimal effective treatment duration are needed in different therapeutic areas, including bacterial infections, tuberculosis and hepatitis C. However, standard non-inferiority designs have several limitations, including arbitrariness of non-inferiority margins, choice of research arms and very large sample sizes. Methods: We recast the problem of finding an appropriate non-inferior treatment duration in terms of modelling the entire duration-response curve within a pre-specified range. We propose a multi-arm randomised trial design, allocating patients to different treatment durations. We use fractional polynomials and spline-based methods to flexibly model the duration-response curve. We call this a 'Durations design'. We compare different methods in terms of a scaled version of the area between true and estimated prediction curves. We evaluate sensitivity to key design parameters, including sample size, number and position of arms. Results: A total sample size of ~500 patients divided into a moderate number of equidistant arms (5-7) is sufficient to estimate the duration-response curve within a 5% error margin in 95% of the simulations. Fractional polynomials provide similar or better results than spline-based methods in most scenarios. Conclusion: Our proposed practical randomised trial 'Durations design' shows promising performance in the estimation of the duration-response curve; subject to a pending careful investigation of its inferential properties, it provides a potential alternative to standard non-inferiority designs, avoiding many of their limitations, and yet being fairly robust to different possible duration-response curves. The trial outcome is the whole duration-response curve, which may be used by clinicians and policymakers to make informed decisions, facilitating a move away from a forced binary hypothesis testing paradigm.
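
    A least-squares sketch of fitting a second-degree fractional polynomial (FP2) duration-response curve over the standard power set, with made-up arm durations and cure proportions:

        import itertools
        import numpy as np

        def fp_design(x, p1, p2):
            """FP2 design matrix; p = 0 encodes log(x), and repeated powers use
            x**p * log(x) for the second term (Royston-Altman convention)."""
            t = lambda p: np.log(x) if p == 0 else x ** p
            second = t(p2) * np.log(x) if p1 == p2 else t(p2)
            return np.column_stack([np.ones_like(x), t(p1), second])

        powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]
        dur = np.array([8., 10, 12, 14, 16, 18, 20])             # hypothetical arm durations
        resp = np.array([.62, .74, .81, .86, .88, .90, .90])     # hypothetical cure proportions

        def sse(p1, p2):
            Xd = fp_design(dur, p1, p2)
            beta, *_ = np.linalg.lstsq(Xd, resp, rcond=None)
            return ((resp - Xd @ beta) ** 2).sum()

        p1, p2 = min(itertools.combinations_with_replacement(powers, 2),
                     key=lambda pq: sse(*pq))
        print(p1, p2)   # powers of the best-fitting duration-response curve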

  18. Report of Operation FITZWILLIAM. Volume 1, Design of Operation and Summary of Results (REDACTED)

    DTIC Science & Technology

    1948-01-01

    storage tanks (400 lbs/sq in) to permit the collection of samples of gas in the vicinity of the radioactive cloud. Radioactive analysis of the gas… Corps: Furnish ground dust sampling units and wrap-around counters. 4. Navy, Naval Research Lab. (a) Furnish ground dust sampling units… direct as necessary the collection of aircraft filters and gaseous samples from aircraft based at Kwajalein. (6) Vector Destroyer-Mine…Swee

  19. Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.

    PubMed

    Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping

    2015-06-07

    Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in oncology trials, to reduce the number of patients placed on ineffective experimental therapies. Recently, Koyama and Chen (2008) discussed how to conduct proper inference for such studies, because they found that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies where the actual second-stage sample sizes differ from the planned ones. We consider an alternative inference method based on the likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihoods. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method can be difficult to apply, the estimate based on our method appears to have certain advantages in terms of inference properties in many numerical simulations. It generally led to smaller biases and narrower confidence intervals while maintaining similar coverage. We also illustrate the two methods in a real data setting. Inference procedures used with Simon's designs almost always ignore the actual sampling plan. Reported p-values, point estimates, and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. Proper statistical inference procedures should be used.

  20. A universal reference sample derived from clone vector for improved detection of differential gene expression

    PubMed Central

    Khan, Rishi L; Gonye, Gregory E; Gao, Guang; Schwaber, James S

    2006-01-01

    Background: Using microarrays by co-hybridizing two samples labeled with different dyes enables differential gene expression measurements and comparisons across slides while controlling for within-slide variability. Typically one dye produces weaker signal intensities than the other, often causing signals to be undetectable. Undetectable spots represent a large problem for two-color microarray designs: most arrays contain at least 40% undetectable spots even when labeled with reference samples such as Stratagene's Universal Reference RNAs™. Results: We introduce a novel universal reference sample that produces strong signal for all spots on the array, increasing the average fraction of detectable spots to 97%. Maximizing detectable spots on the reference image channel also decreases the variability of microarray data, allowing for reliable detection of smaller differential gene expression changes. The reference sample is derived from sequence contained in the parental EST clone vector pT7T3D-Pac and is called vector RNA (vRNA). We show that vRNA can also be used for quality control of microarray printing and PCR product quality, detection of hybridization anomalies, and simplification of spot finding and segmentation tasks. This reference sample can be made inexpensively in large quantities as a renewable resource that is consistent across experiments. Conclusion: vRNA provides a useful universal reference that yields high signal for almost all spots on a microarray, reduces variation, and allows for comparisons between experiments and laboratories. Reference designs in general allow for large-scale multivariate experimental designs; vRNA in combination with reference designs enables systems biology microarray experiments of small, physiologically relevant changes. PMID:16677381

  1. Sampling optimization for high-speed weigh-in-motion measurements using in-pavement strain-based sensors

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert

    2015-06-01

    Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real-time. However, to use such sensors effectively, vehicles must exit the traffic stream and slow down to match their current capabilities. Hence, agencies need devices with higher vehicle passing speed capabilities to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented. Deployment configurations and settings depend mainly on the experiences of operation engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger data memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition of high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of the vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors.
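
    An illustrative back-of-the-envelope relation (not the paper's derived requirement): a tire loads a strain sensor for roughly sensor_length/speed seconds, so demanding a fixed number of samples within that pulse makes the minimum rate proportional to speed:

        def min_sample_rate_hz(speed_mps, sensor_len_m=0.07, samples_per_pulse=20):
            """Both the sensor length and the samples-per-pulse target are
            hypothetical placeholders."""
            return samples_per_pulse * speed_mps / sensor_len_m

        print(min_sample_rate_hz(30.0))   # ~8.6 kHz at 30 m/s (108 km/h)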

  2. Design and testing of coring bits on drilling lunar rock simulant

    NASA Astrophysics Data System (ADS)

    Li, Peng; Jiang, Shengyuan; Tang, Dewei; Xu, Bo; Ma, Chao; Zhang, Hui; Qin, Hongwei; Deng, Zongquan

    2017-02-01

    Coring bits are widely utilized in sampling celestial bodies, and their drilling behavior directly affects the sampling results and drilling safety. This paper introduces a lunar regolith coring bit (LRCB), a key component of the sampling tool for breaking lunar rock during the lunar soil sampling process. We establish an interaction model between the drill bit and rock at small cutting depths and determine the two parameters of the LRCB that most influence drilling loads (the forward and outward rake angles). We perform parameter screening for the LRCB with the aim of minimizing the weight on bit (WOB). We verify the drilling load performance of the LRCB after optimization; the higher the penetration per revolution (PPR), the larger the drilling loads obtained. We also perform lunar soil drilling simulations to estimate the chip-conveying and sample-coring efficiency of the LRCB. Simulation and test results are basically consistent on coring efficiency, and the chip removal efficiency of the LRCB is slightly lower than that of the HIT-H bit in simulation. This work proposes a method for the design of coring bits for subsequent extraterrestrial exploration.

  3. Estimating uncertainty in respondent-driven sampling using a tree bootstrap method.

    PubMed

    Baraff, Aaron J; McCormick, Tyler H; Raftery, Adrian E

    2016-12-20

    Respondent-driven sampling (RDS) is a network-based form of chain-referral sampling used to estimate attributes of populations that are difficult to access using standard survey tools. Although it has grown quickly in popularity since its introduction, the statistical properties of RDS estimates remain elusive. In particular, the sampling variability of these estimates has been shown to be much higher than previously acknowledged, and even methods designed to account for RDS result in misleadingly narrow confidence intervals. In this paper, we introduce a tree bootstrap method for estimating uncertainty in RDS estimates based on resampling recruitment trees. We use simulations from known social networks to show that the tree bootstrap method not only outperforms existing methods but also captures the high variability of RDS, even in extreme cases with high design effects. We also apply the method to data from injecting drug users in Ukraine. Unlike other methods, the tree bootstrap depends only on the structure of the sampled recruitment trees, not on the attributes being measured on the respondents, so correlations between attributes can be estimated as well as variability. Our results suggest that it is possible to accurately assess the high level of uncertainty inherent in RDS.
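
    A compact sketch of the resampling step, assuming the recruitment forest is stored as a list of seeds plus a child map (toy ids; re-estimation on the replicate is omitted):

        import random

        rng = random.Random(0)

        def tree_bootstrap(seeds, children):
            """One bootstrap replicate: resample seeds with replacement, then,
            walking down each tree, resample every node's recruits with
            replacement."""
            def walk(node):
                out = [node]
                kids = children.get(node, [])
                for k in [rng.choice(kids) for _ in kids]:
                    out.extend(walk(k))
                return out

            sample = []
            for seed in [rng.choice(seeds) for _ in seeds]:
                sample.extend(walk(seed))
            return sample   # node ids, possibly repeated; re-estimate on these

        seeds = ["s1", "s2"]
        children = {"s1": ["a", "b"], "a": ["c"], "s2": ["d"]}
        print(tree_bootstrap(seeds, children))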

  4. Grouping methods for estimating the prevalences of rare traits from complex survey data that preserve confidentiality of respondents.

    PubMed

    Hyun, Noorie; Gastwirth, Joseph L; Graubard, Barry I

    2018-03-26

    Originally, 2-stage group testing was developed for efficiently screening individuals for a disease. In response to the HIV/AIDS epidemic, 1-stage group testing was adopted for estimating the prevalences of a single trait or multiple traits by testing groups of size q, so that individuals were not tested. This paper extends the methodology of 1-stage group testing to surveys with sample-weighted complex multistage-cluster designs. Sample-weighted generalized estimating equations are used to estimate the prevalences of categorical traits while accounting for the error rates inherent in the tests. Two difficulties arise when using group testing in complex samples: (1) how does one weight the results of the test on each group, given that the sample weights will differ among observations in the same group? Furthermore, if the sample weights are related to positivity of the diagnostic test, then group-level weighting is needed to reduce bias in the prevalence estimation. (2) How does one form groups that will allow accurate estimation of the standard errors of prevalence estimates under multistage-cluster sampling, allowing for intracluster correlation of the test results? We study 5 different grouping methods to address the weighting and cluster sampling aspects of complex designed samples. Finite sample properties of the estimators of prevalences, variances, and confidence interval coverage for these grouping methods are studied using simulations. National Health and Nutrition Examination Survey data are used to illustrate the methods.
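
    For the unweighted building block, a method-of-moments sketch: with groups of size q and an imperfect test, P(group positive) = Se·(1 − (1 − p)^q) + (1 − Sp)·(1 − p)^q, which inverts to a prevalence estimate (the paper replaces this with survey-weighted GEEs):

        def prevalence_from_groups(pos_frac, q, se=0.99, sp=0.98):
            """pos_frac: observed fraction of positive groups; se/sp assumed."""
            clean = (se - pos_frac) / (se + sp - 1)    # estimate of (1 - p)^q
            clean = min(max(clean, 0.0), 1.0)          # keep in [0, 1]
            return 1 - clean ** (1 / q)

        print(prevalence_from_groups(pos_frac=0.30, q=10))   # ~0.034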

  5. 30 CFR 70.208 - Bimonthly sampling; designated areas.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Bimonthly sampling; designated areas. 70.208... SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-UNDERGROUND COAL MINES Sampling Procedures § 70.208 Bimonthly sampling; designated areas. (a) Each operator shall take one valid respirable dust sample from...

  6. 30 CFR 70.208 - Bimonthly sampling; designated areas.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Bimonthly sampling; designated areas. 70.208... SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-UNDERGROUND COAL MINES Sampling Procedures § 70.208 Bimonthly sampling; designated areas. (a) Each operator shall take one valid respirable dust sample from...

  7. On the importance of incorporating sampling weights in ...

    EPA Pesticide Factsheets

    Occupancy models are used extensively to assess wildlife-habitat associations and to predict species distributions across large geographic regions. Occupancy models were developed as a tool to properly account for imperfect detection of a species. Current guidelines on survey design requirements for occupancy models focus on the number of sample units and the pattern of revisits to a sample unit within a season. We focus on the sampling design, or how the sample units are selected in geographic space (e.g., stratified, simple random, unequal probability, etc.). In a probability design, each sample unit has a sample weight which quantifies the number of sample units it represents in the finite (oftentimes areal) sampling frame. We demonstrate the importance of including sampling weights in occupancy model estimation when the design is not a simple random sample or equal probability design. We assume a finite areal sampling frame as proposed for a national bat monitoring program. We compare several unequal and equal probability designs and varying sampling intensity within a simulation study. We found the traditional single-season occupancy model produced biased estimates of occupancy and lower confidence interval coverage rates compared to occupancy models that accounted for the sampling design. We also discuss how our findings inform the analyses proposed for the nascent North American Bat Monitoring Program and other collaborative synthesis efforts that propose h…
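
    One way to fold design weights into the single-season model is a weighted pseudo-likelihood; the sketch below fits one to simulated detection histories with hypothetical unequal weights (an illustration, not the analysis in the source):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n, J, psi_true, p_true = 200, 3, 0.4, 0.5
        z = rng.random(n) < psi_true                     # true occupancy states
        y = (rng.random((n, J)) < p_true) * z[:, None]   # detections over J visits
        w = rng.uniform(1, 5, n)                         # hypothetical design weights

        def nll(theta):
            psi, p = 1 / (1 + np.exp(-np.asarray(theta)))  # logit -> probability
            det = y.sum(axis=1)
            lik = np.where(det > 0,
                           psi * p ** det * (1 - p) ** (J - det),
                           psi * (1 - p) ** J + (1 - psi))
            return -(w * np.log(np.clip(lik, 1e-300, None))).sum()

        fit = minimize(nll, [0.0, 0.0])
        psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
        print(psi_hat, p_hat)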

  8. Optimizing methods and dodging pitfalls in microbiome research.

    PubMed

    Kim, Dorothy; Hofstaedter, Casey E; Zhao, Chunyu; Mattei, Lisa; Tanes, Ceylan; Clarke, Erik; Lauder, Abigail; Sherrill-Mix, Scott; Chehoud, Christel; Kelsen, Judith; Conrad, Máire; Collman, Ronald G; Baldassano, Robert; Bushman, Frederic D; Bittinger, Kyle

    2017-05-05

    Research on the human microbiome has yielded numerous insights into health and disease, but also has resulted in a wealth of experimental artifacts. Here, we present suggestions for optimizing experimental design and avoiding known pitfalls, organized in the typical order in which studies are carried out. We first review best practices in experimental design and introduce common confounders such as age, diet, antibiotic use, pet ownership, longitudinal instability, and microbial sharing during cohousing in animal studies. Typically, samples will need to be stored, so we provide data on best practices for several sample types. We then discuss design and analysis of positive and negative controls, which should always be run with experimental samples. We introduce a convenient set of non-biological DNA sequences that can be useful as positive controls for high-volume analysis. Careful analysis of negative and positive controls is particularly important in studies of samples with low microbial biomass, where contamination can comprise most or all of a sample. Lastly, we summarize approaches to enhancing experimental robustness by careful control of multiple comparisons and to comparing discovery and validation cohorts. We hope the experimental tactics summarized here will help researchers in this exciting field advance their studies efficiently while avoiding errors.

  9. A new apparatus design for high temperature (up to 950 °C) quasi-elastic neutron scattering in a controlled gaseous environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Wahish, Amal; Armitage, D.; Hill, B.

    A design for a sample cell system suitable for high temperature Quasi-Elastic Neutron Scattering (QENS) experiments is presented. The apparatus was developed at the Spallation Neutron Source in Oak Ridge National Lab, where it is currently in use. The design provides a special sample cell environment under controlled humid or dry gas flow over a wide range of temperatures up to 950 °C. Using such a cell, chemical, dynamical, and physical changes can be studied in situ under various operating conditions. While the cell combined with a portable automated gas environment system is especially useful for in situ studies of microscopic dynamics under operational conditions similar to those of solid oxide fuel cells, it can additionally be used to study a wide variety of materials, such as high temperature proton conductors. The cell can also be used in many different neutron experiments when a suitable sample holder material is selected. The sample cell system has recently been used to reveal fast dynamic processes in quasi-elastic neutron scattering experiments, which standard probes (such as electrochemical impedance spectroscopy) could not detect. In this work, we outline the design of the sample cell system and present results demonstrating its abilities in high temperature QENS experiments.

  10. A new apparatus design for high temperature (up to 950°C) quasi-elastic neutron scattering in a controlled gaseous environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    al-Wahish, Amal; Armitage, D.; al-Binni, U.

    Our design for a sample cell system suitable for high temperature Quasi-Elastic Neutron Scattering (QENS) experiments is presented. The apparatus was developed at the Spallation Neutron Source in Oak Ridge National Lab, where it is currently in use. The design provides a special sample cell environment under controlled humid or dry gas flow over a wide range of temperatures up to 950 °C. Using such a cell, chemical, dynamical, and physical changes can be studied in situ under various operating conditions. And while the cell combined with a portable automated gas environment system is especially useful for in situ studies of microscopic dynamics under operational conditions similar to those of solid oxide fuel cells, it can additionally be used to study a wide variety of materials, such as high temperature proton conductors. The cell can also be used in many different neutron experiments when a suitable sample holder material is selected. Finally, the sample cell system has recently been used to reveal fast dynamic processes in quasi-elastic neutron scattering experiments, which standard probes (such as electrochemical impedance spectroscopy) could not detect. In this work, we outline the design of the sample cell system and present results demonstrating its abilities in high temperature QENS experiments.

  11. Low-Pressure Testing of the Mars Science Laboratory’s Solid Sampling System: Test Methods and Preliminary Results

    NASA Astrophysics Data System (ADS)

    Mukherjee, S.; von der Heydt, M.; Hanson, C.; Jandura, L.

    2009-12-01

    The Mars Science Laboratory mission is scheduled to launch in 2011 with an extensive suite of in situ science instruments. Acquiring, processing, and delivering appropriate samples of rock and martian regolith to the instruments is a critical component in realizing the science capability of these payload elements. However, there are a number of challenges in validating the design of these systems. In particular, differences in the environment (atmospheric pressure and composition, temperature, gravity), target materials (variation in rock and soil properties), and state of the hardware (electrical potential, particulate coatings) may affect sampling performance. To better understand the end-to-end system and allow development of mitigation strategies if necessary, early testing of high-fidelity engineering models of the hardware in the solid sample chain is being conducted. The components of the sample acquisition, processing, and delivery chain that will be tested are the drill, scoop, sieves, portioners, and instrument inlet funnels. An evaluation of the environmental parameter space was conducted to identify a subset that may have significant effects on sampling performance and cannot be well bounded by analysis. Accordingly, support equipment to enable testing at Mars surface pressures (5-10 Torr) with carbon dioxide was designed and built. A description of the testing set-up, investigations, and preliminary results will be presented.

  12. Sampling Designs in Qualitative Research: Making the Sampling Process More Public

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Leech, Nancy L.

    2007-01-01

    The purpose of this paper is to provide a typology of sampling designs for qualitative researchers. We introduce the following sampling strategies: (a) parallel sampling designs, which represent a body of sampling strategies that facilitate credible comparisons of two or more different subgroups that are extracted from the same levels of study;…

  13. TEMPUS: A facility for containerless electromagnetic processing onboard spacelab

    NASA Technical Reports Server (NTRS)

    Lenski, H.; Willnecker, R.

    1990-01-01

    The electromagnetic containerless processing facility TEMPUS was recently assigned to a flight on the IML-2 mission. In comparison to the TEMPUS facility already flown on a sounding rocket, several improvements had to be implemented, in particular related to: safety; resource management; and the possibility of processing different samples with different requirements in one mission. The basic design of this facility as well as the expected processing capabilities are presented. Two operational aspects turned out to strongly influence the facility design: control of sample motion (first experimental results indicate that crew or ground interaction will be necessary to minimize residual sample motions during processing) and exchange of RF coils (during processing in vacuum, evaporated sample material will condense on the cold surfaces and may force a coil exchange when a critical thickness is exceeded).

  14. Comparative analysis between saliva and buccal swabs as source of DNA: lesson from HLA-B*57:01 testing.

    PubMed

    Cascella, Raffaella; Stocchi, Laura; Strafella, Claudia; Mezzaroma, Ivano; Mannazzu, Marco; Vullo, Vincenzo; Montella, Francesco; Parruti, Giustino; Borgiani, Paola; Sangiuolo, Federica; Novelli, Giuseppe; Pirazzoli, Antonella; Zampatti, Stefania; Giardina, Emiliano

    2015-01-01

    Our work aimed to identify the optimal DNA source for pharmacogenetic assays, such as screening for the HLA-B*57:01 allele. One saliva sample and four buccal swab samples were taken from each of 104 patients. All samples were stored under different time and temperature conditions and then genotyped for the HLA-B*57:01 allele by SSP-PCR and classical/capillary electrophoresis. The genotyping analysis showed different performance rates depending on the storage conditions of the samples. Given our results, the buccal swab proved more resistant and more stable over time than saliva. Our investigation designates the buccal swab as the optimal DNA source for pharmacogenetic assays in terms of resistance, low infectivity, minimally invasive and easy sampling, and safe transport to centralized medical centers providing specialized pharmacogenetic tests.

  15. Planning and setting objectives in field studies: Chapter 2

    USGS Publications Warehouse

    Fisher, Robert N.; Dodd, C. Kenneth

    2016-01-01

    This chapter enumerates the steps required in designing and planning field studies on the ecology and conservation of reptiles, as these involve a high level of uncertainty and risk. To this end, the chapter differentiates between goals (descriptions of what one intends to accomplish) and objectives (the measurable steps required to achieve the established goals). Thus, meeting a specific goal may require many objectives. It may not be possible to define some of them until certain experiments have been conducted; often evaluations of sampling protocols are needed to increase certainty in the biological results. And if sampling locations are fixed and sampling events are repeated over time, then both study-specific covariates and sampling-specific covariates should exist. Additionally, other critical design considerations for field study include obtaining permits, as well as researching ethics and biosecurity issues.

  16. Analysis of pre-service physics teacher skills designing simple physics experiments based technology

    NASA Astrophysics Data System (ADS)

    Susilawati; Huda, C.; Kurniawan, W.; Masturi; Khoiri, N.

    2018-03-01

    Pre-service physics teachers' skill in designing simple experimental sets is very important for strengthening students' conceptual understanding and practicing scientific skills in the laboratory. This study describes the skills of physics students in designing simple technology-based experiments. The experimental design stages include simple tool design and sensor modification. The research method used is a descriptive method with a sample of 25 students and 5 variations of simple physics experimental designs. Based on the results of interviews and observations, pre-service physics teachers' skills in designing simple technology-based physics experiments are good. Observations show that their skill in designing simple experiments is good, while sensor modification and application are still lacking. This suggests that pre-service physics teachers still need considerable practice in designing physics experiments using sensor modifications. Interviews indicate that students are highly motivated to perform laboratory activities actively and have high curiosity to become skilled at making simple practicum tools for physics experiments.

  17. A Typology of Mixed Methods Sampling Designs in Social Science Research

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Collins, Kathleen M. T.

    2007-01-01

    This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…

  18. An Outcomes-Based Assessment of Quality of Life in Social Services

    ERIC Educational Resources Information Center

    Gomez, Laura Elisabet; Arias, Benito; Verdugo, Miguel Angel; Navas, Patricia

    2012-01-01

    The goal of this article consists of describing the calibration of an instrument to assess quality of life-related personal outcomes using Rasch analysis. The sample was composed of 3,029 recipients of social services from Catalonia (Spain) and was selected using a probabilistic multistage sample design. Results related to unidimensionality, item…

  19. NEON terrestrial field observations: designing continental scale, standardized sampling

    Treesearch

    R. H. Kao; C.M. Gibson; R. E. Gallery; C. L. Meier; D. T. Barnett; K. M. Docherty; K. K. Blevins; P. D. Travers; E. Azuaje; Y. P. Springer; K. M. Thibault; V. J. McKenzie; M. Keller; L. F. Alves; E. L. S. Hinckley; J. Parnell; D. Schimel

    2012-01-01

    Rapid changes in climate and land use and the resulting shifts in species distributions and ecosystem functions have motivated the development of the National Ecological Observatory Network (NEON). Integrating across spatial scales from ground sampling to remote sensing, NEON will provide data for users to address ecological responses to changes in climate, land use,...

  20. Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power

    PubMed Central

    Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon

    2016-01-01

    An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%–155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%–71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power. PMID:28479943
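
    To make the mechanism concrete, the following minimal Python sketch (all values hypothetical) simulates direct truncation on a pretest covariate and shows how the pretest-posttest correlation, and hence the variance explained by the covariate, is attenuated in the selected sample:

      import numpy as np

      rng = np.random.default_rng(0)
      rho = 0.7                      # pretest-posttest correlation, full population

      # Large bivariate-normal population of (pretest, posttest) scores.
      cov = [[1.0, rho], [rho, 1.0]]
      pre, post = rng.multivariate_normal([0.0, 0.0], cov, size=100_000).T

      # Direct range restriction: select only examinees below the 25th percentile.
      sel = pre < np.quantile(pre, 0.25)
      r_full = np.corrcoef(pre, post)[0, 1]
      r_sel = np.corrcoef(pre[sel], post[sel])[0, 1]

      # Residual variance grows as 1 - r**2, so the sample size needed for fixed
      # power in an ANCOVA-style test grows roughly by this ratio.
      print(f"r full {r_full:.3f}  r restricted {r_sel:.3f}")
      print(f"approx. N inflation: {(1 - r_sel**2) / (1 - r_full**2):.2f}x")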

  2. Time-dependent classification accuracy curve under marker-dependent sampling.

    PubMed

    Zhu, Zhaoyin; Wang, Xiaofei; Saha-Chaudhuri, Paramita; Kosinski, Andrzej S; George, Stephen L

    2016-07-01

    Evaluating the classification accuracy of a candidate biomarker signaling the onset of disease or disease status is essential for medical decision making. A good biomarker would accurately identify the patients who are likely to progress or die at a particular time in the future or who are in urgent need of active treatment. To assess the performance of a candidate biomarker, the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) are commonly used. In many cases, the standard simple random sampling (SRS) design used for biomarker validation studies is costly and inefficient. In order to improve the efficiency and reduce the cost of biomarker validation, marker-dependent sampling (MDS) may be used. In an MDS design, the selection of patients to assess true survival time depends on the result of a biomarker assay. In this article, we introduce a nonparametric estimator for time-dependent AUC under an MDS design. The consistency and the asymptotic normality of the proposed estimator are established. Simulation shows the unbiasedness of the proposed estimator and a significant efficiency gain of the MDS design over the SRS design. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
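
    The paper's nonparametric time-dependent AUC estimator is not reproduced here, but the design principle behind MDS — selecting patients with biomarker-dependent probabilities and then weighting by the inverse of those probabilities — can be sketched as follows (all quantities hypothetical):

      import numpy as np

      rng = np.random.default_rng(1)
      N = 50_000
      marker = rng.normal(size=N)

      # Hypothetical risk model: higher marker value -> higher event risk.
      p_event = 1 / (1 + np.exp(-(marker - 1.0)))
      event = rng.random(N) < p_event

      # MDS: oversample the tails of the marker distribution for verification.
      p_sel = np.where(np.abs(marker) > 1.0, 0.8, 0.1)
      sampled = rng.random(N) < p_sel

      naive = event[sampled].mean()                 # ignores the design: biased
      w = 1.0 / p_sel[sampled]                      # inverse selection probabilities
      ipw = np.average(event[sampled], weights=w)   # design-consistent

      print(f"true {event.mean():.3f}  naive {naive:.3f}  weighted {ipw:.3f}")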

  3. Development of a Novel Self-Enclosed Sample Preparation Device for DNA/RNA Isolation in Space

    NASA Technical Reports Server (NTRS)

    Zhang, Ye; Mehta, Satish K.; Pensinger, Stuart J.; Pickering, Karen D.

    2011-01-01

    Modern biology techniques offer potential for a wide range of molecular, cellular, and biochemical applications in space, including detection of infectious pathogens and environmental contamination, monitoring of drug-resistant microbes and dangerous mutations, and identification of new microbial phenotypes and new species. However, one of the major technological obstacles to enabling these technologies in space is the lack of devices for sample preparation in the space environment. To overcome this obstacle, we constructed a prototype DNA/RNA isolation device based on our novel designs documented in the NASA New Technology Reporting System (MSC-24811-1/3-1). This device is self-enclosed and pipette-free, purposely designed for use in the absence of gravity. Our design can also be easily modified to prepare samples in space for other applications, such as flow cytometry, immunostaining, cell separation, purification and separation of samples according to size and charge, and chemical labeling of samples. The prototype of our DNA/RNA isolation device was tested for efficiency of DNA and RNA isolation from various cell types for PCR analysis. The purity and integrity of the purified DNA and RNA were determined as well. Results showed that our DNA/RNA isolation device offers efficiency and quality similar to samples prepared using the standard laboratory protocol.

  4. Novel Design for Centrifugal Countercurrent Chromatography: II. Studies on Novel Geometries of Zigzag Toroidal Tubing

    PubMed Central

    Yang, Yi; Aisa, Haji Akber; Ito, Yoichiro

    2009-01-01

    The toroidal column using a zigzag pattern has been improved in both retention of the stationary phase and peak resolution. To improve these further, a series of novel geometric designs of tubing (plain, mid-clamping, flattened, and flat-twisted) was evaluated for performance in CCC. The results showed that tubing flattened vertically against the centrifugal force (vert-flattened tubing) produced the best peak resolution among them. Using vert-flattened tubing, a series of experiments was performed to study the effects of column capacity and sample size. The results indicated that a 0.25 ml capacity column is ideal for the analysis of small-quantity samples. PMID:20454530

  5. Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure

    USGS Publications Warehouse

    Salehi, M.; Smith, D.R.

    2005-01-01

    Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.
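
    A minimal sketch of the selection logic only, with a hypothetical rare 0/1 population; unbiased estimation under this design relies on Murthy's estimator, which is not implemented here:

      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical rare 0/1 population (1 = rare event present).
      pop = np.zeros(1000, dtype=int)
      pop[rng.choice(1000, size=25, replace=False)] = 1

      def two_stage_sequential(pop, n1=50, n2=50):
          """First stage: SRS of n1 units. If the rare event is detected,
          add a second SRS of n2 units from the remainder; the adaptive step
          depends on observed values but needs no spatial neighborhoods."""
          units = rng.choice(len(pop), size=n1, replace=False)
          if pop[units].sum() > 0:
              rest = np.setdiff1d(np.arange(len(pop)), units)
              units = np.concatenate([units, rng.choice(rest, n2, replace=False)])
          return units

      s = two_stage_sequential(pop)
      print(len(s), "units sampled,", int(pop[s].sum()), "rare-event units found")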

  6. 23 CFR Appendix A to Part 1340 - Sample Design

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    23 CFR Part 1340, Observational Surveys of Seat Belt Use, Appendix A—Sample Design: Following is a description of a sample design that meets the final survey guidelines and, based upon NHTSA's experience in...

  7. Design and development of novel sensors for the determination of fluoride in water.

    PubMed

    Pillai, Aji Balan; Varghese, Benjamin; Madhusoodanan, Kottarathil Naduvil

    2012-01-03

    The presence of high fluoride content in drinking water is a serious health hazard, as it may lead to fluorosis, a serious bone disease. Taking into account the importance of fluoride, an attempt has been made to design and develop simple, low-cost, easy-to-use sensors for the in situ determination of fluoride in water. Two novel absorption sensors have been fabricated and characterized. The first is a light-emitting-diode-based sensor and the second is an evanescent-wave fiber-optic sensor. Reagents prepared using standard methods are mixed with the water sample containing fluoride ion, and the peak absorption wavelength is determined. Suitable light sources and photodetectors have been selected, and the sensors are designed to give accurate results over a wide range. A microcontroller-based setup has been fabricated for recording the concentration of the measured sample in parts per billion. Both sensors have been used to analyze water samples collected from various sources and regions. The results obtained have been compared with those from a spectrophotometer used for fluoride measurement and found to be in one-to-one correspondence.
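
    The concentration readout of such absorption sensors typically rests on a Beer-Lambert calibration curve. A minimal sketch with made-up standards, illustrating how an absorbance reading is converted to parts per billion:

      import numpy as np

      # Hypothetical calibration standards: fluoride concentration (ppb) vs
      # absorbance at the reagent's peak wavelength (Beer-Lambert: absorbance
      # is approximately linear in concentration over the working range).
      conc_ppb = np.array([0.0, 200.0, 400.0, 600.0, 800.0, 1000.0])
      absorb = np.array([0.012, 0.101, 0.188, 0.279, 0.362, 0.455])

      slope, intercept = np.polyfit(conc_ppb, absorb, 1)

      def to_ppb(a):
          """Invert the calibration line to report concentration in ppb."""
          return (a - intercept) / slope

      print(f"sample reading A = 0.240 -> {to_ppb(0.240):.0f} ppb")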

  8. Simultaneous derivatization/preconcentration of volatile aldehydes with a miniaturized fiber-packed sample preparation device designed for gas chromatographic analysis.

    PubMed

    Saito, Yoshihiro; Ueta, Ikuo; Ogawa, Mitsuhiro; Jinno, Kiyokatsu

    2006-10-01

    A novel in-needle sample preparation device has been developed for the determination of volatile aldehydes in gaseous samples. The needle device is designed for the gas chromatographic (GC) analysis of aldehydes and ketones commonly found in typical in-house environments. In order to prepare the extraction device, a bundle of polymer-coated filaments was longitudinally packed into a specially designed needle. Derivatization reactions were prompted by 2,4-dinitrophenylhydrazine (DNPH) included in the needle, so the aldehydes and ketones were derivatized to the corresponding hydrazones and extracted with the extraction needle. A reproducible extraction needle preparation process was established, along with a repeatable derivatization/extraction process that ensures the successful determination of aldehydes. The storage performance of the extraction needle was also evaluated at room temperature for three days. The results demonstrate the successful application of the fiber-packed extraction device to the preparation of a gaseous sample of aldehydes, and the future possibility of applying the extraction device to the analysis of in-house environments.

  9. Rapid evaluation of high-performance systems

    NASA Astrophysics Data System (ADS)

    Forbes, G. W.; Ruoff, J.

    2017-11-01

    System assessment for design often involves averages, such as rms wavefront error, that are estimated by ray tracing through a sample of points within the pupil. Novel general-purpose sampling and weighting schemes are presented and it is also shown that optical design can benefit from tailored versions of these schemes. It turns out that the type of Gaussian quadrature that has long been recognized for efficiency in this domain requires about 40-50% more ray tracing to attain comparable accuracy to generic versions of the new schemes. Even greater efficiency gains can be won, however, by tailoring such sampling schemes to the optical context where azimuthal variation in the wavefront is generally weaker than the radial variation. These new schemes are special cases of what is known in the mathematical world as cubature. Our initial results also led to the consideration of simpler sampling configurations that approximate the newfound cubature schemes. We report on the practical application of a selection of such schemes and make observations that aid in the discovery of novel cubature schemes relevant to optical design of systems with circular pupils.
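
    As a rough illustration of the idea (not the authors' tailored cubature schemes), the sketch below estimates pupil averages using Gauss-Legendre nodes in the squared radius combined with equally spaced azimuthal samples; the example wavefront and node counts are hypothetical:

      import numpy as np

      def disk_mean(f, n_rad=4, n_az=8):
          """Mean of f(r, theta) over the unit disk via Gauss-Legendre nodes in
          u = r**2 plus equally spaced azimuthal samples; exact for polynomial
          wavefronts of modest degree."""
          x, w = np.polynomial.legendre.leggauss(n_rad)
          u, wu = (x + 1) / 2, w / 2                 # map [-1, 1] -> [0, 1]
          theta = 2 * np.pi * np.arange(n_az) / n_az
          vals = f(np.sqrt(u)[:, None], theta[None, :])
          return (wu[:, None] * vals).sum() / n_az

      # Example: defocus-plus-astigmatism-like wavefront (hypothetical).
      W = lambda r, t: (2 * r**2 - 1) + 0.3 * r**2 * np.cos(2 * t)
      mean_W = disk_mean(W)
      mean_W2 = disk_mean(lambda r, t: W(r, t) ** 2)
      print(f"rms wavefront error ~ {np.sqrt(mean_W2 - mean_W**2):.4f}")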

  10. A minimax technique for time-domain design of preset digital equalizers using linear programming

    NASA Technical Reports Server (NTRS)

    Vaughn, G. L.; Houts, R. C.

    1975-01-01

    A linear programming technique is presented for the design of a preset finite-impulse response (FIR) digital filter to equalize the intersymbol interference (ISI) present in a baseband channel with known impulse response. A minimax technique is used which minimizes the maximum absolute error between the actual received waveform and a specified raised-cosine waveform. Transversal and frequency-sampling FIR digital filters are compared as to the accuracy of the approximation, the resultant ISI and the transmitted energy required. The transversal designs typically have slightly better waveform accuracy for a given distortion; however, the frequency-sampling equalizer uses fewer multipliers and requires less transmitted energy. A restricted transversal design is shown to use the least number of multipliers at the cost of a significant increase in energy and loss of waveform accuracy at the receiver.
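
    A minimal sketch of the minimax formulation, assuming a made-up channel impulse response and a pure-delay target in place of the paper's raised-cosine waveform; the taps c and the peak error t are found jointly with a standard LP solver:

      import numpy as np
      from scipy.optimize import linprog

      h = np.array([0.1, 1.0, 0.4, -0.2])     # hypothetical channel with ISI
      L = 9                                   # equalizer taps
      M = len(h) + L - 1                      # length of equalized response

      # Desired response: a pure delay (zero ISI at the output).
      d = np.zeros(M)
      d[M // 2] = 1.0

      # Convolution matrix so that (h * c)[n] = (A @ c)[n].
      A = np.zeros((M, L))
      for i in range(M):
          for j in range(L):
              if 0 <= i - j < len(h):
                  A[i, j] = h[i - j]

      # Minimax LP: minimize t subject to |A c - d| <= t elementwise.
      cost = np.r_[np.zeros(L), 1.0]
      A_ub = np.block([[A, -np.ones((M, 1))], [-A, -np.ones((M, 1))]])
      b_ub = np.r_[d, -d]
      res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                    bounds=[(None, None)] * L + [(0, None)])
      print(f"peak residual ISI = {res.x[-1]:.4f}")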

  11. Sampling considerations for disease surveillance in wildlife populations

    USGS Publications Warehouse

    Nusser, S.M.; Clark, W.R.; Otis, D.L.; Huang, L.

    2008-01-01

    Disease surveillance in wildlife populations involves detecting the presence of a disease, characterizing its prevalence and spread, and subsequent monitoring. A probability sample of animals selected from the population and corresponding estimators of disease prevalence and detection provide estimates with quantifiable statistical properties, but this approach is rarely used. Although wildlife scientists often assume probability sampling and random disease distributions to calculate sample sizes, convenience samples (i.e., samples of readily available animals) are typically used, and disease distributions are rarely random. We demonstrate how landscape-based simulation can be used to explore properties of estimators from convenience samples in relation to probability samples. We used simulation methods to model what is known about the habitat preferences of the wildlife population, the disease distribution, and the potential biases of the convenience-sample approach. Using chronic wasting disease in free-ranging deer (Odocoileus virginianus) as a simple illustration, we show that using probability sample designs with appropriate estimators provides unbiased surveillance parameter estimates but that the selection bias and coverage errors associated with convenience samples can lead to biased and misleading results. We also suggest practical alternatives to convenience samples that mix probability and convenience sampling. For example, a sample of land areas can be selected using a probability design that oversamples areas with larger animal populations, followed by harvesting of individual animals within sampled areas using a convenience sampling method.
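
    The bias mechanism is straightforward to reproduce. A minimal sketch with a hypothetical one-dimensional landscape in which disease prevalence and sampling accessibility are concentrated in the same place:

      import numpy as np

      rng = np.random.default_rng(3)
      N = 20_000

      # Hypothetical landscape: disease clusters near x = 0, and road access
      # (hence convenience sampling effort) is concentrated there as well.
      x = rng.uniform(0, 10, size=N)              # animal locations (1-D)
      disease = rng.random(N) < 0.25 * np.exp(-x)

      srs = rng.choice(N, 500, replace=False)     # probability sample
      p_conv = np.exp(-x) / np.exp(-x).sum()      # accessibility-biased design
      conv = rng.choice(N, 500, replace=False, p=p_conv)

      print(f"true prevalence  {disease.mean():.4f}")
      print(f"SRS estimate     {disease[srs].mean():.4f}")    # unbiased
      print(f"convenience est. {disease[conv].mean():.4f}")   # biased upward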

  12. Test and evaluation of the Argonne BPAC10 Series air chamber calorimeter designed for 20 minute measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perry, R.B.; Fiarman, S.; Jung, E.A.

    1990-10-01

    This paper is the final report on DOE-OSS Task ANLE88002, "Fast Air Chamber Calorimetry." The task objective was to design, construct, and test an isothermal air chamber calorimeter for plutonium assay of bulk samples that would meet the following requirements for sample power measurement: average sample measurement time of less than 20 minutes; measurement of samples with power output up to 10 W; precision better than 1% RSD for sample power greater than 1 W; and precision better than 0.010 W SD for sample power less than 1 W. This report gives a description of the calorimeter hardware and software and discusses the test results. The instrument operating procedure, included as an appendix, gives examples of typical input/output and explains the menu-driven software. Sample measurement time of less than 20 minutes was attained by pre-equilibration of the samples in low-cost precision preheaters and by prediction of equilibrium measurements. Tests at the TA55 Plutonium Facility at Los Alamos National Laboratory, on typical samples, indicate that the instrument meets all the measurement requirements.

  13. Performance Evaluation of Particle Sampling Probes for Emission Measurements of Aircraft Jet Engines

    NASA Technical Reports Server (NTRS)

    Lee, Poshin; Chen, Da-Ren; Sanders, Terry (Technical Monitor)

    2001-01-01

    Considerable attention has recently been paid to the impact of aircraft-produced aerosols upon the global climate. Sampling particles directly from jet engines has been performed by different research groups in the U.S. and Europe. However, a large variation has been observed among published data on the conversion efficiency and emission indexes of jet engines. The variation surely results from differences in test engine types, engine operating conditions, and environmental conditions. The other factor that could produce the observed variation is the performance of the sampling probes used; unfortunately, this is often neglected in the jet engine community, and particle losses during the sampling, transport, and dilution processes are often not discussed or considered in the literature. To address this issue, we evaluated the performance of one sampling probe by challenging it with monodisperse particles. A significant performance difference was observed for the sampling probe under different temperature conditions. Thermophoretic effects, non-isokinetic sampling, and turbulent losses contribute to the loss of particles in sampling probes. The results of this study show that particle loss can be dramatic if the sampling probe is not well designed. Further, the results allow one to recover the actual size distributions emitted from jet engines.

  14. The Role of Contexts and Teacher's Questioning to Enhance Students' Thinking

    ERIC Educational Resources Information Center

    Widjaja, Wanty; Dolk, Maarten; Fauzan, Ahmad

    2010-01-01

    This paper discusses results from a design research study in line with Realistic Mathematics Education (RME). Daily cycles of design, classroom experiments, and retrospective analysis were enacted over five days of work on division by fractions. Data consist of episodes of video-recorded classroom discussions and samples of students' work. The focus of…

  15. Recent progress in the design and clinical development of electronic-nose technologies

    Treesearch

    Dan Wilson

    2016-01-01

    Electronic-nose (e-nose) devices are instruments designed to detect and discriminate between precise complex gaseous mixtures of volatile organic compounds derived from specific organic sources, such as clinical test samples from patients, based on electronic aroma signature patterns (distinct digital sensor responses) resulting from the combined outputs of a...

  16. Operation of a sampling train for the analysis of environmental species in coal gasification gas-phase process streams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pochan, M.J.; Massey, M.J.

    1979-02-01

    This report discusses the results of actual raw product gas sampling efforts and includes: rationale for raw product gas sampling efforts; design and operation of the CMU gas sampling train; development and analysis of a sampling train data base; and conclusions and future application of results. The results of sampling activities at the CO₂-Acceptor and Hygas pilot plants proved that: the CMU gas sampling train is a valid instrument for characterization of environmental parameters in coal gasification gas-phase process streams; depending on the particular process configuration, the CMU gas sampling train can reduce gasifier effluent characterization activity to a single location in the raw product gas line; and, in contrast to the slower operation of the EPA SASS Train, CMU's gas sampling train can collect representative effluent data at a rapid rate (approx. 2 points per hour) consistent with the rate of change of process variables, and thus function as a tool for process engineering-oriented analysis of environmental characteristics.

  17. Design of gefitinib-loaded poly (l-lactic acid) microspheres via a supercritical anti-solvent process for dry powder inhalation.

    PubMed

    Lin, Qing; Liu, Guijin; Zhao, Ziyi; Wei, Dongwei; Pang, Jiafeng; Jiang, Yanbin

    2017-10-30

    To develop a safer, more stable, and more potent formulation of gefitinib (GFB), microspheres of GFB encapsulated in poly(l-lactic acid) (PLLA) were prepared by supercritical anti-solvent (SAS) technology in this study. Operating factors were optimized using a selected OA16(4^5) orthogonal array design, and the properties of the raw material and the SAS-processed samples were characterized by different methods. The results show that the GFB-loaded PLLA particles were spherical, with a smaller and narrower particle size distribution compared with raw GFB. The optimal GFB-loaded PLLA sample showed less aggregation, the highest GFB loading (15.82%), and a smaller size (D50 = 2.48 μm, which meets the size requirement for dry powder inhalers). The XRD and DSC results indicate that GFB is encapsulated in the PLLA matrix in a polymorphic form different from raw GFB, and FT-IR results show that the chemical structure of GFB does not change after the SAS process. In vitro release from the optimal sample was slower than from raw GFB particles, and in vitro anti-cancer trials showed that the optimal sample had a higher cytotoxicity than raw GFB. After blending with sieved lactose, the flowability and aerosolization performance of the optimal sample for DPI were improved: the angle of repose decreased from 38.4° to 23°, the emitted dose increased from 63.21% to >90%, and the fine particle fraction increased from 23.37% to >30%. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Rationale and design of the HOME trial: A pragmatic randomized controlled trial of home-based human papillomavirus (HPV) self-sampling for increasing cervical cancer screening uptake and effectiveness in a U.S. healthcare system.

    PubMed

    Winer, Rachel L; Tiro, Jasmin A; Miglioretti, Diana L; Thayer, Chris; Beatty, Tara; Lin, John; Gao, Hongyuan; Kimbel, Kilian; Buist, Diana S M

    2018-01-01

    Women who delay or do not attend Papanicolaou (Pap) screening are at increased risk for cervical cancer. Trials in countries with organized screening programs have demonstrated that mailing high-risk (hr) human papillomavirus (HPV) self-sampling kits to under-screened women increases participation, but U.S. data are lacking. HOME is a pragmatic randomized controlled trial set within a U.S. integrated healthcare delivery system to compare two programmatic approaches for increasing cervical cancer screening uptake and effectiveness in under-screened women (≥3.4 years since last Pap) aged 30-64 years: 1) usual care (annual patient reminders and ad hoc outreach by clinics) and 2) usual care plus mailed hrHPV self-screening kits. Over 2.5 years, eligible women were identified through electronic medical record (EMR) data and randomized 1:1 to the intervention or control arm. Women in the intervention arm were mailed kits with pre-paid envelopes to return samples to the central clinical laboratory for hrHPV testing. Results were documented in the EMR to notify women's primary care providers of appropriate follow-up. Primary outcomes are detection and treatment of cervical neoplasia. Secondary outcomes are cervical cancer screening uptake, abnormal screening results, and women's experiences and attitudes towards hrHPV self-sampling and follow-up of hrHPV-positive results (measured through surveys and interviews). The trial was designed to evaluate whether a programmatic strategy incorporating hrHPV self-sampling is effective in promoting adherence to the complete screening process (including follow-up of abnormal screening results and treatment). The objective of this report is to describe the rationale and design of this pragmatic trial. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Adaptive web sampling.

    PubMed

    Thompson, Steven K

    2006-12-01

    A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
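
    A minimal sketch of the sequential mixture selection on a toy network (the inference step, which averages over sample paths via Markov chain resampling, is not shown); the link structure and mixing parameter d are hypothetical:

      import numpy as np

      rng = np.random.default_rng(4)

      def adaptive_web_sample(links, n, d=0.8):
          """Sequential selection: with probability d, follow a random link
          out of the current sample (the active set); otherwise draw
          uniformly from the unsampled units."""
          N = len(links)
          sample = [int(rng.integers(N))]
          while len(sample) < n:
              out = [j for i in sample for j in links[i] if j not in sample]
              if out and rng.random() < d:
                  sample.append(out[int(rng.integers(len(out)))])
              else:
                  rest = [u for u in range(N) if u not in sample]
                  sample.append(rest[int(rng.integers(len(rest)))])
          return sample

      # Toy network: links[i] lists the units that unit i can refer us to.
      links = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2], 4: [], 5: [4]}
      print(adaptive_web_sample(links, n=4))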

  20. Cryogenic Liquid Sample Acquisition System for Remote Space Applications

    NASA Technical Reports Server (NTRS)

    Mahaffy, Paul; Trainer, Melissa; Wegel, Don; Hawk, Douglas; Melek, Tony; Johnson, Christopher; Amato, Michael; Galloway, John

    2013-01-01

    There is a need to acquire autonomously cryogenic hydrocarbon liquid sample from remote planetary locations such as the lakes of Titan for instruments such as mass spectrometers. There are several problems that had to be solved relative to collecting the right amount of cryogenic liquid sample into a warmer spacecraft, such as not allowing the sample to boil off or fractionate too early; controlling the intermediate and final pressures within carefully designed volumes; designing for various particulates and viscosities; designing to thermal, mass, and power-limited spacecraft interfaces; and reducing risk. Prior art inlets for similar instruments in spaceflight were designed primarily for atmospheric gas sampling and are not useful for this front-end application. These cryogenic liquid sample acquisition system designs for remote space applications allow for remote, autonomous, controlled sample collections of a range of challenging cryogenic sample types. The design can control the size of the sample, prevent fractionation, control pressures at various stages, and allow for various liquid sample levels. It is capable of collecting repeated samples autonomously in difficult lowtemperature conditions often found in planetary missions. It is capable of collecting samples for use by instruments from difficult sample types such as cryogenic hydrocarbon (methane, ethane, and propane) mixtures with solid particulates such as found on Titan. The design with a warm actuated valve is compatible with various spacecraft thermal and structural interfaces. The design uses controlled volumes, heaters, inlet and vent tubes, a cryogenic valve seat, inlet screens, temperature and cryogenic liquid sensors, seals, and vents to accomplish its task.

  1. Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.

    PubMed

    Ette, E I; Howie, C A; Kelman, A W; Whiting, B

    1995-05-01

    A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide a basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on three- and four-time-point designs was evaluated in terms of percent prediction error, design number, coverage of individual and joint confidence intervals for parameter estimates, and correlation analysis. The data sets contained random terms for both inter-animal and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while inter-animal variability (the only random-effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time points for the three- and four-time-point designs, respectively, was not critical to the efficiency of the overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
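
    A minimal sketch of this style of evaluation for the one-compartment intravenous-bolus model, with hypothetical parameter values and variability magnitudes, and a naive pooled fit standing in for the population analysis:

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(5)
      dose, CL_true, V_true = 100.0, 2.0, 10.0      # hypothetical values

      def conc(t, CL, V):
          """One-compartment IV bolus: C(t) = (Dose/V) * exp(-(CL/V) * t)."""
          return (dose / V) * np.exp(-(CL / V) * t)

      # "Quantic" destructive design: each animal contributes one sample at one
      # of three fixed times; inter-animal and residual variability are lognormal.
      times = np.repeat([0.5, 2.0, 6.0], 8)          # 8 animals per time point
      CL_i = CL_true * np.exp(rng.normal(0, 0.2, times.size))
      V_i = V_true * np.exp(rng.normal(0, 0.2, times.size))
      y = (dose / V_i) * np.exp(-(CL_i / V_i) * times) \
          * np.exp(rng.normal(0, 0.1, times.size))

      # Naive pooled fit of the typical parameters.
      (CL_hat, V_hat), _ = curve_fit(conc, times, y, p0=[1.0, 5.0])
      print(f"%PE CL = {100 * (CL_hat - CL_true) / CL_true:+.1f}")
      print(f"%PE V  = {100 * (V_hat - V_true) / V_true:+.1f}")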

  2. Robust Design of Sheet Metal Forming Process Based on Kriging Metamodel

    NASA Astrophysics Data System (ADS)

    Xie, Yanmin

    2011-08-01

    Nowadays, sheet metal forming process design is not a trivial task due to the complex issues to be taken into account (conflicting design goals, forming of complex shapes, and so on), so proper design methods that reduce time and costs, mostly based on computer-aided procedures, have to be developed; optimization methods have been widely applied in sheet metal forming for this purpose. At the same time, variations during manufacturing may significantly influence final product quality, rendering nominally optimal solutions non-robust. In this paper, a small design of experiments is conducted to investigate how the stochastic behavior of noise factors affects drawing quality. Finite element software (LS-DYNA) is used to simulate the complex sheet metal stamping process, and a Kriging metamodel is adopted to map the relation between input process parameters and part quality. The robust design model for the sheet metal forming process integrates adaptive importance sampling with the Kriging model in order to minimize the impact of the variations and achieve reliable process parameters. In the adaptive sampling, an improved criterion is used to indicate the direction in which additional training samples can be added to improve the Kriging model. Nonlinear test functions and a square stamping example (NUMISHEET'93) are employed to verify the proposed method. The final results indicate the feasibility of the proposed method for multi-response robust design.
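
    A minimal sketch of the metamodel-based robust design loop, with a cheap analytic function standing in for the finite element runs and a Gaussian process standing in for the Kriging model (the adaptive importance sampling criterion is not reproduced):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(6)

      def fe_sim(x):                  # stand-in for a simulated quality measure
          return (x - 0.6) ** 2 + 0.05 * np.sin(12 * x)

      X = rng.uniform(0, 1, 12).reshape(-1, 1)      # small design of experiments
      gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(0.2),
                                    normalize_y=True).fit(X, fe_sim(X.ravel()))

      # Robust objective: mean + spread of the surrogate prediction under noise
      # in the process parameter, evaluated cheaply by Monte Carlo.
      cand = np.linspace(0.05, 0.95, 50)
      noise = rng.normal(0, 0.03, (200, 1))
      scores = []
      for c in cand:
          pred = gp.predict(np.clip(c + noise, 0, 1))
          scores.append(pred.mean() + pred.std())
      print(f"robust setting ~ {cand[int(np.argmin(scores))]:.2f}")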

  3. Novel Horn Designs for Power Ultrasonics

    NASA Technical Reports Server (NTRS)

    Sherrit, Stewart; Badescu, M.; Bao, X.; Bar-Cohen, Y.; Chang, Z.

    2004-01-01

    Ultrasonic horns are used in a variety of industrial and medical applications. At JPL, a rock-sampling tool based on an ultrasonic horn was developed to drill, abrade, and core rock samples including hard basalts. This is an impact device, which uses ultrasonic vibrations that occur at the horn tip to produce a sonic resonance with the aid of a loosely connected mass. Although standard horns are found in many current industrial designs, they suffer from a few key limitations when used for USDC applications. Manufacturing a horn requires turning down stock material (e.g., titanium) from the larger outer diameter to the horn tip diameter, and this process is both time consuming and wasteful. In this paper, we present novel horn designs that are specifically intended for impact applications such as the USDC. One such design addresses the excessive length of conventional horns, which limits their use when system dimensions are constrained. For this purpose, a folded horn design was conceived that reduces the overall (physical) length of the resonator while maintaining or increasing the acoustic length. Initial experiments with horns of this design indicate that the tip displacement can be further adjusted by phasing the bending and extensional displacements. Another horn design is the 'dog bone' horn, which uses an end mass on the horn tip to increase the impact efficiency of the horn. In this paper, the experimental results for these novel horn designs are presented and compared to the results predicted by theory.

  4. Mechanical characterization of poly-SiGe layers for CMOS-MEMS integrated application

    NASA Astrophysics Data System (ADS)

    Modlinski, Robert; Witvrouw, Ann; Verbist, Agnes; Puers, Robert; De Wolf, Ingrid

    2010-01-01

    Measuring mechanical properties at the microscale is essential to understand and to fabricate reliable MEMS. In this paper a tensile testing system and matching microscale test samples are presented. The test samples have a dog-bone-like structure. They are designed to mimic standard macro-tensile test samples. The micro-tensile tests are used to characterize 0.9 µm thick polycrystalline silicon germanium (poly-SiGe) films. The poly-SiGe film, that can be considered as a close equivalent to polycrystalline silicon (poly-Si), is studied as a very promising material for use in CMOS/MEMS integration in a single chip due to its low-temperature LPCVD deposition (T < 450 °C). The fabrication process of the poly-SiGe micro-tensile test structure is explained in detail: the design, the processing and post-processing, the testing and finally the results' discussion. The poly-SiGe micro-tensile results are also compared with nanoindentation data obtained on the same poly-SiGe films as well as with results obtained by other research groups.

  5. Study design requirements for RNA sequencing-based breast cancer diagnostics.

    PubMed

    Mer, Arvind Singh; Klevebring, Daniel; Grönberg, Henrik; Rantalainen, Mattias

    2016-02-01

    Sequencing-based molecular characterization of tumors provides information required for individualized cancer treatment. There are well-defined molecular subtypes of breast cancer that provide improved prognostication compared to routine biomarkers. However, molecular subtyping is not yet implemented in routine breast cancer care. Clinical translation is dependent on subtype prediction models providing high sensitivity and specificity. In this study we evaluate sample size and RNA-sequencing read requirements for breast cancer subtyping to facilitate rational design of translational studies. We applied subsampling to ascertain the effect of training sample size and the number of RNA sequencing reads on classification accuracy of molecular subtype and routine biomarker prediction models (unsupervised and supervised). Subtype classification accuracy improved with increasing sample size up to N = 750 (accuracy = 0.93), although with a modest improvement beyond N = 350 (accuracy = 0.92). Prediction of routine biomarkers achieved accuracy of 0.94 (ER) and 0.92 (Her2) at N = 200. Subtype classification improved with RNA-sequencing library size up to 5 million reads. Development of molecular subtyping models for cancer diagnostics requires well-designed studies. Sample size and the number of RNA sequencing reads directly influence accuracy of molecular subtyping. Results in this study provide key information for rational design of translational studies aiming to bring sequencing-based diagnostics to the clinic.
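
    The subsampling logic generalizes readily. A minimal sketch with synthetic data and a logistic-regression classifier standing in for the subtype predictor (sample sizes chosen to echo the paper's grid):

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(7)

      # Stand-in for expression-based subtype labels (hypothetical data).
      X, y = make_classification(n_samples=1000, n_features=200,
                                 n_informative=20, random_state=0)

      # Subsample training sets of growing size and track accuracy.
      for n in [100, 200, 350, 750]:
          idx = rng.choice(len(y), n, replace=False)
          acc = cross_val_score(LogisticRegression(max_iter=2000),
                                X[idx], y[idx], cv=5).mean()
          print(f"N={n:4d}  accuracy={acc:.3f}")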

  6. Discussion of NAEG distribution and inventory program sampling data in preparation for initiation of phase III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brady, D.N.; Church, B.W.; White, M.G.

    Soil sampling activities during 1974 were concentrated in Area 5 of the Nevada Test Site (NTS). Area 5 has been assigned the highest priority because of the number of atmospheric test events held there and a wide distribution of contaminants. Improved sampling techniques are described. Preliminary data analysis aided in designing a program to infer ²³⁹⁻²⁴⁰Pu results by Ge(Li) scanning techniques. (auth)

  7. Statistical methods for efficient design of community surveys of response to noise: Random coefficients regression models

    NASA Technical Reports Server (NTRS)

    Tomberlin, T. J.

    1985-01-01

    Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed here provide a basis for such sample design decisions. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or, in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.
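
    A minimal sketch of a random coefficients regression of annoyance on noise level, with residents nested in study areas and hypothetical effect sizes, fitted as a mixed model with random intercepts and slopes:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(8)

      # Hypothetical survey: residents nested in study areas; annoyance has
      # area-specific (random) intercepts and slopes on noise level.
      areas, n_per = 20, 30
      area = np.repeat(np.arange(areas), n_per)
      noise = rng.uniform(50, 80, areas * n_per)
      b0 = rng.normal(0.0, 1.0, areas)[area]      # random intercepts
      b1 = rng.normal(0.15, 0.03, areas)[area]    # random slopes
      annoy = b0 + b1 * noise + rng.normal(0.0, 1.0, areas * n_per)

      df = pd.DataFrame({"annoy": annoy, "noise": noise, "area": area})
      fit = smf.mixedlm("annoy ~ noise", df, groups=df["area"],
                        re_formula="~noise").fit()
      print(fit.summary())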

  8. Objective sampling design in a highly heterogeneous landscape - characterizing environmental determinants of malaria vector distribution in French Guiana, in the Amazonian region.

    PubMed

    Roux, Emmanuel; Gaborit, Pascal; Romaña, Christine A; Girod, Romain; Dessay, Nadine; Dusfour, Isabelle

    2013-12-01

    Sampling design is a key issue when establishing species inventories and characterizing habitats within highly heterogeneous landscapes. Sampling efforts in such environments may be constrained and many field studies only rely on subjective and/or qualitative approaches to design collection strategy. The region of Cacao, in French Guiana, provides an excellent study site to understand the presence and abundance of Anopheles mosquitoes, their species dynamics and the transmission risk of malaria across various environments. We propose an objective methodology to define a stratified sampling design. Following thorough environmental characterization, a factorial analysis of mixed groups allows the data to be reduced and non-collinear principal components to be identified while balancing the influences of the different environmental factors. Such components defined new variables which could then be used in a robust k-means clustering procedure. Then, we identified five clusters that corresponded to our sampling strata and selected sampling sites in each stratum. We validated our method by comparing the species overlap of entomological collections from selected sites and the environmental similarities of the same sites. The Morisita index was significantly correlated (Pearson linear correlation) with environmental similarity based on i) the balanced environmental variable groups considered jointly (p = 0.001) and ii) land cover/use (p-value < 0.001). The Jaccard index was significantly correlated with land cover/use-based environmental similarity (p-value = 0.001). The results validate our sampling approach. Land cover/use maps (based on high spatial resolution satellite images) were shown to be particularly useful when studying the presence, density and diversity of Anopheles mosquitoes at local scales and in very heterogeneous landscapes.
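
    A minimal sketch of the pipeline, substituting one-hot encoding plus standardization and PCA for the factorial analysis of mixed groups; the environmental table and all settings are hypothetical:

      import numpy as np
      import pandas as pd
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(9)

      # Hypothetical site-by-environment table (mixed variables).
      df = pd.DataFrame({
          "elevation": rng.uniform(0, 300, 200),
          "dist_water": rng.exponential(100, 200),
          "landcover": rng.choice(["forest", "crop", "urban"], 200),
      })

      # Reduce to balanced, non-collinear components before clustering.
      Z = StandardScaler().fit_transform(pd.get_dummies(df).astype(float))
      comp = PCA(n_components=3).fit_transform(Z)

      # Clusters define the sampling strata; pick sites within each stratum.
      strata = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(comp)
      sites = [rng.choice(np.where(strata == k)[0], 2, replace=False)
               for k in range(5)]                 # 2 sampling sites per stratum
      print(np.concatenate(sites))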

  9. Design and Weighting Methods for a Nationally Representative Sample of HIV-infected Adults Receiving Medical Care in the United States-Medical Monitoring Project

    PubMed Central

    Iachan, Ronaldo; H. Johnson, Christopher; L. Harding, Richard; Kyle, Tonja; Saavedra, Pedro; L. Frazier, Emma; Beer, Linda; L. Mattson, Christine; Skarbinski, Jacek

    2016-01-01

    Background: Health surveys of the general US population are inadequate for monitoring human immunodeficiency virus (HIV) infection because the relatively low prevalence of the disease (<0.5%) leads to small subpopulation sample sizes. Objective: To collect a nationally and locally representative probability sample of HIV-infected adults receiving medical care to monitor clinical and behavioral outcomes, supplementing the data in the National HIV Surveillance System. This paper describes the sample design and weighting methods for the Medical Monitoring Project (MMP) and provides estimates of the size and characteristics of this population. Methods: To develop a method for obtaining valid, representative estimates of the in-care population, we implemented a cross-sectional, three-stage design that sampled 23 jurisdictions, then 691 facilities, then 9,344 HIV patients receiving medical care, using probability-proportional-to-size methods. The data weighting process followed standard methods, accounting for the probabilities of selection at each stage and adjusting for nonresponse and multiplicity. Nonresponse adjustments accounted for differing response at both facility and patient levels. Multiplicity adjustments accounted for visits to more than one HIV care facility. Results: MMP used a multistage stratified probability sampling design that was approximately self-weighting in each of the 23 project areas and nationally. The probability sample represents the estimated 421,186 HIV-infected adults receiving medical care during January through April 2009. Methods were efficient (i.e., induced small, unequal weighting effects and small standard errors for a range of weighted estimates). Conclusion: The information collected through MMP allows monitoring trends in clinical and behavioral outcomes and informs resource allocation for treatment and prevention activities. PMID:27651851
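
    The core selection-and-weighting step can be sketched with systematic probability-proportional-to-size sampling of facilities; the caseload figures are hypothetical, and the nonresponse and multiplicity adjustments described above are not shown:

      import numpy as np

      rng = np.random.default_rng(10)

      def systematic_pps(size, n):
          """Systematic probability-proportional-to-size selection of n units;
          the inclusion probability is n * size_i / sum(size), assuming no
          unit's size exceeds the sampling interval."""
          cum = np.cumsum(size)
          step = cum[-1] / n
          picks = rng.uniform(0, step) + step * np.arange(n)
          return np.searchsorted(cum, picks)

      # Hypothetical facility caseloads (the measure of size).
      caseload = rng.integers(20, 2000, size=300)
      sel = systematic_pps(caseload, n=30)
      pi = 30 * caseload[sel] / caseload.sum()    # inclusion probabilities
      base_weight = 1 / pi                        # before further adjustments
      print(sel[:10], base_weight[:3].round(1))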

  10. Design of a blood-freezing system for leukemia research

    NASA Technical Reports Server (NTRS)

    Williams, T. E.; Cygnarowicz, T. A.

    1978-01-01

    Leukemia research involves the use of cryogenic freezing and storage equipment. In a program being carried out at the National Cancer Institute (NCI), bone marrow (white blood cells) was frozen using a standard cryogenic biological freezer. With this system, it is difficult to maintain the desired rate of freezing and repeatability from sample to sample. A freezing system was developed that satisfies the requirements for a repeatable, constant freezing rate. The system was delivered to NCI and is now operational. This report describes the design of the major subsystems, the analyses, the operating procedure, and final system test results.

  11. Single-cell transcriptome conservation in cryopreserved cells and tissues.

    PubMed

    Guillaumet-Adkins, Amy; Rodríguez-Esteban, Gustavo; Mereu, Elisabetta; Mendez-Lago, Maria; Jaitin, Diego A; Villanueva, Alberto; Vidal, August; Martinez-Marti, Alex; Felip, Enriqueta; Vivancos, Ana; Keren-Shaul, Hadas; Heath, Simon; Gut, Marta; Amit, Ido; Gut, Ivo; Heyn, Holger

    2017-03-01

    A variety of single-cell RNA preparation procedures have been described. So far, protocols require fresh material, which hinders complex study designs. We describe a sample preservation method that maintains transcripts in viable single cells, allowing one to disconnect time and place of sampling from subsequent processing steps. We sequence single-cell transcriptomes from >1000 fresh and cryopreserved cells using 3'-end and full-length RNA preparation methods. Our results confirm that the conservation process did not alter transcriptional profiles. This substantially broadens the scope of applications in single-cell transcriptomics and could lead to a paradigm shift in future study designs.

  12. Using GIS to generate spatially balanced random survey designs for natural resource applications.

    PubMed

    Theobald, David M; Stevens, Don L; White, Denis; Urquhart, N Scott; Olsen, Anthony R; Norman, John B

    2007-07-01

    Sampling of a population is frequently required to understand trends and patterns in natural resource management because financial and time constraints preclude a complete census. A rigorous probability-based survey design specifies where to sample so that inferences from the sample apply to the entire population. Probability survey designs should be used in natural resource and environmental management situations because they provide the mathematical foundation for statistical inference. Development of long-term monitoring designs demand survey designs that achieve statistical rigor and are efficient but remain flexible to inevitable logistical or practical constraints during field data collection. Here we describe an approach to probability-based survey design, called the Reversed Randomized Quadrant-Recursive Raster, based on the concept of spatially balanced sampling and implemented in a geographic information system. This provides environmental managers a practical tool to generate flexible and efficient survey designs for natural resource applications. Factors commonly used to modify sampling intensity, such as categories, gradients, or accessibility, can be readily incorporated into the spatially balanced sample design.
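
    A minimal sketch in the spirit of spatially balanced sampling (a simplified hierarchically randomized cell ordering, not the actual RRQRR algorithm): grid cells are ordered by randomized quadrant addresses and then sampled systematically along that order:

      import numpy as np

      rng = np.random.default_rng(11)

      def balanced_sample(m, n):
          """Order the cells of a 2**m x 2**m grid by a hierarchically
          randomized quadrant address, then sample systematically along
          that order so the sample spreads across space."""
          size = 2 ** m
          xs, ys = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
          keys = np.zeros(xs.size)
          for level in range(m):
              perm = rng.permutation(4)           # new quadrant order per level
              qx = (xs.ravel() >> (m - 1 - level)) & 1
              qy = (ys.ravel() >> (m - 1 - level)) & 1
              keys = keys * 4 + perm[2 * qx + qy]
          order = np.argsort(keys)
          step = xs.size / n
          picks = order[(rng.uniform(0, step) + step * np.arange(n)).astype(int)]
          return xs.ravel()[picks], ys.ravel()[picks]

      x, y = balanced_sample(m=5, n=16)           # 16 sites on a 32x32 grid
      print(list(zip(x, y)))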

  13. Opinions of sports clinical practice chiropractors, with sports specialty training and those without, about chiropractic research priorities in sports health care: a centering resonance analysis

    PubMed Central

    Lee, Alexander D; Szabo, Kaitlyn; McDowell, Kirstie; Granger, Sydney

    2016-01-01

    Introduction: A Canadian sports chiropractic research agenda has yet to be defined. The Delphi method can be utilized to achieve this purpose; however, the sample of experts who participate can influence the results. To better inform sample selection for future research agenda development, we set out to determine if differences in opinions about research priorities exist between chiropractors who have their sports specialty designation and those who do not. Methods: Fifteen sports clinical practice chiropractors who have their sports fellowship designation and fifteen without, were interviewed with a set of standardized questions about sports chiropractic research priorities. A centering resonance analysis and cluster analysis were conducted on the interview responses. Results: The two practitioner groups differed in their opinions about the type of research that they would like to see conducted, the research that would impact their clinical practice the most, and where they believed research was lacking. However, both groups were similar in their opinions about research collaborations. Conclusion: Sports clinical practice chiropractors, with their sports specialty designation and those without, differed in their opinions about sports chiropractic research priorities; however, they had similar opinions about research collaborations. These results suggest that it may be important to sample from both practitioner groups in future studies aimed at developing research agendas for chiropractic research in sport. PMID:28065995

  14. Study of sample drilling techniques for Mars sample return missions

    NASA Technical Reports Server (NTRS)

    Mitchell, D. C.; Harris, P. T.

    1980-01-01

    To demonstrate the feasibility of acquiring various surface samples for a Mars sample return mission the following tasks were performed: (1) design of a Mars rover-mounted drill system capable of acquiring crystalline rock cores; prediction of performance, mass, and power requirements for various size systems, and the generation of engineering drawings; (2) performance of simulated permafrost coring tests using a residual Apollo lunar surface drill, (3) design of a rock breaker system which can be used to produce small samples of rock chips from rocks which are too large to return to Earth, but too small to be cored with the Rover-mounted drill; (4)design of sample containers for the selected regolith cores, rock cores, and small particulate or rock samples; and (5) design of sample handling and transfer techniques which will be required through all phase of sample acquisition, processing, and stowage on-board the Earth return vehicle. A preliminary design of a light-weight Rover-mounted sampling scoop was also developed.

  15. Optimal sample sizes for the design of reliability studies: power consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
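
    The exact approach reduces to F-distribution computations in the balanced one-way model. A minimal sketch, with hypothetical design values, of the power of the one-sided test of H0: ICC <= rho0 and the smallest number of groups reaching 80% power:

      from scipy.stats import f

      def icc_power(n, k, rho0, rho1, alpha=0.05):
          """Exact power of the one-sided F-test of H0: ICC <= rho0 when the
          true ICC is rho1, in the balanced one-way random-effects model
          with n groups of size k."""
          lam0 = (1 + (k - 1) * rho0) / (1 - rho0)
          lam1 = (1 + (k - 1) * rho1) / (1 - rho1)
          fcrit = f.ppf(1 - alpha, n - 1, n * (k - 1))
          return f.sf(fcrit * lam0 / lam1, n - 1, n * (k - 1))

      # Smallest number of groups achieving 80% power (hypothetical values).
      for n in range(10, 200):
          if icc_power(n, k=4, rho0=0.4, rho1=0.6) >= 0.80:
              print("required n =", n)
              break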

  16. Performance of some biotic indices in the real variable world: a case study at different spatial scales in North-Western Mediterranean Sea.

    PubMed

    Tataranni, Mariella; Lardicci, Claudio

    2010-01-01

    The aim of this study was to analyse the variability of four different benthic biotic indices (AMBI, BENTIX, H', M-AMBI) in two marine coastal areas of the North-Western Mediterranean Sea. In each coastal area, 36 replicates were randomly selected according to a hierarchical sampling design, which allowed estimating the variance components of the indices associated with four different spatial scales (ranging from metres to kilometres). All the analyses were performed in two different sampling periods in order to evaluate whether the observed trends were consistent over time. The variance components of the four indices revealed complex trends and different patterns in the two sampling periods. These results highlight that, independently of the index employed, a rigorous and appropriate sampling design taking different scales into account should always be used in order to avoid erroneous classifications and to develop effective monitoring programs.
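
    A two-scale toy version of the variance-component computation (the study used four nested scales; this sketch, with hypothetical data, shows the balanced method-of-moments step for one pair of scales):

      import numpy as np

      rng = np.random.default_rng(12)

      # Sites (km apart) each holding several replicates (m apart);
      # one index value per replicate.
      g, r = 6, 6                                  # sites, replicates per site
      site_eff = rng.normal(0, 1.0, g)             # km-scale variability
      vals = site_eff[:, None] + rng.normal(0, 0.5, (g, r))

      msb = r * vals.mean(axis=1).var(ddof=1)      # mean square between sites
      msw = vals.var(axis=1, ddof=1).mean()        # mean square within sites
      print(f"sigma2 within  = {msw:.3f}")
      print(f"sigma2 between = {max(0.0, (msb - msw) / r):.3f}")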

  17. Design and simulation study of the immunization Data Quality Audit (DQA).

    PubMed

    Woodard, Stacy; Archer, Linda; Zell, Elizabeth; Ronveaux, Olivier; Birmingham, Maureen

    2007-08-01

    The goal of the Data Quality Audit (DQA) is to assess whether the Global Alliance for Vaccines and Immunization-funded countries are adequately reporting the number of diphtheria-tetanus-pertussis immunizations given, on which the "shares" are awarded. Given that this sampling design is a modified two-stage cluster sample (modified because a stratified, rather than a simple, random sample of health facilities is obtained from the selected clusters), the formula for the calculation of the standard error of the estimate is unknown. An approximate standard error has been proposed, and the first goal of this simulation is to assess its accuracy. Results from the simulations based on hypothetical populations were found not to be representative of the actual DQAs that were conducted. Additional simulations were then conducted on the actual DQA data to better assess the precision of the DQA with both the original and the increased sample sizes.

  18. Sampling and modeling visual component dynamics of forested areas

    Treesearch

    Victor A. Rudis

    1990-01-01

    A scaling device and sample design have been employed to assess vegetative screening of forested stands as part of an extensive forest inventory. Referenced in a poster presentation are results from East Texas pine and oak-pine stands and Alabama forested areas. Refinements for optimizing measures to distinguish differences in scenic beauty, disturbances, and stand...

  19. Are They Bloody Guilty? Blood Doping with Simulated Samples

    ERIC Educational Resources Information Center

    Stuart, Parker E.; Lees, Kelsey D.; Milanick, Mark A.

    2014-01-01

    In this practice-based lab, students are provided with four Olympic athlete profiles and simulated blood and urine samples to test for illegal substances and blood-doping practices. Throughout the course of the lab, students design and conduct a testing procedure and use their results to determine which athletes won their medals fairly. All of the…

  20. GEOSTATISTICAL SAMPLING DESIGNS FOR HAZARDOUS WASTE SITES

    EPA Science Inventory

    This chapter discusses field sampling design for environmental sites and hazardous waste sites with respect to random variable sampling theory, Gy's sampling theory, and geostatistical (kriging) sampling theory. The literature often presents these sampling methods as an adversari...

  1. Different Types of Heater-Cooler Units and Their Risk of Transmission of Mycobacterium chimaera During Open-Heart Surgery: Clues From Device Design.

    PubMed

    Kuehl, Richard; Banderet, Florian; Egli, Adrian; Keller, Peter M; Frei, Reno; Döbele, Thomas; Eckstein, Friedrich; Widmer, Andreas F

    2018-05-28

    OBJECTIVE: Worldwide, Mycobacterium chimaera infections have been linked to contaminated aerosols from heater-cooler units (HCUs) during open-heart surgery. These infections have mainly been associated with the 3T HCU (LivaNova, formerly Sorin). The reasons for this and the risk of transmission from other HCUs have not been systematically assessed. DESIGN: Prospective observational study. SETTING: University Hospital Basel, Switzerland. METHODS: Continuous microbiological surveillance of 3 types of HCUs in use (3T from LivaNova/Sorin and HCU30 and HCU40 from Maquet) was initiated in June 2014, coupled with an epidemiologic workup. Monthly water and air samples were taken. Construction design was analyzed, and exhausted airflow was measured. RESULTS: Mycobacterium chimaera grew in 8 of 12 water samples (66%) and 22 of 24 air samples (91%) of initial 3T HCUs in use, and in 2 of 83 water samples (2%) and 0 of 41 (0%) air samples of new replacement 3T HCUs. Moreover, 7 of 12 water samples (58%) and 0 of 4 (0%) air samples from the HCU30 were positive, and 0 of 64 (0%) water samples and 0 of 50 (0%) air samples from the HCU40 were positive. We identified 4 relevant differences in HCU design compared to the 3T: air flow direction, location of cooling ventilators, continuous cooling of the water tank at 4°C, and an electronic alarm in the HCU40 reminding the user of the next disinfection cycle. CONCLUSIONS: All infected patients were associated with a 3T HCU. The individual HCU design may explain the different risk of disseminating M. chimaera into the air of the operating room. These observations can help the construction of improved devices to ensure patient safety during cardiac surgery. Infect Control Hosp Epidemiol 2018;1-7.

  2. A performance-oriented power transformer design methodology using multi-objective evolutionary optimization

    PubMed Central

    Adly, Amr A.; Abd-El-Hafiz, Salwa K.

    2014-01-01

    Transformers are regarded as crucial components in power systems. Due to market globalization, power transformer manufacturers are facing an increasingly competitive environment that mandates the adoption of design strategies yielding better performance at lower costs. In this paper, a power transformer design methodology using multi-objective evolutionary optimization is proposed. Using this methodology, which is tailored to be target performance design-oriented, quick rough estimation of transformer design specifics may be inferred. Testing of the suggested approach revealed significant qualitative and quantitative match with measured design and performance values. Details of the proposed methodology as well as sample design results are reported in the paper. PMID:26257939
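
    The record above names the technique but not the algorithm. As a minimal sketch of multi-objective evolutionary optimization in general, the following evolves a small population against two invented surrogate objectives (standing in for, say, losses and material cost) and keeps Pareto-nondominated designs; it is not the authors' method.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy objectives over 3 normalized design variables; both are invented.
    def objectives(x):
        return np.array([np.sum((x - 0.3) ** 2),    # "losses"
                         np.sum((x - 0.7) ** 2)])   # "cost"

    def dominates(a, b):
        return np.all(a <= b) and np.any(a < b)

    pop = rng.uniform(0, 1, (40, 3))
    for _ in range(100):
        children = np.clip(pop + rng.normal(0, 0.05, pop.shape), 0, 1)  # mutation
        union = np.vstack([pop, children])
        scores = np.array([objectives(x) for x in union])
        # Environmental selection: nondominated points first, then the rest.
        nondom = [i for i in range(len(union))
                  if not any(dominates(scores[j], scores[i]) for j in range(len(union)))]
        rest = [i for i in range(len(union)) if i not in nondom]
        pop = union[(nondom + rest)[:len(pop)]]

    front = np.array([objectives(x) for x in pop])
    print("approximate Pareto front (first rows):\n", front[:5])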

  3. A performance-oriented power transformer design methodology using multi-objective evolutionary optimization.

    PubMed

    Adly, Amr A; Abd-El-Hafiz, Salwa K

    2015-05-01

    Transformers are regarded as crucial components in power systems. Due to market globalization, power transformer manufacturers are facing an increasingly competitive environment that mandates the adoption of design strategies yielding better performance at lower costs. In this paper, a power transformer design methodology using multi-objective evolutionary optimization is proposed. Using this methodology, which is tailored to be target performance design-oriented, quick rough estimation of transformer design specifics may be inferred. Testing of the suggested approach revealed significant qualitative and quantitative match with measured design and performance values. Details of the proposed methodology as well as sample design results are reported in the paper.

  4. DESIGN NOTE: A modified Nanosurf scanning tunnelling microscope for ballistic electron emission microscopy and spectroscopy

    NASA Astrophysics Data System (ADS)

    Appelbaum, Ian; Thompson, Pete; van Schendel, P. J. A.

    2006-04-01

    We describe the design and implementation of modifications to an ambient STM with a slip-stick approach mechanism to create a system capable of ballistic electron emission microscopy (BEEM) and spectroscopy (BEES). These modifications require building a custom sample holder which operates as a high-gain transimpedance preamplifier. Results of microscopy and spectroscopy using a Au/n-GaAs Schottky device demonstrate the effectiveness of our design.

  5. Results of Hg speciation testing on DWPF SMECT-4, SMECT-6, and RCT-2 samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, C. J.

    2016-02-04

    The Savannah River National Laboratory (SRNL) was tasked with preparing and shipping samples for Hg speciation by Eurofins Frontier Global Sciences, Inc. in Seattle, WA on behalf of the Savannah River Remediation (SRR) Mercury Task Team. The fifteenth shipment of samples was designated to include Defense Waste Processing Facility (DWPF) Slurry Mix Evaporator Condensate Tank (SMECT) samples from Sludge Receipt and Adjustment Tank (SRAT) Batch 738 and a Recycle Condensate Tank (RCT) sample from SRAT Batch 736. The DWPF sample designations for the three samples analyzed are provided in Table 1. The Batch 738 ‘Baseline’ SMECT sample was taken prior to Precipitate Reactor Feed Tank (PRFT) addition and concentration and therefore precedes the SMECT-5 sample reported previously. The Batch 738 ‘End of SRAT Cycle’ SMECT sample was taken at the conclusion of SRAT operations for this batch (PRFT addition/concentration, acid additions, initial concentration, MCU addition, and steam stripping). Batch 738 experienced a sludge slurry carryover event, which introduced sludge solids to the SMECT that were particularly evident in the SMECT-5 sample, but less evident in the ‘End of SRAT Cycle’ SMECT-6 sample. The Batch 736 ‘After SME’ RCT sample was taken after completion of SMECT transfers at the end of the SME cycle.

  6. The predictive validity of selection for entry into postgraduate training in general practice: evidence from three longitudinal studies

    PubMed Central

    Patterson, Fiona; Lievens, Filip; Kerrin, Máire; Munro, Neil; Irish, Bill

    2013-01-01

    Background: The selection methodology for UK general practice is designed to accommodate several thousand applicants per year and targets six core attributes identified in a multi-method job-analysis study. Aim: To evaluate the predictive validity of selection methods for entry into postgraduate training, comprising a clinical problem-solving test, a situational judgement test, and a selection centre. Design and setting: A three-part longitudinal predictive validity study of selection into training for UK general practice. Method: In sample 1, participants were junior doctors applying for training in general practice (n = 6824). In sample 2, participants were GP registrars 1 year into training (n = 196). In sample 3, participants were GP registrars sitting the licensing examination after 3 years, at the end of training (n = 2292). The outcome measures include: assessor ratings of performance in a selection centre comprising job simulation exercises (sample 1); supervisor ratings of trainee job performance 1 year into training (sample 2); and licensing examination results, including an applied knowledge examination and a 12-station clinical skills objective structured clinical examination (OSCE; sample 3). Results: Performance ratings at selection predicted subsequent supervisor ratings of job performance 1 year later. Selection results also significantly predicted performance on both the clinical skills OSCE and applied knowledge examination for licensing at the end of training. Conclusion: In combination, these longitudinal findings provide good evidence of the predictive validity of the selection methods, and are the first reported for entry into postgraduate training. Results show that the best predictor of work performance and training outcomes is a combination of a clinical problem-solving test, a situational judgement test, and a selection centre. Implications for selection methods for all postgraduate specialties are considered. PMID:24267856

  7. Development and Design Application of Rigidized Surface Insulation Thermal Protection Systems, Volume 1. [for the space shuttle

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Materials and design technology of the all-silica LI-900 rigid surface insulation (RSI) thermal protection system (TPS) concept for the shuttle spacecraft is presented. All results of contract development efforts are documented. Engineering design and analysis of RSI strain arrestor plate material selections, sizing, and weight studies are reported. A shuttle prototype test panel was designed, analyzed, fabricated, and delivered. Thermophysical and mechanical properties of LI-900 were experimentally established and reported. Environmental tests, including simulations of shuttle loads represented by thermal response, turbulent duct, convective cycling, and chemical tolerance tests are described and results reported. Descriptions of material test samples and panels fabricated for testing are included. Descriptions of analytical sizing and design procedures are presented in a manner formulated to allow competent engineering organizations to perform rational design studies. Results of parametric studies involving material and system variables are reported. Material performance and design data are also delineated.

  8. Comparison of the Results of MISSE 6 Atomic Oxygen Erosion Yields of Layered Kapton H Films with Monte Carlo Computational Predictions

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Groh, Kim De; Kneubel, Christian A.

    2014-01-01

    A space experiment flown as part of the Materials International Space Station Experiment 6B (MISSE 6B) was designed to compare the atomic oxygen erosion yield (Ey) of layers of Kapton H polyimide with no spacers between layers with that of layers of Kapton H with spacers between layers. The results were compared to a solid Kapton H (DuPont, Wilmington, DE) sample. Monte Carlo computational modeling was performed to optimize atomic oxygen interaction parameter values to match the results of both the MISSE 6B multilayer experiment and the undercut erosion profile from a crack defect in an aluminized Kapton H sample flown on the Long Duration Exposure Facility (LDEF). The Monte Carlo modeling produced credible agreement with space results of increased Ey for all samples with spacers as well as predicting the space-observed enhancement in erosion near the edges of samples due to scattering from the beveled edges of the sample holders.

  9. Optimal experimental design for assessment of enzyme kinetics in a drug discovery screening environment.

    PubMed

    Sjögren, Erik; Nyberg, Joakim; Magnusson, Mats O; Lennernäs, Hans; Hooker, Andrew; Bredberg, Ulf

    2011-05-01

    A penalized expectation of determinant (ED)-optimal design with a discrete parameter distribution was used to find an optimal experimental design for assessment of enzyme kinetics in a screening environment. A data set of enzyme kinetic parameters (Vmax and Km) was collected from previously reported studies, and every Vmax/Km pair (n = 76) was taken to represent a unique drug compound. The design was restricted to 15 samples, an incubation time of up to 40 min, and starting concentrations (C0) for the incubation between 0.01 and 100 μM. The optimization was performed by finding the sample times and C0 returning the lowest uncertainty (S.E.) of the model parameter estimates. Individual optimal designs, one general optimal design, and one pragmatic optimal design (OD) suitable for laboratory practice were obtained. In addition, a standard design (STD-D), representing a commonly applied approach for metabolic stability investigations, was constructed. Simulations were performed for OD and STD-D by using the Michaelis-Menten (MM) equation, and enzyme kinetic parameters were estimated with both MM and a monoexponential decay. OD generated a better result (relative standard error) for 99% of the compounds and an equal or better result [root mean square error (RMSE)] for 78% of the compounds in estimation of metabolic intrinsic clearance. Furthermore, high-quality estimates (RMSE < 30%) of both Vmax and Km could be obtained for a considerable number (26%) of the investigated compounds by using the suggested OD. The results presented in this study demonstrate that the output could generally be improved compared with that obtained from the standard approaches used today.
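
    To make the simulation step concrete, here is a small sketch that generates Michaelis-Menten substrate depletion at one candidate set of sampling times and fits a monoexponential to estimate intrinsic clearance (Vmax/Km). All parameter values are invented, and this is a generic illustration rather than the paper's penalized ED-optimal design machinery.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(3)

    # Hypothetical compound; values are invented. C0 << Km, so depletion
    # is approximately monoexponential.
    vmax, km, c0 = 2.0, 20.0, 1.0

    def simulate(times, vmax, km, c0, dt=0.01):
        c, t, out = c0, 0.0, []
        for target in times:
            while t < target:
                c -= dt * vmax * c / (km + c)   # Michaelis-Menten depletion, Euler step
                t += dt
            out.append(c)
        return np.array(out)

    times = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # one candidate design
    obs = simulate(times, vmax, km, c0) * np.exp(rng.normal(0, 0.05, times.size))

    # Monoexponential fit; the rate constant estimates CLint = Vmax/Km here.
    mono = lambda t, a, k: a * np.exp(-k * t)
    (a_hat, k_hat), _ = curve_fit(mono, times, obs, p0=(1.0, 0.05))
    print(f"fitted k = {k_hat:.3f} vs Vmax/Km = {vmax / km:.3f}")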

  10. 30 CFR 71.208 - Bimonthly sampling; designated work positions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 30 (Mineral Resources), Vol. 1, 2010-07-01. COAL MINE SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-SURFACE COAL MINES AND SURFACE WORK AREAS OF UNDERGROUND COAL MINES, Sampling Procedures, § 71.208 Bimonthly sampling; designated work positions. (a) Each...

  11. Results of Hg speciation testing on tank 39 and 1Q16 tank 50 samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, C. J.

    2016-03-07

    The Savannah River National Laboratory (SRNL) was tasked with preparing and shipping samples for Hg speciation by Eurofins Frontier Global Sciences, Inc. in Seattle, WA on behalf of the Savannah River Remediation (SRR) Mercury Task Team. The seventeenth shipment of samples was designated to include two Tank 39 samples and the 1Q16 Tank 50 Quarterly WAC sample. The surface Tank 39 sample was pulled at 262.1” from the tank bottom, and the depth Tank 39 sample was pulled at 95” from the tank bottom. The 1Q16 Tank 50 WAC sample was drawn from the 1-L variable depth sample received by SRNL.

  12. A Numerical Climate Observing Network Design Study

    NASA Technical Reports Server (NTRS)

    Stammer, Detlef

    2003-01-01

    This project was concerned with three related questions of an optimal design of a climate observing system: 1. The spatial sampling characteristics required from an ARGO system. 2. The degree to which surface observations from ARGO can be used to calibrate and test satellite remote sensing observations of sea surface salinity (SSS) as it is anticipated now. 3. The more general design of a climate observing system as it is required in the near future for CLIVAR in the Atlantic. An important question in implementing an observing system is that of the sampling density required to observe climate-related variations in the ocean. For that purpose this project was concerned with the sampling requirements for the ARGO float system, but investigated also other elements of a climate observing system. As part of this project we studied the horizontal and vertical sampling characteristics of a global ARGO system which is required to make it fully complementary to altimeter data, with the goal to capture climate-related variations on large spatial scales (less than 1000 km). We addressed this question in the framework of a numerical model study in the North Atlantic with a 1/6° horizontal resolution. The advantage of a numerical design study is the knowledge of the full model state. Sampled by a synthetic float array, model results will therefore allow testing and improvement of existing deployment strategies with the goal to make the system as optimal and cost-efficient as possible. Attachment: "Optimal observations for variational data assimilation".

  13. Improvement program for polycarbonate capacitors. [hermetically sealed, and ac wound

    NASA Technical Reports Server (NTRS)

    Bailey, R. R.; Waterman, K. D.

    1973-01-01

    Hermetically sealed, wound, AC, polycarbonate capacitors incorporating design improvements recommended in a previous study were designed and built. A 5000 hour, 400 Hz ac life test was conducted using 384 of these capacitors to verify the adequacy of the design improvements. The improvements incorporated in the capacitors designed for this program eliminated the major cause of failure found in the preceding work, termination failure. A failure cause not present in the previous test became significant in this test with capacitors built from one lot of polycarbonate film. The samples from this lot accounted for 25 percent of the total test complement. Analyses of failed samples showed that the film had an excessive solvent content. This solvent problem was found in 37 of the total 46 failures which occurred in this test. The other nine were random failures resulting from causes such as seal leaks, foreign particles, and possibly wrinkles.

  14. Robust Optimization Design for Turbine Blade-Tip Radial Running Clearance using Hierarchically Response Surface Method

    NASA Astrophysics Data System (ADS)

    Zhiying, Chen; Ping, Zhou

    2017-11-01

    Considering the computational precision and efficiency of robust optimization for complex mechanical assembly relationships such as turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate an assembly-system-level approximation model relating the overall parameters to blade-tip clearance, and then a set of samples of design parameters and objective response mean and/or standard deviation is generated by using the system approximation model and design-of-experiment methods. Finally, a new response surface approximation model is constructed from those samples, and this approximation model is used for the robust optimization process. The analysis results demonstrate that the proposed method can dramatically reduce the computational cost while ensuring computational precision. The presented research offers an effective way for the robust optimization design of turbine blade-tip radial running clearance.
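
    In the same spirit, the sketch below fits a quadratic response surface to samples of an invented stand-in for the expensive assembly-level clearance analysis, and then searches a mean-plus-three-sigma robust objective on the cheap surrogate. Everything here (the function, the input scatter, and the numbers) is assumed for illustration; it is not the paper's distributed collaborative method.

    import numpy as np

    rng = np.random.default_rng(4)

    # Invented black-box standing in for the assembly-level clearance analysis.
    def clearance(x1, x2):
        return 1.0 + 0.8 * (x1 - 0.4) ** 2 + 0.5 * (x2 - 0.6) ** 2 + 0.3 * x1 * x2

    # Design of experiments: random samples over a normalized design space.
    X = rng.uniform(0, 1, (50, 2))
    y = clearance(X[:, 0], X[:, 1])

    def basis(X):
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

    coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)      # quadratic surface
    surrogate = lambda pts: basis(np.atleast_2d(pts)) @ coef

    def robust(x, n=500, sd=0.02):
        """Mean + 3*std of the surrogate under input scatter around x."""
        vals = surrogate(x + rng.normal(0, sd, (n, 2)))
        return vals.mean() + 3 * vals.std()

    cand = rng.uniform(0, 1, (1000, 2))
    best = cand[np.argmin([robust(c) for c in cand])]
    print("robust optimum near:", best)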

  15. Design of Measurement Apparatus for Electromagnetic Shielding Effectiveness Using Flanged Double Ridged Waveguide

    NASA Astrophysics Data System (ADS)

    Kwon, Jong Hwa; Choi, Jae Ick; Yook, Jong Gwan

    In this paper, we design and manufacture a flanged double-ridged waveguide with a tapered section as a sample holder for measuring the electromagnetic shielding effectiveness (SE) of planar materials over broadband frequency ranges up to 10 GHz. The proposed technique overcomes the limitations of the conventional ASTM D4935 test method at high frequencies. The simulation results for the designed sample holders agree well with measurements of the fabricated ones, meeting the design specification of S11 < -20 dB within the frequency range of 1-10 GHz. To verify the proposed measurement apparatus, the measured SE data of commercial shielding materials from 1 to 10 GHz were indirectly compared with those obtained from the ASTM D4935 method from 30 MHz to 1 GHz. We observed that the SE data obtained by the two experimental techniques agree with each other.

  16. Income Transfers and Maternal Health: Evidence from a National Randomized Social Cash Transfer Program in Zambia.

    PubMed

    Handa, Sudhanshu; Peterman, Amber; Seidenfeld, David; Tembo, Gelson

    2016-02-01

    There is promising recent evidence that poverty-targeted social cash transfers have potential to improve maternal health outcomes; however, questions remain surrounding design features responsible for impacts. In addition, virtually no evidence exists from the African region. This study explores the impact of Zambia's Child Grant Program on a range of maternal health utilization outcomes using a randomized design and difference-in-differences multivariate regression from data collected over 24 months from 2010 to 2012. Results indicate that while there are no measurable program impacts among the main sample, there are heterogeneous impacts on skilled attendance at birth among a sample of women residing in households having better access to maternal health services. The latter result is particularly interesting because of the overall low level of health care availability in program areas, suggesting that dedicated program design or matching supply-side interventions may be necessary to leverage unconditional cash transfers in similar settings to impact maternal health. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Income transfers and maternal health: Evidence from a national randomized social cash transfer program in Zambia

    PubMed Central

    Handa, Sudhanshu; Peterman, Amber; Seidenfeld, David; Tembo, Gelson

    2017-01-01

    There is promising recent evidence that poverty-targeted social cash transfers have potential to improve maternal health outcomes; however, questions remain surrounding design features responsible for impacts. In addition, virtually no evidence exists from the African region. This study explores the impact of Zambia’s Child Grant Program on a range of maternal health utilization outcomes using a randomized design and difference-in-differences multivariate regression from data collected over 24 months from 2010 to 2012. Results indicate that while there are no measurable program impacts among the main sample, there are heterogeneous impacts on skilled attendance at birth among a sample of women residing in households having better access to maternal health services. The latter result is particularly interesting because of the overall low level of healthcare availability in program areas, suggesting that dedicated program design or matching supply-side interventions may be necessary to leverage unconditional cash transfers in similar settings to impact maternal health. PMID:25581062

  18. Evaluating multi-level models to test occupancy state responses of Plethodontid salamanders

    USGS Publications Warehouse

    Kroll, Andrew J.; Garcia, Tiffany S.; Jones, Jay E.; Dugger, Catherine; Murden, Blake; Johnson, Josh; Peerman, Summer; Brintz, Ben; Rochelle, Michael

    2015-01-01

    Plethodontid salamanders are diverse and widely distributed taxa and play critical roles in ecosystem processes. Due to salamander use of structurally complex habitats, and because only a portion of a population is available for sampling, evaluation of sampling designs and estimators is critical to provide strong inference about Plethodontid ecology and responses to conservation and management activities. We conducted a simulation study to evaluate the effectiveness of multi-scale and hierarchical single-scale occupancy models in the context of a Before-After Control-Impact (BACI) experimental design with multiple levels of sampling. Also, we fit the hierarchical single-scale model to empirical data collected for Oregon slender and Ensatina salamanders across two years on 66 forest stands in the Cascade Range, Oregon, USA. All models were fit within a Bayesian framework. Estimator precision in both models improved with increasing numbers of primary and secondary sampling units, underscoring the potential gains accrued when adding secondary sampling units. Both models showed evidence of estimator bias at low detection probabilities and low sample sizes; this problem was particularly acute for the multi-scale model. Our results suggested that sufficient sample sizes at both the primary and secondary sampling levels could ameliorate this issue. Empirical data indicated Oregon slender salamander occupancy was associated strongly with the amount of coarse woody debris (posterior mean = 0.74; SD = 0.24); Ensatina occupancy was not associated with amount of coarse woody debris (posterior mean = -0.01; SD = 0.29). Our simulation results indicate that either model is suitable for use in an experimental study of Plethodontid salamanders provided that sample sizes are sufficiently large. However, hierarchical single-scale and multi-scale models describe different processes and estimate different parameters. As a result, we recommend careful consideration of study questions and objectives prior to sampling data and fitting models.

  19. Advanced Turbomachinery Components for Supercritical CO 2 Power Cycles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDowell, Michael

    2016-03-31

    Six indirectly heated supercritical CO2 (SCO2) Brayton cycles with turbine inlet conditions of 1300°F and 4000 psia and plant capacities varying from 10 MWe to 550 MWe were analyzed. Directly heated SCO2 Brayton cycles of 550 MWe plant capacity with turbine inlet conditions of 2500°F and 4000 psia were also analyzed. Turbomachinery configurations and conceptual designs for both indirectly and directly heated cycles were developed. Optimum turbomachinery and generator configurations were selected, and the resulting analysis provides validation that the turbomachinery conceptual designs meet efficiency performance targets. Previously identified technology gaps were updated based on these conceptual designs. Material compatibility testing was conducted for materials typically used in turbomachinery housings, turbine disks, and blades. Testing was completed for samples in unstressed and stressed conditions. All samples exposed to SCO2 showed some oxidation, the extent of which varied considerably between the alloys tested. Examination of cross sections of the stressed samples found no evidence of cracking due to SCO2 exposure.

  20. Use of Complementary Medicine in Older Americans: Results from the Health and Retirement Study

    ERIC Educational Resources Information Center

    Ness, Jose; Cirillo, Dominic J.; Weir, David R.; Nisly, Nicole L.; Wallace, Robert B.

    2005-01-01

    Purpose: The correlates of complementary and alternative medicine (CAM) utilization among elders have not been fully investigated. This study was designed to identify such correlates in a large sample of older adults, thus generating new data relevant to consumer education, medical training, and health practice and policy. Design and Methods: A…

  1. ROMPS critical design review data package

    NASA Technical Reports Server (NTRS)

    Dobbs, M. E.

    1992-01-01

    The design elements of the Robot-Operated Material Processing in Space (ROMPS) system are described in outline and graphical form. The following subsystems/topics are addressed: servo system, testbed and simulation results, System V Controller, robot module, furnace module, SCL experiment supervisor and script sample processing control, battery system, watchdog timers, mechanical/thermal considerations, and fault conditions and recovery.

  2. Multiple Measures of Juvenile Drug Court Effectiveness: Results of a Quasi-Experimental Design

    ERIC Educational Resources Information Center

    Rodriguez, Nancy; Webb, Vincent J.

    2004-01-01

    Prior studies of juvenile drug courts have been constrained by small samples, inadequate comparison groups, or limited outcome measures. The authors report on a 3-year evaluation that examines the impact of juvenile drug court participation on recidivism and drug use. A quasi-experimental design is used to compare juveniles assigned to drug court…

  3. Recrystallization and damage of ice in winter sports.

    PubMed

    Seymour-Pierce, Alexandra; Lishman, Ben; Sammonds, Peter

    2017-02-13

    Ice samples, after sliding against a steel runner, show evidence of recrystallization and microcracking under the runner, as well as macroscopic cracking throughout the ice. The experiments that produced these ice samples are designed to be analogous to sliding in the winter sport of skeleton. Changes in the ice fabric are shown using thick and thin sections under both diffuse and polarized light. Ice drag is estimated as 40-50% of total energy dissipation in a skeleton run. The experimental results are compared with visual inspections of skeleton tracks, and to similar behaviour in rocks during sliding on earthquake faults. The results presented may be useful to athletes and designers of winter sports equipment. This article is part of the themed issue 'Microdynamics of ice'. © 2016 The Author(s).

  4. Efficient design of cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances.

    PubMed

    van Breukelen, Gerard J P; Candel, Math J J M

    2018-06-10

    Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It is still highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
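
    A stylized version of the underlying trade-off can be written down directly: with k clusters of n persons per arm, the variance of an arm mean is s2c/k + s2p/(k*n), and the design question is how to spend a budget when costs and variances differ by arm. The grid search below is a sketch under that assumed variance model and invented numbers, not the paper's maximin derivation.

    import itertools

    def arm_var(k, n, s2c, s2p):
        # Variance of an arm mean with k clusters of n persons each.
        return s2c / k + s2p / (k * n)

    budget = 100_000
    cost = {"treat": (800, 40), "ctrl": (500, 25)}    # (per cluster, per person)
    var = {"treat": (1.5, 4.0), "ctrl": (1.0, 4.0)}   # (cluster var, person var)

    best = None
    for kt, nt, kc, nc in itertools.product(range(5, 60, 5), range(5, 41, 5),
                                            range(5, 60, 5), range(5, 41, 5)):
        spend = (kt * (cost["treat"][0] + nt * cost["treat"][1])
                 + kc * (cost["ctrl"][0] + nc * cost["ctrl"][1]))
        if spend > budget:
            continue
        v = arm_var(kt, nt, *var["treat"]) + arm_var(kc, nc, *var["ctrl"])
        if best is None or v < best[0]:
            best = (v, kt, nt, kc, nc)

    print("min variance %.4f with treat (k=%d, n=%d), control (k=%d, n=%d)" % best)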

  5. Development of NASA's Sample Cartridge Assembly: Summary of GEDS Design, Development Testing, and Thermal Analyses

    NASA Technical Reports Server (NTRS)

    O'Connor, Brian; Hernandez, Deborah; Hornsby, Linda; Brown, Maria; Horton-Mullins, Kathryn

    2017-01-01

    Outline: Background of ISS (International Space Station) Material Science Research Rack; NASA SCA (Sample Cartridge Assembly) Design; GEDS (Gravitational Effects in Distortion in Sintering) Experiment Ampoule Design; Development Testing Summary; Thermal Modeling and Analysis. Summary: GEDS design development challenging (GEDS Ampoule design developed through MUGS (Microgravity) testing; Short duration transient sample processing; Unable to measure sample temperatures); MUGS Development testing used to gather data (Actual LGF (Low Gradient Furnace)-like furnace response; Provided sample for sintering evaluation); Transient thermal model integral to successful GEDS experiment (Development testing provided furnace response; PI (Performance Indicator) evaluation of sintering anchored model evaluation of processing durations; Thermal transient model used to determine flight SCA sample processing profiles).

  6. Introductory Statistics Students' Conceptual Understanding of Study Design and Conclusions

    NASA Astrophysics Data System (ADS)

    Fry, Elizabeth Brondos

    Recommended learning goals for students in introductory statistics courses include the ability to recognize and explain the key role of randomness in designing studies and in drawing conclusions from those studies involving generalizations to a population or causal claims (GAISE College Report ASA Revision Committee, 2016). The purpose of this study was to explore introductory statistics students' understanding of the distinct roles that random sampling and random assignment play in study design and the conclusions that can be made from each. A study design unit lasting two and a half weeks was designed and implemented in four sections of an undergraduate introductory statistics course based on modeling and simulation. The research question that this study attempted to answer is: How does introductory statistics students' conceptual understanding of study design and conclusions (in particular, unbiased estimation and establishing causation) change after participating in a learning intervention designed to promote conceptual change in these areas? In order to answer this research question, a forced-choice assessment called the Inferences from Design Assessment (IDEA) was developed as a pretest and posttest, along with two open-ended assignments, a group quiz and a lab assignment. Quantitative analysis of IDEA results and qualitative analysis of the group quiz and lab assignment revealed that overall, students' mastery of study design concepts significantly increased after the unit, and the great majority of students successfully made the appropriate connections between random sampling and generalization, and between random assignment and causal claims. However, a small, but noticeable portion of students continued to demonstrate misunderstandings, such as confusion between random sampling and random assignment.

  7. Moderate deviations-based importance sampling for stochastic recursive equations

    DOE PAGES

    Dupuis, Paul; Johnson, Dane

    2017-11-17

    Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviation scaling might be appropriate.
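
    For readers unfamiliar with the basic mechanism, the sketch below is a generic exponentially tilted importance sampler for a Gaussian sample-mean tail probability. It shows what a change of measure buys over crude Monte Carlo; the tilt here is the elementary textbook one, not the subsolution-based construction of the paper.

    import numpy as np

    rng = np.random.default_rng(5)

    # Estimate P(mean(X) >= a) for X_i iid N(0,1) by sampling from N(theta, 1)
    # and reweighting with the likelihood ratio dP/dQ.
    n, a, theta, trials = 100, 0.4, 0.4, 10_000   # tilt to the boundary point

    x = rng.normal(theta, 1.0, (trials, n))
    s = x.sum(axis=1)
    weights = np.exp(-theta * s + n * theta ** 2 / 2)   # Gaussian tilt likelihood ratio
    est = np.mean((s / n >= a) * weights)

    # Crude Monte Carlo for comparison: essentially no hits at this level.
    crude = np.mean(rng.normal(0, 1, (trials, n)).mean(axis=1) >= a)
    print(f"importance sampling {est:.2e}, crude {crude:.2e}")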

  8. Moderate deviations-based importance sampling for stochastic recursive equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupuis, Paul; Johnson, Dane

    Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviation scaling might be appropriate.

  9. Automatic flow-batch system for cold vapor atomic absorption spectroscopy determination of mercury in honey from Argentina using online sample treatment.

    PubMed

    Domínguez, Marina A; Grünhut, Marcos; Pistonesi, Marcelo F; Di Nezio, María S; Centurión, María E

    2012-05-16

    An automatic flow-batch system that includes two borosilicate glass chambers to perform sample digestion and cold vapor atomic absorption spectroscopy determination of mercury in honey samples was designed. The sample digestion was performed by using a low-cost halogen lamp to obtain the optimum temperature. Optimization of the digestion procedure was done using a Box-Behnken experimental design. A linear response was observed from 2.30 to 11.20 μg Hg L⁻¹. The relative standard deviation was 3.20% (n = 11, 6.81 μg Hg L⁻¹), the sample throughput was 4 samples h⁻¹, and the detection limit was 0.68 μg Hg L⁻¹. The results obtained with the flow-batch method are in good agreement with those obtained with the reference method. The flow-batch system is simple, allows the use of both chambers simultaneously, is seen as a promising methodology for achieving green chemistry goals, and is a good proposal for improving the quality control of honey.

  10. Self-Sealing Wet Chemistry Cell for Field Analysis

    NASA Technical Reports Server (NTRS)

    Beegle, Luther W.; Soto, Juancarlos; Lasnik, James; Roark, Shane

    2012-01-01

    In most analytical investigations, there is a need to process complex field samples for the unique detection of analytes, especially when detecting low concentration organic molecules that may identify extraterrestrial life. Wet chemistry based instruments are the techniques of choice for most laboratory- based analysis of organic molecules due to several factors including less fragmentation of fragile biomarkers, and ability to concentrate target species resulting in much lower limits of detection. Development of an automated wet chemistry preparation system that can operate autonomously on Earth and is also designed to operate under Martian ambient conditions will demonstrate the technical feasibility of including wet chemistry on future missions. An Automated Sample Processing System (ASPS) has recently been developed that receives fines, extracts organics through solvent extraction, processes the extract by removing non-organic soluble species, and delivers sample to multiple instruments for analysis (including for non-organic soluble species). The key to this system is a sample cell that can autonomously function under field conditions. As a result, a self-sealing sample cell was developed that can autonomously hermetically seal fines and powder into a container, regardless of orientation of the apparatus. The cap is designed with a beveled edge, which allows the cap to be self-righted as the capping motor engages. Each cap consists of a C-clip lock ring below a crucible O-ring that is placed into a groove cut into the sample cap.

  11. A review of blood sample handling and pre-processing for metabolomics studies.

    PubMed

    Hernandes, Vinicius Veri; Barbas, Coral; Dudzik, Danuta

    2017-09-01

    Metabolomics has been found to be applicable to a wide range of clinical studies, bringing a new era for improving clinical diagnostics, early disease detection, therapy prediction and treatment efficiency monitoring. A major challenge in metabolomics, particularly untargeted studies, is the extremely diverse and complex nature of biological specimens. Despite great advances in the field there still exist fundamental needs to consider pre-analytical variability that can introduce bias into the subsequent analytical process, decrease the reliability of the results and, moreover, confound final research outcomes. Many researchers are mainly focused on the instrumental aspects of the biomarker discovery process, and sample-related variables sometimes seem to be overlooked. To bridge the gap, critical information and standardized protocols regarding experimental design and sample handling and pre-processing are highly desired. Characterization of the range of variation among sample collection methods is necessary to prevent misinterpretation of results and to ensure that observed differences are not due to an experimental bias caused by inconsistencies in sample processing. Herein, a systematic discussion of pre-analytical variables affecting metabolomics studies based on blood-derived samples is performed. Furthermore, we provide a set of recommendations concerning experimental design, collection, pre-processing procedures and storage conditions as a practical review that can guide and serve for the standardization of protocols and reduction of undesirable variation. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Methodological quality of behavioural weight loss studies: a systematic review

    PubMed Central

    Lemon, S. C.; Wang, M. L.; Haughton, C. F.; Estabrook, D. P.; Frisard, C. F.; Pagoto, S. L.

    2018-01-01

    Summary: This systematic review assessed the methodological quality of behavioural weight loss intervention studies conducted among adults and associations between quality and statistically significant weight loss outcome, strength of intervention effectiveness and sample size. Searches for trials published between January 2009 and December 2014 were conducted using PUBMED, MEDLINE and PSYCINFO and identified ninety studies. Methodological quality indicators included study design, anthropometric measurement approach, sample size calculations, intent-to-treat (ITT) analysis, loss to follow-up rate, missing data strategy, sampling strategy, report of treatment receipt and report of intervention fidelity (studies met a mean of 6.3 indicators). Indicators most commonly utilized included randomized design (100%), objectively measured anthropometrics (96.7%), ITT analysis (86.7%) and reporting treatment adherence (76.7%). Most studies (62.2%) had a follow-up rate >75% and reported a loss to follow-up analytic strategy or minimal missing data (69.9%). Describing intervention fidelity (34.4%) and sampling from a known population (41.1%) were least common. Methodological quality was not associated with reporting a statistically significant result, effect size or sample size. This review found the published literature of behavioural weight loss trials to be of high quality for specific indicators, including study design and measurement. Areas identified for improvement include utilization of more rigorous statistical approaches to loss to follow-up and better fidelity reporting. PMID:27071775

  13. Nationwide Drinking Water Sampling Campaign for Exposure Assessments in Denmark

    PubMed Central

    Voutchkova, Denitza Dimitrova; Hansen, Birgitte; Ernstsen, Vibeke; Kristiansen, Søren Munch

    2018-01-01

    A nationwide sampling campaign of treated drinking water of groundwater origin was designed and implemented in Denmark in 2013. The main purpose of the sampling was to obtain data on the spatial variation of iodine concentration and speciation in treated drinking water, which was supplied to the majority of the Danish population. These data were to be used in future exposure and epidemiologic studies. The water supply sector (83 companies, owning 144 waterworks throughout Denmark) was involved actively in the planning and implementation process, which significantly reduced the cost and duration of data collection. The dataset resulting from this collaboration covers not only iodine species (I⁻, IO3⁻, TI), but also major elements and parameters (pH, electrical conductivity, DOC, TC, TN, F⁻, Cl⁻, NO3⁻, SO4²⁻, Ca²⁺, Mg²⁺, K⁺, Na⁺) and a long list of trace elements (n = 66). The water samples represent 144 waterworks abstracting about 45% of the annual Danish groundwater abstraction for drinking water purposes, which supply about 2.5 million Danes (45% of all Danish residents). This technical note presents the design, implementation, and limitations of such a sampling design in detail in order (1) to facilitate the future use of this dataset, (2) to inform future replication studies, or (3) to provide an example for other researchers. PMID:29518987

  14. Design and Elementary Evaluation of a Highly-Automated Fluorescence-Based Instrument System for On-Site Detection of Food-Borne Pathogens

    PubMed Central

    Lu, Zhan; Zhang, Jianyi; Xu, Lizhou; Li, Yanbin; Chen, Siyu; Ye, Zunzhong; Wang, Jianping

    2017-01-01

    A simple, highly-automated instrument system for on-site detection of foodborne pathogens based on fluorescence was designed, fabricated, and preliminarily tested in this paper. A corresponding method was proved effective in our previous studies. This system utilizes a light-emitting diode (LED) to excite fluorescent labels and a spectrometer to record the fluorescence signal from samples. A rotation stage for positioning and switching samples was innovatively designed for high-throughput detection, ten samples at most in a single run. We also developed software based on LabVIEW for data receiving, processing, and control of the whole system. In a test using a pure quantum dot (QD) solution as a standard sample, detection results from this home-made system correlated strongly with those from a well-commercialized product, and even slightly better reproducibility was found. In tests of three typical kinds of food-borne pathogens, fluorescence signals recorded by this system were closely proportional to sample concentration, with a satisfactory limit of detection (LOD) (nearly 10²-10³ CFU·mL⁻¹ in food samples). Additionally, this instrument system is low-cost and easy-to-use, showing promising potential for on-site rapid detection of food-borne pathogens. PMID:28241478

  15. Design and Elementary Evaluation of a Highly-Automated Fluorescence-Based Instrument System for On-Site Detection of Food-Borne Pathogens.

    PubMed

    Lu, Zhan; Zhang, Jianyi; Xu, Lizhou; Li, Yanbin; Chen, Siyu; Ye, Zunzhong; Wang, Jianping

    2017-02-23

    A simple, highly-automated instrument system for on-site detection of foodborne pathogens based on fluorescence was designed, fabricated, and preliminarily tested in this paper. A corresponding method was proved effective in our previous studies. This system utilizes a light-emitting diode (LED) to excite fluorescent labels and a spectrometer to record the fluorescence signal from samples. A rotation stage for positioning and switching samples was innovatively designed for high-throughput detection, ten samples at most in a single run. We also developed software based on LabVIEW for data receiving, processing, and control of the whole system. In a test using a pure quantum dot (QD) solution as a standard sample, detection results from this home-made system correlated strongly with those from a well-commercialized product, and even slightly better reproducibility was found. In tests of three typical kinds of food-borne pathogens, fluorescence signals recorded by this system were closely proportional to sample concentration, with a satisfactory limit of detection (LOD) (nearly 10²-10³ CFU·mL⁻¹ in food samples). Additionally, this instrument system is low-cost and easy-to-use, showing promising potential for on-site rapid detection of food-borne pathogens.

  16. A Novel Analysis Method for Paired-Sample Microbial Ecology Experiments.

    PubMed

    Olesen, Scott W; Vora, Suhani; Techtmann, Stephen M; Fortney, Julian L; Bastidas-Oyanedel, Juan R; Rodríguez, Jorge; Hazen, Terry C; Alm, Eric J

    2016-01-01

    Many microbial ecology experiments use sequencing data to measure a community's response to an experimental treatment. In a common experimental design, two units, one control and one experimental, are sampled before and after the treatment is applied to the experimental unit. The four resulting samples contain information about the dynamics of organisms that respond to the treatment, but there are no analytical methods designed to extract exactly this type of information from this configuration of samples. Here we present an analytical method specifically designed to visualize and generate hypotheses about microbial community dynamics in experiments that have paired samples and few or no replicates. The method is based on the Poisson lognormal distribution, long studied in macroecology, which we found accurately models the abundance distribution of taxa counts from 16S rRNA surveys. To demonstrate the method's validity and potential, we analyzed an experiment that measured the effect of crude oil on ocean microbial communities in microcosm. Our method identified known oil degraders as well as two clades, Maricurvus and Rhodobacteraceae, that responded to amendment with oil but do not include known oil degraders. Our approach is sensitive to organisms that increased in abundance only in the experimental unit but less sensitive to organisms that increased in both control and experimental units, thus mitigating the role of "bottle effects".
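
    A minimal sketch of the Poisson lognormal abundance model named above: each taxon gets a latent lognormal expected abundance, and the observed count is Poisson around it. The parameters are arbitrary illustrations, not values fitted in the study.

    import numpy as np

    rng = np.random.default_rng(6)

    mu, sigma, n_taxa = 1.0, 1.5, 500
    latent = rng.lognormal(mean=mu, sigma=sigma, size=n_taxa)  # expected abundances
    counts = rng.poisson(latent)                               # observed counts

    # The heavy right tail and excess zeros mimic real taxon count distributions.
    print("zero-count taxa:", np.sum(counts == 0))
    print("50/90/99th percentiles:", np.percentile(counts, [50, 90, 99]))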

  17. Sample size requirements for separating out the effects of combination treatments: Randomised controlled trials of combination therapy vs. standard treatment compared to factorial designs for patients with tuberculous meningitis

    PubMed Central

    2011-01-01

    Background: In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 × 2 factorial design. Methods: We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30%, and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial, depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. Results: In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Conclusions: Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 × 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Trial registration: Current Controlled Trials ISRCTN61649292. PMID:21288326
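
    For orientation, the event-count side of such calculations can be sketched with the textbook Schoenfeld approximation for a two-arm log-rank comparison; detecting a 30% hazard reduction (HR 0.70) takes on the order of 250 deaths at 80% power, and a total enrolment such as the 750 patients quoted above then follows from the expected death probability over follow-up. This is a generic sketch, not necessarily the trial's own calculation.

    import numpy as np
    from scipy.stats import norm

    def events_needed(hr, alpha=0.05, power=0.80, alloc=0.5):
        """Schoenfeld approximation: deaths needed to detect hazard ratio hr."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return z ** 2 / (alloc * (1 - alloc) * np.log(hr) ** 2)

    d = events_needed(0.70)
    print(f"~{d:.0f} events to detect HR 0.70 at 80% power")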

  18. Accounting for selection bias in association studies with complex survey data.

    PubMed

    Wirth, Kathleen E; Tchetgen Tchetgen, Eric J

    2014-05-01

    Obtaining representative information from hidden and hard-to-reach populations is fundamental to describe the epidemiology of many sexually transmitted diseases, including HIV. Unfortunately, simple random sampling is impractical in these settings, as no registry of names exists from which to sample the population at random. However, complex sampling designs can be used, as members of these populations tend to congregate at known locations, which can be enumerated and sampled at random. For example, female sex workers may be found at brothels and street corners, whereas injection drug users often come together at shooting galleries. Despite the logistical appeal, complex sampling schemes lead to unequal probabilities of selection, and failure to account for this differential selection can result in biased estimates of population averages and relative risks. However, standard techniques to account for selection can lead to substantial losses in efficiency. Consequently, researchers implement a variety of strategies in an effort to balance validity and efficiency. Some researchers fully or partially account for the survey design, whereas others do nothing and treat the sample as a realization of the population of interest. We use directed acyclic graphs to show how certain survey sampling designs, combined with subject-matter considerations unique to individual exposure-outcome associations, can induce selection bias. Finally, we present a novel yet simple maximum likelihood approach for analyzing complex survey data; this approach optimizes statistical efficiency at no cost to validity. We use simulated data to illustrate this method and compare it with other analytic techniques.
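
    The core of the validity problem can be shown in a few lines: the simulation below over-samples people at hypothetical large venues, so the naive sample mean is biased, while Horvitz-Thompson style inverse-probability weighting recovers the population value. The scenario and all probabilities are invented; this illustrates design weighting in general, not the authors' maximum likelihood approach.

    import numpy as np

    rng = np.random.default_rng(7)

    # Invented population: outcome prevalence differs by venue size.
    N = 100_000
    venue_large = rng.random(N) < 0.3
    outcome = rng.random(N) < np.where(venue_large, 0.25, 0.10)

    # Design artifact: people at large venues are far more likely to be sampled.
    p_sel = np.where(venue_large, 0.02, 0.005)
    sampled = rng.random(N) < p_sel

    naive = outcome[sampled].mean()
    weighted = np.average(outcome[sampled], weights=1.0 / p_sel[sampled])
    print(f"true {outcome.mean():.3f}, naive {naive:.3f}, weighted {weighted:.3f}")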

  19. Random versus fixed-site sampling when monitoring relative abundance of fishes in headwater streams of the upper Colorado River basin

    USGS Publications Warehouse

    Quist, M.C.; Gerow, K.G.; Bower, M.R.; Hubert, W.A.

    2006-01-01

    Native fishes of the upper Colorado River basin (UCRB) have declined in distribution and abundance due to habitat degradation and interactions with nonnative fishes. Consequently, monitoring populations of both native and nonnative fishes is important for conservation of native species. We used data collected from Muddy Creek, Wyoming (2003-2004), to compare sample size estimates using a random and a fixed-site sampling design to monitor changes in catch per unit effort (CPUE) of native bluehead suckers Catostomus discobolus, flannelmouth suckers C. latipinnis, roundtail chub Gila robusta, and speckled dace Rhinichthys osculus, as well as nonnative creek chub Semotilus atromaculatus and white suckers C. commersonii. When one-pass backpack electrofishing was used, detection of 10% or 25% changes in CPUE (fish/100 m) at 60% statistical power required 50-1,000 randomly sampled reaches among species regardless of sampling design. However, use of a fixed-site sampling design with 25-50 reaches greatly enhanced the ability to detect changes in CPUE. The addition of seining did not appreciably reduce the required effort. When detection of 25-50% changes in CPUE of native and nonnative fishes is acceptable, we recommend establishment of 25-50 fixed reaches sampled by one-pass electrofishing in Muddy Creek. Because Muddy Creek has habitat and fish assemblages characteristic of other headwater streams in the UCRB, our results are likely to apply to many other streams in the basin. © Copyright by the American Fisheries Society 2006.

  20. Design of experiments for amino acid extraction from tobacco leaves and their subsequent determination by capillary zone electrophoresis.

    PubMed

    Hodek, Ondřej; Křížek, Tomáš; Coufal, Pavel; Ryšlavá, Helena

    2017-03-01

    In this study, we optimized a method for the determination of free amino acids in Nicotiana tabacum leaves. Capillary electrophoresis with a contactless conductivity detector was used for the separation of 20 proteinogenic amino acids in an acidic background electrolyte. Subsequently, the conditions of extraction with HCl were optimized for the highest extraction yield of the amino acids, because sample treatment of plant materials brings some specific challenges. A central composite face-centered design with a fractional factorial design was used in order to evaluate the significance of selected factors (HCl volume, HCl concentration, sonication, shaking) on the extraction process. In addition, the composite design helped us to find the optimal values for each factor using the response surface method. The limits of detection and limits of quantification for the 20 proteinogenic amino acids were found to be on the order of 10⁻⁵ and 10⁻⁴ mol l⁻¹, respectively. Addition of acetonitrile to the sample was tested as a method commonly used to decrease limits of detection. Ambiguous results of this experiment pointed out some features of plant extract samples, which often require specific approaches. The suitability of the method for metabolomic studies was tested by analysis of a real sample, in which all amino acids, except for L-methionine and L-cysteine, were successfully detected. The optimized extraction process together with the capillary electrophoresis method can be used for the determination of proteinogenic amino acids in plant materials. The resulting inexpensive, simple, and robust method is well suited for various metabolomic studies in plants. As such, the method represents a valuable tool for research and practical application in the fields of biology, biochemistry, and agriculture.

  1. Drug-drug interaction predictions with PBPK models and optimal multiresponse sampling time designs: application to midazolam and a phase I compound. Part 1: comparison of uniresponse and multiresponse designs using PopDes.

    PubMed

    Chenel, Marylore; Bouzom, François; Aarons, Leon; Ogungbenro, Kayode

    2008-12-01

    To determine the optimal sampling time design of a drug-drug interaction (DDI) study for the estimation of apparent clearances (CL/F) of two co-administered drugs (SX, a phase I compound and potential CYP3A4 inhibitor, and MDZ, a reference CYP3A4 substrate) without any in vivo data, using physiologically based pharmacokinetic (PBPK) predictions, population PK modelling and multiresponse optimal design. PBPK models were developed with AcslXtreme using only in vitro data to simulate PK profiles of both drugs when they were co-administered. Then, using simulated data, population PK models were developed with NONMEM, and optimal sampling times were determined by optimizing the determinant of the population Fisher information matrix with PopDes, using either two uniresponse designs (UD) or a multiresponse design (MD) with joint sampling times for both drugs. Finally, the D-optimal sampling time designs were evaluated by simulation and re-estimation with NONMEM by computing the relative root mean squared error (RMSE) and empirical relative standard errors (RSE) of CL/F. There were four and five optimal sampling times (nine different sampling times in total) in the UDs for SX and MDZ, respectively, whereas there were only five sampling times in the MD. Whatever the design and compound, CL/F was well estimated (RSE < 20% for MDZ and < 25% for SX), and expected RSEs from PopDes were in the same range as empirical RSEs. Moreover, there was no bias in CL/F estimation. Since the MD required only five sampling times compared with the two UDs, the D-optimal sampling times of the MD were included in a full empirical design for the proposed clinical trial. A joint paper compares the designs with real data. This global approach, including PBPK simulations, population PK modelling and multiresponse optimal design, allowed, without any in vivo data, the design of a clinical trial using sparse sampling that is capable of estimating CL/F of the CYP3A4 substrate and potential inhibitor when co-administered together.
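
    The determinant criterion itself is easy to demonstrate on a toy model. The sketch below takes a monoexponential concentration profile C(t) = A*exp(-k*t) with assumed prior values standing in for PBPK predictions, builds the Fisher information from the model sensitivities, and grid-searches pairs of sampling times for the largest determinant. It illustrates D-optimality in general, not the PopDes workflow or the study's PK models.

    import itertools
    import numpy as np

    A, k = 10.0, 0.2     # assumed prior values, standing in for PBPK predictions

    def fim(times, sigma=1.0):
        t = np.asarray(times, dtype=float)
        dA = np.exp(-k * t)            # sensitivity of C(t) to A
        dk = -A * t * np.exp(-k * t)   # sensitivity of C(t) to k
        J = np.column_stack([dA, dk])
        return J.T @ J / sigma ** 2

    grid = np.linspace(0.5, 24, 48)
    best = max(itertools.combinations(grid, 2),
               key=lambda ts: np.linalg.det(fim(ts)))
    print("D-optimal pair of sampling times (h):", best)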

  2. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali

    PubMed Central

    2012-01-01

    Background Estimation of vaccination coverage at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. Methods We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local vaccination coverage (VC), using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. Results VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: (i) health areas not requiring supplemental activities; (ii) health areas requiring additional vaccination; (iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard errors of VC and intracluster correlation coefficient (ICC) estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Conclusions Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at the local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes. PMID:23057445
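
    A minimal Monte Carlo sketch of the sample-size question examined above: clusters are given beta-distributed true coverage (inducing an intracluster correlation), and the spread of the survey estimate is tracked as the per-cluster sample size shrinks from 15 to 3. The parameters are invented; the paper bootstrapped real survey data instead.

```python
# Illustrative Monte Carlo for a c-by-m cluster survey of vaccination
# coverage (VC): clusters get beta-distributed true coverage, and we compare
# the spread of the survey estimate as the per-cluster sample size m shrinks.
import numpy as np

rng = np.random.default_rng(1)

def simulate_vc(c=10, m=15, mean_vc=0.80, kappa=30, nsim=2000):
    # beta parameters; ICC is roughly 1 / (kappa + 1) under this model
    a, b = mean_vc * kappa, (1 - mean_vc) * kappa
    ests = np.empty(nsim)
    for s in range(nsim):
        p = rng.beta(a, b, size=c)   # true coverage per cluster
        x = rng.binomial(m, p)       # vaccinated children found in each cluster
        ests[s] = x.sum() / (c * m)  # overall survey estimate
    return ests

for m in (15, 9, 6, 3):
    e = simulate_vc(m=m)
    print(f"10 x {m:2d}: mean VC = {e.mean():.3f}, SE = {e.std(ddof=1):.3f}")
```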

  3. Water- and air-quality monitoring of the Sweetwater Reservoir Watershed, San Diego County, California-Phase One results, continued, 1999-2001

    USGS Publications Warehouse

    Mendez, Gregory O.; Foreman, William T.; Sidhu, Jagdeep S.; Majewski, Michael S.

    2007-01-01

    In 1998, the U.S. Geological Survey, in cooperation with the Sweetwater Authority, began a study to assess the overall health of the Sweetwater watershed with respect to chemical contamination. The study included regular sampling of air and water at Sweetwater Reservoir for chemical contaminants, including volatile organic compounds, polycyclic aromatic hydrocarbons, pesticides, and major and trace elements. Background water samples were collected at Loveland Reservoir for volatile organic compounds and pesticides. The purpose of this study was to monitor changes in contaminant composition and concentration in the air and water resulting from the construction and operation of State Route 125 near Sweetwater Reservoir. To accomplish this, the study was divided into two phases. Phase One sampling was designed to establish baseline conditions for target compounds in terms of detection frequency and concentration in air and water. Phase Two sampling is planned to continue at the established monitoring sites during and after construction of State Route 125 to assess the chemical impact this roadway alignment project may have on water quality in the reservoir. In addition to the ongoing data collection, several special studies were initiated to assess the occurrence of specific chemicals of concern, such as low-use pesticides, trace metals, and wastewater compounds. This report describes the study design and the sampling and analytical methods, and it presents the results for the second and third years of the study (October 1999 to September 2001). Data collected during the first year of sampling (October 1998 to September 1999) were published in 2002.

  4. Rapid plant diversity assessment using a pixel nested plot design: A case study in Beaver Meadows, Rocky Mountain National Park, Colorado, USA

    USGS Publications Warehouse

    Kalkhan, M.A.; Stafford, E.J.; Stohlgren, T.J.

    2007-01-01

    Geospatial statistical modelling and thematic maps have recently emerged as effective tools for the management of natural areas at the landscape scale. Traditional methods for collecting field data pertaining to questions of landscape ecology were developed without these applications in mind. We introduce an alternative field sampling design based on smaller, unbiased, randomly located plots and subplots called the pixel nested plot (PNP). We demonstrate the applicability of the 15 m × 15 m PNP design for assessing patterns of plant diversity and species richness across the landscape at Rocky Mountain National Park (RMNP), Colorado, USA in a time- (cost-) efficient manner for field data collection. Our approach produced results comparable to a previous study in the Beaver Meadows study (BMS) area within RMNP, a demonstrated focus of plant diversity. Our study used the smaller PNP sampling design for field data collection, which can be linked to geospatial information data and used for landscape-scale analyses and assessment applications. In 2003, we established 61 PNP in the eastern region of RMNP. We present a comparison between this approach, using a sub-sample of 19 PNP from this data set, and 20 Modified Whittaker nested plots (MWNP) of 20 m × 50 m that were collected in the BMS area. The PNP captured 266 unique plant species while the MWNP captured 275 unique species. Based on a comparison of PNP and MWNP in the Beaver Meadows area, RMNP, the PNP required less time and area sampled to achieve a similar number of species sampled. Using the PNP approach for data collection can facilitate the ecological monitoring of these vulnerable areas at the landscape scale in a time- and therefore cost-effective manner.

  5. Miniature objective lens for array digital pathology: design improvement based on clinical evaluation

    NASA Astrophysics Data System (ADS)

    McCall, Brian; Pierce, Mark; Graviss, Edward A.; Richards-Kortum, Rebecca R.; Tkaczyk, Tomasz S.

    2016-03-01

    A miniature objective designed for digital detection of Mycobacterium tuberculosis (MTB) was evaluated for diagnostic accuracy. The objective was designed for array microscopy but was fabricated and evaluated at this stage of development as a single objective. The counts and diagnoses of patient samples were directly compared for digital detection and standard microscopy, and the results were found to be correlated and highly concordant. The evaluation of this lens by direct comparison to standard fluorescence sputum smear microscopy presented unique challenges and led to new insights into the role played by the system parameters of the microscope. The design parameters, and how they were developed, are reviewed in light of these results. New system parameters are proposed with the goal of easing the challenges of evaluating the miniature objective while maintaining, without over-optimizing, the optical performance that produced these concordant results. A new design is presented that meets and exceeds these criteria.

  6. Water-quality trend analysis and sampling design for the Devils Lake Basin, North Dakota, January 1965 through September 2003

    USGS Publications Warehouse

    Ryberg, Karen R.; Vecchia, Aldo V.

    2006-01-01

    This report presents the results of a study conducted by the U.S. Geological Survey, in cooperation with the North Dakota State Water Commission, the Devils Lake Basin Joint Water Resource Board, and the Red River Joint Water Resource District, to analyze historical water-quality trends in three dissolved major ions, three nutrients, and one dissolved trace element for eight stations in the Devils Lake Basin in North Dakota and to develop an efficient sampling design to monitor future trends. A multiple-regression model was used to detect and remove streamflow-related variability in constituent concentrations. To separate the natural variability in concentration caused by variability in streamflow from the variability in concentration caused by other factors, the base-10 logarithm of daily streamflow was divided into four components: a 5-year streamflow anomaly, an annual streamflow anomaly, a seasonal streamflow anomaly, and a daily streamflow anomaly. The constituent concentrations then were adjusted for streamflow-related variability by removing the 5-year, annual, seasonal, and daily variability. Constituents used for the water-quality trend analysis were evaluated for a step trend, to examine the effect of Channel A on water quality in the basin, and a linear trend, to detect gradual changes with time from January 1980 through September 2003. The fitted upward linear trends for dissolved calcium concentrations during 1980-2003 for two stations were significant. The fitted step trends for dissolved sulfate concentrations for three stations were positive and similar in magnitude. Of the three upward trends, one was significant. The fitted step trends for dissolved chloride concentrations were positive but insignificant. The fitted linear trends for the upstream stations were small and insignificant, but three of the downward trends that occurred during 1980-2003 for the remaining stations were significant. The fitted upward linear trends for dissolved nitrite plus nitrate as nitrogen concentrations during 1987-2003 for two stations were significant. However, concentrations during recent years appear to be lower than those for the 1970s and early 1980s but higher than those for the late 1980s and early 1990s. The fitted downward linear trend for dissolved ammonia concentrations for one station was significant. The fitted linear trends for total phosphorus concentrations for two stations were significant. Upward trends for total phosphorus concentrations occurred from the late 1980s to 2003 for most stations, but a small and insignificant downward trend occurred for one station. Continued monitoring will be needed to determine if the recent trend toward higher dissolved nitrite plus nitrate as nitrogen and total phosphorus concentrations continues in the future. For continued monitoring of water-quality trends in the upper Devils Lake Basin, an efficient sampling design consists of five major-ion, nutrient, and trace-element samples per year at three existing stream stations and at three existing lake stations. This sampling design requires the collection of 15 stream samples and 15 lake samples per year rather than 16 stream samples and 20 lake samples per year as in the 1992-2003 program. Thus, the design would result in a program that is less costly and more efficient than the 1992-2003 program but that still would provide the data needed to monitor water-quality trends in the Devils Lake Basin.
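
    A minimal sketch of the flow-decomposition idea described above, assuming nested centred moving averages as the anomaly definitions; the report's actual regression model is more elaborate.

```python
# Minimal sketch: split log10(daily streamflow) into 5-year, annual,
# seasonal, and daily components via nested centered moving averages.
# Window lengths follow the component names; this shows only the
# flow-decomposition idea, not the report's full regression model.
import numpy as np
import pandas as pd

def decompose_log_flow(q: pd.Series) -> pd.DataFrame:
    """q: daily streamflow indexed by date."""
    logq = np.log10(q)
    m5y = logq.rolling(5 * 365, center=True, min_periods=365).mean()
    m1y = logq.rolling(365, center=True, min_periods=90).mean()
    m90 = logq.rolling(90, center=True, min_periods=30).mean()
    return pd.DataFrame({
        "anom_5yr": m5y - logq.mean(),  # long-term wet/dry state
        "anom_annual": m1y - m5y,       # year-scale departure
        "anom_seasonal": m90 - m1y,     # season-scale departure
        "anom_daily": logq - m90,       # day-to-day variability
    })

# Concentrations would then be regressed on these four anomalies (plus
# trend terms), and trends tested on the flow-adjusted residuals.
```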

  7. Establishing Interpretive Consistency When Mixing Approaches: Role of Sampling Designs in Evaluations

    ERIC Educational Resources Information Center

    Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.

    2013-01-01

    The goal of this chapter is to recommend quality criteria to guide evaluators' selections of sampling designs when mixing approaches. First, we contextualize our discussion of quality criteria and sampling designs by discussing the concept of interpretive consistency and how it impacts sampling decisions. Embedded in this discussion are…

  8. Prevalence of Mixed-Methods Sampling Designs in Social Science Research

    ERIC Educational Resources Information Center

    Collins, Kathleen M. T.

    2006-01-01

    The purpose of this mixed-methods study was to document the prevalence of sampling designs utilised in mixed-methods research and to examine the interpretive consistency between interpretations made in mixed-methods studies and the sampling design used. Classification of studies was based on a two-dimensional mixed-methods sampling model. This…

  9. Are there Benefits to Combining Regional Probabilistic Survey and Historic Targeted Environmental Monitoring Data to Improve Our Understanding of Overall Regional Estuary Environmental Status?

    NASA Astrophysics Data System (ADS)

    Dasher, D. H.; Lomax, T. J.; Bethe, A.; Jewett, S.; Hoberg, M.

    2016-02-01

    A regional probabilistic survey of 20 randomly selected stations, where water and sediments were sampled, was conducted over an area of Simpson Lagoon and Gwydyr Bay in the Beaufort Sea adjacent to Prudhoe Bay, Alaska, in 2014. Sampling parameters included water-column temperature, salinity, dissolved oxygen, chlorophyll a, and nutrients, and sediment macroinvertebrates, chemistry (i.e., trace metals and hydrocarbons), and grain size. The 2014 probabilistic survey design allows inferences to be made about environmental status, for instance the spatial or areal distribution of sediment trace metals within the design area sampled. Historically, since the 1970s, a number of monitoring studies have been conducted in this estuary area using a targeted rather than a regional probabilistic design. Targeted, non-random designs were used to assess specific points of interest and cannot be used to make inferences about the distributions of environmental parameters. Because the environmental monitoring objectives of probabilistic and targeted designs differ, there has been limited assessment of whether benefits exist to combining the two approaches. This study evaluates whether a combined approach using the 2014 probabilistic survey sediment trace metal and macroinvertebrate results and historical targeted monitoring data can provide a new perspective on better understanding the environmental status of these estuaries.

  10. Survival distributions impact the power of randomized placebo-phase design and parallel groups randomized clinical trials.

    PubMed

    Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M

    2011-03-01

    The study evaluated the power of the randomized placebo-phase design (RPPD), a new design of randomized clinical trials (RCTs), compared with the traditional parallel groups design, assuming various response time distributions. In the RPPD, all subjects eventually receive the experimental therapy, and exposure to placebo is for only a short, fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, where the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power differed under different response-time distributions. The scenario where the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel groups RCT had higher power compared with the RPPD. The sample size requirement varies depending on the underlying hazard distribution, and the RPPD requires more subjects than the parallel groups design to achieve similar power.
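
    The authors' R program is not available here; the stripped-down Python analogue below simulates exponential response times with the medians quoted above and estimates the power of a parallel two-arm comparison by Monte Carlo with a Mann-Whitney test. RPPD allocation, the Weibull and lognormal scenarios, and censoring are not reproduced.

```python
# Stripped-down power simulation: exponential response times with medians
# 355 d (placebo) and 42 d (drug), arms compared with a Mann-Whitney test,
# power estimated by Monte Carlo.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
med_placebo, med_drug = 355.0, 42.0

def power(n_per_arm, nsim=2000, alpha=0.05):
    scale_p = med_placebo / np.log(2)  # exponential scale from median
    scale_d = med_drug / np.log(2)
    hits = 0
    for _ in range(nsim):
        tp = rng.exponential(scale_p, n_per_arm)
        td = rng.exponential(scale_d, n_per_arm)
        # one-sided test: drug response times tend to be shorter
        if mannwhitneyu(td, tp, alternative="less").pvalue < alpha:
            hits += 1
    return hits / nsim

for n in (5, 10, 15):
    print(f"n per arm = {n:2d}: empirical power = {power(n):.2f}")
```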

  11. Design of a novel system for spectroscopy measurements of the aqueous humor

    NASA Astrophysics Data System (ADS)

    Miller, Joe; Uttamchandani, Deepak G.

    2001-06-01

    The authors report on the design of a system that will enable real-time measurements of (therapeutic) drug concentrations in the anterior chamber of the eye. Currently, the concentration of therapeutic drugs in the anterior chamber is determined by analyzing samples removed from the aqueous humor of laboratory animal eyes. This sampling via paracentesis can be painful and does not provide a continuous measurement. Our system will be far less invasive, removing the need for sampling via paracentesis, and will provide a continuous measurement, enabling a more complete understanding of the kinetics of ophthalmic drugs. A key component in our novel system is a specially constructed contact lens. We report on the design, optimization and manufacture of such a contact lens system capable of directing UV/VIS light in, across and out of the anterior chamber of the eye, thereby enabling absorption spectroscopy measurements of the aqueous humor to be undertaken. The design of the one-piece contact lens/mirror system was achieved using the Zemax optical design software package, and the lens was fabricated from synthetic fused silica. Results from modeling of the lens and from experimental measurements of light propagation across the anterior chamber of animal eyes assisted by the lens will be reported.

  12. Measurements with an airborne, autotracking, external-head sunphotometer

    NASA Technical Reports Server (NTRS)

    Russell, P. B.; Matsumoto, T.; Banta, V. J.; Mina, C.; Colburn, D. S.; Pueschel, R. F.; Livingston, J. M.

    1986-01-01

    Design and performance features and sample results from use of a NASA airborne tracking sunphotometer (ATS) are described. The ATS was devised to obtain continuous vertical profiles of optical depth and transmissivity, first from a CV-990 aircraft and then from a modified DC-8 aircraft. Sample results are presented from a 1985 flight flown as part of the SAGE-II calibration mission, which featured detector wavelengths of 380, 450, 600, 860, 940, and 1020 nm and covered flight altitudes from the ground to 10 km.

  13. Simulations and experiments on RITA-2 at PSI

    NASA Astrophysics Data System (ADS)

    Klausen, S. N.; Lefmann, K.; McMorrow, D. F.; Altorfer, F.; Janssen, S.; Lüthy, M.

    The cold-neutron triple-axis spectrometer RITA-2, designed and built at Risø National Laboratory, was installed at the neutron source SINQ at the Paul Scherrer Institute in April/May 2001. In connection with the installation of RITA-2, computer simulations were performed using the neutron ray-tracing package McStas. The simulation results are compared to real experimental results obtained with a powder sample. In particular, the flux at the sample position and the resolution function of the spectrometer are investigated.

  14. A nonparametric method to generate synthetic populations to adjust for complex sampling design features.

    PubMed

    Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E

    2014-06-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and we use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered, unequal-probability-of-selection sample designs.
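
    A minimal sketch of the weighted Pólya-urn form of the finite population Bayesian bootstrap that this line of work builds on: the N − n unobserved units are drawn one at a time with probability proportional to (w_i − 1) plus a bonus for units already redrawn. The authors' full method, and the NHIS/MEPS applications, add steps not shown here.

```python
# Minimal weighted finite-population Bayesian bootstrap (Polya urn) sketch:
# given n sampled units with survey weights w_i summing to N, draw the
# N - n unobserved units one at a time; probabilities follow the weighted
# urn scheme this literature builds on (assumed form, for illustration).
import numpy as np

rng = np.random.default_rng(42)

def fpbb_population(values, weights):
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    n = len(values)
    N = int(round(weights.sum()))
    l = np.zeros(n)                      # times each unit has been redrawn
    draws = []
    for _ in range(N - n):
        p = weights - 1 + l * (N - n) / n
        p = np.clip(p, 0, None)
        i = rng.choice(n, p=p / p.sum())
        l[i] += 1
        draws.append(values[i])
    return np.concatenate([values, np.array(draws)])  # synthetic population

# Example: 5 sampled units with unequal weights summing to 20.
pop = fpbb_population([2.1, 3.5, 1.8, 4.0, 2.9], [6, 2, 5, 3, 4])
print(len(pop), pop.mean())
```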

  15. A nonparametric method to generate synthetic populations to adjust for complex sampling design features

    PubMed Central

    Dong, Qi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and we use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered, unequal-probability-of-selection sample designs. PMID:29200608

  16. ATMAD: robust image analysis for Automatic Tissue MicroArray De-arraying.

    PubMed

    Nguyen, Hoai Nam; Paveau, Vincent; Cauchois, Cyril; Kervrann, Charles

    2018-04-19

    Over the last two decades, an innovative technology called Tissue Microarray (TMA), which combines multi-tissue and DNA microarray concepts, has been widely used in the field of histology. It consists of a collection of several (up to 1000 or more) tissue samples that are assembled onto a single support - typically a glass slide - according to a design grid (array) layout, in order to allow multiplex analysis by treating numerous samples under identical and standardized conditions. However, during the TMA manufacturing process, the sample positions can be highly distorted from the design grid due to imprecision when assembling tissue samples and deformation of the embedding waxes. Consequently, these distortions may lead to severe errors in (histological) assay results when the sample identities are mismatched between the design and its manufactured output. The development of a robust method for de-arraying TMA, which localizes and matches TMA samples with their design grid, is therefore crucial to overcome the bottleneck of this prominent technology. In this paper, we propose an Automatic, fast and robust TMA De-arraying (ATMAD) approach dedicated to images acquired with brightfield and fluorescence microscopes (or scanners). First, tissue samples are localized in the large image by applying a locally adaptive thresholding on the isotropic wavelet transform of the input TMA image. To reduce false detections, a parametric shape model is considered for segmenting ellipse-shaped objects at each detected position. Segmented objects that do not meet the size and roundness criteria are discarded from the list of tissue samples before being matched with the design grid. Sample matching is performed by estimating the TMA grid deformation under the thin-plate model. Finally, thanks to the estimated deformation, the true tissue samples that were preliminarily rejected in the early image processing step are recognized by running a second segmentation step. We developed a novel de-arraying approach for TMA analysis. By combining wavelet-based detection, active contour segmentation, and thin-plate spline interpolation, our approach is able to handle TMA images with high dynamic range, poor signal-to-noise ratio, complex background, and non-linear deformation of the TMA grid. In addition, the deformation estimation produces quantitative information to assess the manufacturing quality of TMAs.
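
    As a simplified stand-in for the sample-to-grid matching step, the sketch below assigns detected core centres to design-grid positions by minimum total squared distance using the Hungarian algorithm; ATMAD itself estimates a thin-plate-spline deformation rather than this rigid assignment.

```python
# Simplified stand-in for TMA sample-to-grid matching: assign detected core
# centers to design-grid positions by minimum total squared distance via the
# Hungarian algorithm (scipy.optimize.linear_sum_assignment).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Design grid: 4 x 4 array of TMA core positions (arbitrary units).
gx, gy = np.meshgrid(np.arange(4), np.arange(4))
grid = np.c_[gx.ravel(), gy.ravel()].astype(float)

# Fake "detections": the grid jittered and shifted, rows in random order.
rng = np.random.default_rng(0)
detected = rng.permutation(grid + 0.15 * rng.standard_normal(grid.shape) + 0.3)

cost = cdist(grid, detected, metric="sqeuclidean")
row, col = linear_sum_assignment(cost)  # grid row[i] matched to detection col[i]
print("mean matching error:", np.sqrt(cost[row, col]).mean())
```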

  17. Assessment of long-term gas sampling design at two commercial manure-belt layer barns.

    PubMed

    Chai, Li-Long; Ni, Ji-Qin; Chen, Yan; Diehl, Claude A; Heber, Albert J; Lim, Teng T

    2010-06-01

    Understanding temporal and spatial variations of aerial pollutant concentrations is important for designing air quality monitoring systems. In long-term, continuous air quality monitoring in large livestock and poultry barns, these systems usually use location-shared analyzers and sensors and can only sample air at a limited number of locations. To assess the validity of the gas sampling design at a commercial layer farm, a new methodology was developed to map pollutant gas concentrations using portable sensors under steady-state or quasi-steady-state barn conditions. Three assessment tests were conducted from December 2008 to February 2009 in two manure-belt layer barns. Each barn was 140.2 m long and 19.5 m wide and had 250,000 birds. Each test included four measurements of ammonia and carbon dioxide concentrations at 20 locations that covered all operating fans, including six of the fans used in the long-term sampling that represented three zones along the lengths of the barns, to generate data for complete-barn monitoring. To simulate the long-term monitoring, gas concentrations from the six long-term sampling locations were extracted from the 20 assessment locations. Statistical analyses were performed to test the variances (F-test) and sample means (t-test) between the 6- and 20-sample data. The study clearly demonstrated ammonia and carbon dioxide concentration gradients, characterized by increasing concentrations from the west to the east ends of the barns following the under-cage manure-belt travel direction. Mean concentrations increased from 7.1 to 47.7 parts per million (ppm) for ammonia and from 2303 to 3454 ppm for carbon dioxide from the west to the east of the barns. Variations in mean gas concentrations were much less apparent between the south and north sides of the barns: means were 21.2 and 20.9 ppm for ammonia and 2979 and 2951 ppm for carbon dioxide, respectively. The null hypotheses that the variances and means of the 6- and 20-sample data were equal could not be rejected at alpha = 0.05 (P > 0.05) for either gas. The results showed that the long-term gas sampling design was valid in this instance and suggested that the design in these two barns was among the best achievable with available long-term monitoring instrumentation at reasonable cost.
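
    The variance and mean comparisons described above can be sketched directly with SciPy; the ammonia concentrations below are made up for the example, and the study's actual data layout is not reproduced.

```python
# F-test (equal variances) and t-test (equal means) between 6- and
# 20-location ammonia data, on made-up concentrations in ppm.
import numpy as np
from scipy import stats

nh3_20 = np.array([7.1, 9.4, 12.8, 15.2, 18.9, 21.2, 23.5, 26.8, 29.1, 31.7,
                   33.2, 35.8, 38.4, 40.1, 42.6, 44.3, 45.9, 46.8, 47.2, 47.7])
nh3_6 = nh3_20[[1, 5, 9, 12, 15, 18]]  # the six long-term locations

# Two-sided F-test for equal variances, via the F distribution.
f = nh3_6.var(ddof=1) / nh3_20.var(ddof=1)
dfn, dfd = len(nh3_6) - 1, len(nh3_20) - 1
p_f = 2 * min(stats.f.cdf(f, dfn, dfd), stats.f.sf(f, dfn, dfd))

# Two-sample t-test for equal means.
t, p_t = stats.ttest_ind(nh3_6, nh3_20, equal_var=True)
print(f"F = {f:.2f} (p = {p_f:.2f}); t = {t:.2f} (p = {p_t:.2f})")
```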

  18. The Marshall Islands Data Management Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoker, A.C.; Conrado, C.L.

    1995-09-01

    This report is a resource document of the methods and procedures currently used in the Data Management Program of the Marshall Islands Dose Assessment and Radioecology Project. Since 1973, over 60,000 environmental samples have been collected. Our program includes relational database design, programming and maintenance; sample and information management; sample tracking; quality control; and data entry, evaluation and reduction. The usefulness of scientific databases involves careful planning in order to fulfill the requirements of any large research program. Compilation of scientific results requires consolidation of information from several databases, and incorporation of new information as it is generated. The success in combining and organizing all radionuclide analyses, sample information, and statistical results into a readily accessible form is critical to our project.

  19. Assessment of wadeable stream resources in the driftless area ecoregion in Western Wisconsin using a probabilistic sampling design.

    PubMed

    Miller, Michael A; Colby, Alison C C; Kanehl, Paul D; Blocksom, Karen

    2009-03-01

    The Wisconsin Department of Natural Resources (WDNR), with support from the U.S. EPA, conducted an assessment of wadeable streams in the Driftless Area ecoregion in western Wisconsin using a probabilistic sampling design. This ecoregion encompasses 20% of Wisconsin's land area and contains 8,800 miles of perennial streams. Randomly selected stream sites (n = 60) equally distributed among stream orders 1-4 were sampled. Watershed land use, riparian and in-stream habitat, water chemistry, macroinvertebrate, and fish assemblage data were collected at each true random site and at an associated "modified-random" site on each stream, accessed via the road crossing nearest to the true random site. Targeted least-disturbed reference sites (n = 22) were also sampled to develop reference conditions for various physical, chemical, and biological measures. Cumulative distribution function plots of various measures collected at the true random sites, evaluated against reference-condition thresholds, indicate that high proportions of the random sites (and, by inference, the entire Driftless Area wadeable stream population) show some level of degradation. Study results show no statistically significant differences between the true random and modified-random sample sites for any of the nine physical habitat, 11 water chemistry, seven macroinvertebrate, or eight fish metrics analyzed. In Wisconsin's Driftless Area, 79% of wadeable stream lengths were accessible via road crossings. While further evaluation of the statistical rigor of using a modified-random sampling design is warranted, sampling randomly selected stream sites accessed via the nearest road crossing may provide a more economical way to apply probabilistic sampling in stream monitoring programs.

  20. GENENG: A program for calculating design and off-design performance for turbojet and turbofan engines

    NASA Technical Reports Server (NTRS)

    Koenig, R. W.; Fishbach, L. H.

    1972-01-01

    A computer program entitled GENENG employs component performance maps to perform analytical, steady-state engine cycle calculations. Through a scaling procedure, each of the component maps can be used to represent a family of maps (different design values of pressure ratio, efficiency, weight flow, etc.). Either convergent or convergent-divergent nozzles may be used. Included is a complete FORTRAN 4 listing of the program. Sample results and input explanations are shown for one-spool and two-spool turbojets and for two-spool separate- and mixed-flow turbofans operating at design and off-design conditions.

  1. Early Adolescent Depressive Mood: Direct and Indirect Effects of Attributional Styles and Coping

    ERIC Educational Resources Information Center

    Chan, Siu Mui

    2012-01-01

    The present study used a cross-sectional survey design to examine how adolescent depressive mood was related to attributional styles and coping strategies with a sample of 326 youths (aged 8-14 years). With the cut-off point adopted in the West, 20.9% of the current sample reported depressive symptoms. Regression analysis results show that, with…

  2. How Much Videos Win over Audios in Listening Instruction for EFL Learners

    ERIC Educational Resources Information Center

    Yasin, Burhanuddin; Mustafa, Faisal; Permatasari, Rizki

    2017-01-01

    This study aims at comparing the benefits of using videos instead of audios for improving students' listening skills. This experimental study used a pre-test and post-test control group design. The sample, selected by cluster random sampling, consisted of 32 second-year high school students in each group. The instruments used were…

  3. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E. P.; Poirier, M. R.

    2013-10-29

    Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process in achieving glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. Many of these designs were implemented in an age when the sampling ideas of Pierre Gy were not as widespread as they are today. Nonetheless, the engineers and statisticians used carefully thought-out designs that systematically and economically provided plans for data collection from the DWPF process. Key shared features of the sampling designs used at DWPF and the Gy sampling methodology were the specification of a standard for sample representativeness, an investigation that produced data from the process to study the sampling function, and a decision framework used to assess whether the specification was met based on the data. Without going into detail with regard to the seven errors identified by Pierre Gy, as excellent summaries are readily available such as Pitard [1989] and Smith [2001], SRS engineers understood, for example, that samplers can be biased (Gy's extraction error), and developed plans to mitigate those biases. Experiments that compared installed samplers with more representative samples obtained directly from the tank may not have resulted in systematically partitioning sampling errors into the now well-known error categories of Gy, but did provide overall information on the suitability of sampling systems. Most of the designs in this report are related to the DWPF vessels, not the large SRS Tank Farm tanks. Samples from the DWPF Slurry Mix Evaporator (SME), which contains the feed to the DWPF melter, are characterized using standardized analytical methods with known uncertainty. The analytical error is combined with the established error from sampling and processing in DWPF to determine the melter feed composition.
This composition is used with the known uncertainty of the models in the Product Composition Control System (PCCS) to ensure that the wasteform that is produced is comfortably within the acceptable processing and product performance region. Having the advantage of many years of processing that meets the waste glass product acceptance criteria, the DWPF process has provided a considerable amount of data about itself in addition to the data from many special studies. Demonstrating representative sampling directly from the large Tank Farm tanks is a difficult, if not unsolvable, enterprise due to limited accessibility. However, the consistency and the adequacy of sampling and mixing at SRS could at least be studied under the controlled process conditions based on samples discussed by Ray and others [2012a] in Waste Form Qualification Report (WQR) Volume 2 and the transfers from Tanks 40H and 51H to the Sludge Receipt and Adjustment Tank (SRAT) within DWPF. It is important to realize that the need for sample representativeness becomes more stringent as the material gets closer to the melter, and the tanks within DWPF have been studied extensively to meet those needs.
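
    A minimal sketch of the uncertainty roll-up mentioned above, assuming independent relative error components that combine in quadrature before the composition is screened against the PCCS models; the numbers are illustrative, not DWPF values.

```python
# Sketch of combining independent error sources on a melter-feed component:
# sampling, processing, and analytical relative SDs add in quadrature.
# All values are assumed for illustration.
import math

rel_sd_sampling = 0.020
rel_sd_processing = 0.015
rel_sd_analytical = 0.030

total_rel_sd = math.sqrt(rel_sd_sampling**2
                         + rel_sd_processing**2
                         + rel_sd_analytical**2)
print(f"combined relative SD on a melter-feed component: {total_rel_sd:.3f}")
```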

  4. Journal: A Review of Some Tracer-Test Design Equations for ...

    EPA Pesticide Factsheets

    Determination of the necessary tracer mass, the initial sample-collection time, and the subsequent sample-collection frequency are the three most difficult aspects of a proposed tracer test to estimate prior to conducting the test. To facilitate tracer-mass estimation, 33 mass-estimation equations are reviewed here, 32 of which were evaluated using previously published tracer-test design examination parameters. Comparison of the results produced a wide range of estimated tracer masses, but no means is available by which one equation may be reasonably selected over the others. Each equation produces a simple approximation for tracer mass. Most of the equations are based primarily on estimates or measurements of discharge, transport distance, and suspected transport times. Although the basic field parameters commonly employed are appropriate for estimating tracer mass, the 33 equations are problematic in that they were all probably based on the original developers' experience in a particular field area and not necessarily on measured hydraulic parameters or solute-transport theory. Suggested sampling frequencies are typically based primarily on probable transport distance, with little regard to expected travel times. This too is problematic in that it tends to result in false negatives or data aliasing. Simulations from the recently developed efficient hydrologic tracer-test design methodology (EHTD) were compared with those obtained from 32 of the 33 published tracer-
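
    Most of the reviewed equations formalize a back-of-envelope estimate of the kind sketched below: target peak concentration times discharge times an expected pulse duration. This generic form is for illustration only and is neither one of the 33 reviewed equations nor EHTD.

```python
# Illustrative dimensional estimate of tracer mass:
#   mass ~ (target peak concentration) x (discharge) x (pulse duration).
# All inputs are invented for the example.
target_peak = 20e-6      # g/L (20 ppb) desired peak at the sampling station
discharge = 0.5 * 1000   # L/s (0.5 m3/s)
duration = 12 * 3600     # s, expected width of the tracer pulse

mass_g = target_peak * discharge * duration
print(f"estimated tracer mass: {mass_g / 1000:.1f} kg")
```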

  5. Classifier-Guided Sampling for Complex Energy System Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backlund, Peter B.; Eddy, John P.

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS were developed and tested on a set of benchmark problems. As a domain-specific case study, CGS was used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
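
    A minimal sketch of the classifier-guided sampling loop, with a Gaussian naive Bayes classifier and a cheap toy objective standing in for the Bayesian network classifier and the expensive energy-system objective used in the LDRD work.

```python
# Classifier-guided sampling sketch: train a classifier on evaluated designs
# labeled promising/non-promising, then evaluate only new candidates whose
# posterior probability of being promising clears a threshold.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)

def objective(x):  # stands in for an expensive evaluation
    return np.sum((x - 1) ** 2, axis=-1)

X = rng.integers(0, 4, size=(40, 6)).astype(float)  # discrete design vars
y = objective(X)
labels = (y <= np.median(y)).astype(int)            # 1 = promising

clf = GaussianNB().fit(X, labels)

candidates = rng.integers(0, 4, size=(200, 6)).astype(float)
p_promising = clf.predict_proba(candidates)[:, 1]
keep = candidates[p_promising > 0.6]                # filter before evaluating
if keep.size == 0:                                  # fallback: evaluate all
    keep = candidates
print(f"evaluating {len(keep)} of {len(candidates)} candidates")
best = keep[np.argmin(objective(keep))]
print("best filtered candidate:", best)
```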

  6. Minimizing the Maximum Expected Sample Size in Two-Stage Phase II Clinical Trials with Continuous Outcomes

    PubMed Central

    Wason, James M. S.; Mander, Adrian P.

    2012-01-01

    Two-stage designs are commonly used for Phase II trials. Optimal two-stage designs have the lowest expected sample size for a specific treatment effect, for example, the null value, but can perform poorly if the true treatment effect differs. Here we introduce a design for continuous treatment responses that minimizes the maximum expected sample size across all possible treatment effects. The proposed design performs well for a wider range of treatment effects and so is useful for Phase II trials. We compare the design to a previously used optimal design and show it has superior expected sample size properties. PMID:22651118
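
    A Monte Carlo sketch of the quantity being minimized above: for a fixed two-stage design with futility stopping at the interim, the expected sample size is estimated over a grid of true effects and its maximum reported. The stage sizes and threshold are arbitrary; the paper searches over designs subject to error-rate constraints, which is not shown.

```python
# Expected sample size E[N] of a two-stage design with futility stopping:
# continue to stage 2 only if the interim z-statistic exceeds c1. E[N] is
# estimated by simulation across a grid of standardized treatment effects.
import numpy as np

rng = np.random.default_rng(11)

def expected_n(delta, n1=20, n2=20, c1=0.0, nsim=20000):
    # interim z-statistic on n1 normal(delta, 1) observations
    z1 = delta * np.sqrt(n1) + rng.standard_normal(nsim)
    cont = z1 > c1                 # proceed to stage 2?
    return n1 + n2 * cont.mean()

deltas = np.linspace(-0.5, 1.0, 16)
ess = [expected_n(d) for d in deltas]
print(f"max expected N over effects: {max(ess):.1f}")
print(f"E[N] at the null (delta = 0): {expected_n(0.0):.1f}")
```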

  7. Color filter array design based on a human visual model

    NASA Astrophysics Data System (ADS)

    Parmar, Manu; Reeves, Stanley J.

    2004-05-01

    To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.

  8. [Application of Fourier amplitude sensitivity test in Chinese healthy volunteer population pharmacokinetic model of tacrolimus].

    PubMed

    Guan, Zheng; Zhang, Guan-min; Ma, Ping; Liu, Li-hong; Zhou, Tian-yan; Lu, Wei

    2010-07-01

    In this study, we evaluated the influence of the variance of each parameter on the output of a tacrolimus population pharmacokinetic (PopPK) model in Chinese healthy volunteers, using the Fourier amplitude sensitivity test (FAST). In addition, we estimated the sensitivity index over the whole course of blood sampling, designed different sampling times, and evaluated the quality of the parameter estimates and the efficiency of prediction. It was observed that, besides CL1/F, the sensitivity indices for all of the other four parameters (V1/F, V2/F, CL2/F and k(a)) in the tacrolimus PopPK model were relatively high and changed quickly with time. With an increase in the variance of k(a), its sensitivity indices increased markedly, associated with a significant decrease in the sensitivity indices of the other parameters and an obvious change in peak time as well. According to the NONMEM simulations and the comparison among different fitting results, we found that the sampling time points designed according to FAST surpassed the other time points. This suggests that FAST can assess the sensitivities of model parameters effectively and can assist the design of clinical sampling times and the construction of PopPK models.

  9. Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.

    PubMed

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo

    2016-11-01

    We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for the regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study.

  10. Guidelines for Initiating a Research Agenda: Research Design and Dissemination of Results.

    PubMed

    Delost, Maria E; Nadder, Teresa S

    2014-01-01

    Successful research outcomes require selection and implementation of the appropriate research design. A realistic sampling plan appropriate for the design is essential. Qualitative or quantitative methodology may be utilized, depending on the research question and goals. Quantitative research may be experimental where there is an intervention, or nonexperimental, if no intervention is included in the design. Causation can only be established with experimental research. Popular types of nonexperimental research include descriptive and survey research. Research findings may be disseminated via presentations, posters, and publications, such as abstracts and manuscripts.

  11. Colorimetric-Solid Phase Extraction Technology for Water Quality Monitoring: Evaluation of C-SPE and Debubbling Methods in Microgravity

    NASA Technical Reports Server (NTRS)

    Hazen-Bosveld, April; Lipert, Robert J.; Nordling, John; Shih, Chien-Ju; Siperko, Lorraine; Porter, Marc D.; Gazda, Daniel B.; Rutz, Jeff A.; Straub, John E.; Schultz, John R.; hide

    2007-01-01

    Colorimetric-solid phase extraction (C-SPE) is being developed as a method for in-flight monitoring of spacecraft water quality. C-SPE is based on measuring the change in the diffuse reflectance spectrum of indicator disks following exposure to a water sample. Previous microgravity testing has shown that air bubbles suspended in water samples can cause uncertainty in the volume of liquid passed through the disks, leading to errors in the determination of water quality parameter concentrations. We report here the results of a recent series of C-9 microgravity experiments designed to evaluate manual manipulation as a means to collect bubble-free water samples of specified volumes from water sample bags containing up to 47% air. The effectiveness of manual manipulation was verified by comparing the results from C-SPE analyses of silver(I) and iodine performed in-flight using samples collected and debubbled in microgravity to those performed on-ground using bubble-free samples. The ground and flight results showed excellent agreement, demonstrating that manual manipulation is an effective means for collecting bubble-free water samples in microgravity.

  12. Zero-G Workstation Design

    NASA Technical Reports Server (NTRS)

    Gundersen, R. T.; Bond, R. L.

    1976-01-01

    Zero-g workstations were designed throughout manned spaceflight, based on different criteria and requirements for different programs. The history of the design of these workstations is presented, along with a thorough evaluation of selected Skylab workstations (the best zero-g experience available on the subject). The results were applied to ongoing and future programs, with special emphasis on the correlation of neutral body posture in zero-g to workstation design. Selected samples of shuttle orbiter workstations are shown as currently designed and compared with experience gained during prior programs in terms of man-machine interface design; the evaluations were done in a generic sense to show the methods of applying evaluative techniques.

  13. A novel personal air sampling device for collecting volatile organic compounds: a comparison to charcoal tubes and diffusive badges.

    PubMed

    Rossner, Alan; Farant, Jean-Pierre

    2004-02-01

    Evacuated canisters have been used for many years to collect ambient air samples for gases and vapors. Recently, significant interest has arisen in using evacuated canisters for personal breathing zone sampling as an alternative to sorbent sampling. A novel flow control device was designed and built at McGill University. The flow control device was designed to provide a very low flow rate, <0.5 mL/min, to allow a sample to be collected over an extended period of time. Previous experiments run at McGill have shown agreement between the mathematical and empirical models to predict flow rate. The flow control device combined with an evacuated canister (capillary flow control-canister) was used in a series of experiments to evaluate its performance against charcoal tubes and diffusive badges. Air samples of six volatile organic compounds were simultaneously collected in a chamber using the capillary flow control-canister, charcoal tubes, and diffusive badges. Five different concentrations of the six volatile organic compounds were evaluated. The results from the three sampling devices were compared to each other and to concentration values obtained using an online gas chromatograph (GC). Eighty-four samples of each method were collected for each of the six chemicals. Results indicate that the capillary flow control-canister device compares quite favorably to the online GC and to the charcoal tubes, p > 0.05 for most of the tests. The capillary flow control-canister was found to be more accurate for the compounds evaluated, easier to use, and easier to analyze than charcoal tubes and passive dosimeter badges.

  14. Mixing problems in using indicators for measuring regional blood flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ushioda, E.; Nuwayhid, B.; Tabsh, K.

    A basic requirement for using indicators to measure blood flow is adequate mixing of the indicator with blood upstream of the sampling site. This requirement has traditionally been met by depositing the indicator in the heart and sampling from an artery. Recently, authors have injected microspheres into veins and sampled from venous sites. The present studies were designed to investigate these mixing problems in sheep and rabbits by means of Cardio-Green and labeled microspheres. The indicators were injected at different points in the circulatory system, and blood was sampled at different levels of the venous and arterial systems. Results show the following: (a) When an indicator of small molecular size (Cardio-Green) is allowed to pass through the heart chambers, adequate mixing is achieved, yielding accurate and reproducible results. (b) When any indicator (Cardio-Green or microspheres) is injected into veins and sampling is done at any point in the venous system, mixing is inadequate, yielding flow results that are inconsistent and erratic. (c) For an indicator of large molecular size (microspheres), injecting into the left side of the heart and sampling from arterial sites yield accurate and reproducible results regardless of whether blood is sampled continuously or intermittently.

  15. Design tradeoffs in long-term research for stream salamanders

    USGS Publications Warehouse

    Brand, Adrianne B,; Grant, Evan H. Campbell

    2017-01-01

    Long-term research programs can benefit from early and periodic evaluation of their ability to meet stated objectives. In particular, consideration of the spatial allocation of effort is key. We sampled 4 species of stream salamanders intensively for 2 years (2010–2011) in the Chesapeake and Ohio Canal National Historical Park, Maryland, USA to evaluate alternative distributions of sampling locations within stream networks, and then evaluated via simulation the ability of multiple survey designs to detect declines in occupancy and to estimate dynamic parameters (colonization, extinction) over 5 years for 2 species. We expected that fine-scale microhabitat variables (e.g., cobble, detritus) would be the strongest determinants of occupancy for each of the 4 species; however, we found greater support for all species for models including variables describing position within the stream network, stream size, or stream microhabitat. A monitoring design focused on headwater sections had greater power to detect changes in occupancy and the dynamic parameters in each of 3 scenarios for the dusky salamander (Desmognathus fuscus) and red salamander (Pseudotriton ruber). Results for transect length were more variable, but across all species and scenarios, 25-m transects are most suitable as a balance between maximizing detection probability and describing colonization and extinction. These results inform sampling design and provide a general framework for setting appropriate goals, effort, and duration in the initial planning stages of research programs on stream salamanders in the eastern United States.

  16. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    PubMed

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
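
    A hedged sketch of such a calculation for a binary outcome, assuming the two-period cross-sectional CRXO design effect takes the form 1 + (m − 1)ρ − mη, where ρ is the within-period ICC and η is the between-period within-cluster correlation; this assumed form and all inputs should be checked against the paper's formulae before use.

```python
# CRXO sample size sketch for a binary outcome: individually randomised
# sample size inflated by an assumed two-period cross-sectional CRXO design
# effect DE = 1 + (m - 1)*rho - m*eta. Inputs are invented for illustration.
from math import ceil
from scipy.stats import norm

def crxo_n_per_arm(p1, p2, m, rho, eta, alpha=0.05, power=0.8):
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p1 + p2) / 2
    # standard two-proportion sample size per arm (individual randomisation)
    n_ind = (za + zb) ** 2 * 2 * pbar * (1 - pbar) / (p1 - p2) ** 2
    de = 1 + (m - 1) * rho - m * eta     # assumed CRXO design effect
    n = n_ind * de
    return ceil(n), ceil(n / m)          # subjects per arm, cluster-periods per arm

n_subj, n_cp = crxo_n_per_arm(p1=0.10, p2=0.08, m=100, rho=0.03, eta=0.02)
print(f"~{n_subj} subjects per arm across {n_cp} ICU-periods per arm")
```

    Note how the assumed design effect behaves at its extremes: with η = ρ the cluster effect cancels entirely (DE = 1 − ρ), and with η = 0 the crossover gains nothing over a parallel cluster design (DE = 1 + (m − 1)ρ), which matches the paper's conclusion that the CRXO design reduces the required sample size when outcomes are correlated across periods.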

  17. Evaluation of three-dimensional virtual perception of garments

    NASA Astrophysics Data System (ADS)

    Aydoğdu, G.; Yeşilpinar, S.; Erdem, D.

    2017-10-01

    In recent years, three-dimensional design, dressing and simulation programs have come into prominence in the textile industry. With these programs, the need to produce clothing samples for every design during the design process has been eliminated. Clothing fit, design, pattern, fabric and accessory details, and fabric drape features can be evaluated easily. Also, the body size of the virtual mannequin can be adjusted, so more realistic simulations can be created. Moreover, three-dimensional virtual garment images created by these programs can be used to present the product to the end-user instead of two-dimensional photographs. In this study, a survey was carried out to investigate the visual perception of consumers. The survey was conducted separately for three different garment types. Participants were asked questions about gender, profession, etc., and were asked to compare real samples with artworks or three-dimensional virtual images of the garments. When the survey results were analyzed statistically, it was seen that the demographic profile of participants does not affect visual perception and that three-dimensional virtual garment images reflect the characteristics of the real samples better than artworks for each garment type. It was also reported that there is no difference in perception among the t-shirt, sweatshirt and tracksuit bottom garment types.

  18. Well installation, single-well testing, and particle-size analysis for selected sites in and near the Lost Creek Designated Ground Water Basin, north-central Colorado, 2003-2004

    USGS Publications Warehouse

    Beck, Jennifer A.; Paschke, Suzanne S.; Arnold, L. Rick

    2011-01-01

This report describes results from a groundwater data-collection program completed in 2003-2004 by the U.S. Geological Survey in support of the South Platte Decision Support System and in cooperation with the Colorado Water Conservation Board. Two monitoring wells were installed adjacent to existing water-table monitoring wells. These wells were installed as well pairs with existing wells to characterize the hydraulic properties of the alluvial aquifer and shallow Denver Formation sandstone aquifer in and near the Lost Creek Designated Ground Water Basin. Single-well tests were performed in the 2 newly installed wells and 12 selected existing monitoring wells. Sediment particle size was analyzed for samples collected from the screened-interval depths of each of the 14 wells. Hydraulic-conductivity and transmissivity values were calculated after completion of the single-well tests on each of the selected wells. Recovering water-level data from the single-well tests were analyzed using the Bouwer and Rice method because the test data most closely resembled those obtained from traditional slug tests. Results from the single-well test analyses for the alluvial aquifer indicate a median hydraulic-conductivity value of 3.8 × 10⁻⁵ feet per second and a geometric mean of 3.4 × 10⁻⁵ feet per second. Median and geometric mean transmissivity values in the alluvial aquifer were 8.6 × 10⁻⁴ and 4.9 × 10⁻⁴ feet squared per second, respectively. Single-well test results for the shallow Denver Formation sandstone aquifer indicate a median hydraulic-conductivity value of 5.4 × 10⁻⁶ feet per second and a geometric mean of 4.9 × 10⁻⁶ feet per second. Median and geometric mean transmissivity values for the shallow Denver Formation sandstone aquifer were 4.0 × 10⁻⁵ and 5.9 × 10⁻⁵ feet squared per second, respectively. Hydraulic-conductivity values for the alluvial aquifer in and near the Lost Creek Designated Ground Water Basin generally were greater than those for the Denver Formation sandstone aquifer and less than those reported by previous studies for the alluvial aquifer along the main stem of the South Platte River Basin. Particle sizes were analyzed for a total of 14 samples of material representative of the screened interval in each of the 14 wells tested in this study. Of the 14 samples, 8 represent the alluvial aquifer and 6 represent the Denver Formation sandstone aquifer. The sampled alluvial aquifer material generally contained a greater percentage of large particles (larger than 0.5 mm) than the sampled sandstone aquifer material; conversely, the sandstone aquifer material generally contained a greater percentage of fine particles (smaller than 0.5 mm), consistent with the finding that the alluvial aquifer is more conductive than the sandstone aquifer in the vicinity of the Lost Creek Designated Ground Water Basin.
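    For readers reproducing the summary statistics, the medians and geometric means reported here are straightforward to compute; the geometric mean is the conventional summary for hydraulic conductivity because K tends to be log-normally distributed. The K values below are hypothetical placeholders, not the report's data:

```python
import numpy as np

# Hypothetical slug-test hydraulic conductivities for eight wells (ft/s)
k = np.array([2.1e-5, 3.4e-5, 3.8e-5, 5.0e-5, 6.2e-5, 1.1e-5, 4.4e-5, 3.0e-5])

median_k = np.median(k)
geomean_k = np.exp(np.mean(np.log(k)))   # equivalent to scipy.stats.gmean(k)

print(f"median K  = {median_k:.1e} ft/s")
print(f"geomean K = {geomean_k:.1e} ft/s")
```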

  19. 40 CFR 1065.145 - Gaseous and PM probes, transfer lines, and sampling system components.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... measuring sample flows by designing a passive sampling system that meets the following requirements: (A) The... number of bends, and have no filters. (B) If probes are designed such that they are sensitive to stack... design and construction. Use sample probes with inside surfaces of 300 series stainless steel or, for raw...

  20. 40 CFR 1065.145 - Gaseous and PM probes, transfer lines, and sampling system components.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... measuring sample flows by designing a passive sampling system that meets the following requirements: (A) The... number of bends, and have no filters. (B) If probes are designed such that they are sensitive to stack... design and construction. Use sample probes with inside surfaces of 300 series stainless steel or, for raw...

  1. 40 CFR 1065.145 - Gaseous and PM probes, transfer lines, and sampling system components.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... measuring sample flows by designing a passive sampling system that meets the following requirements: (A) The... number of bends, and have no filters. (B) If probes are designed such that they are sensitive to stack... design and construction. Use sample probes with inside surfaces of 300 series stainless steel or, for raw...

  2. 40 CFR 1065.145 - Gaseous and PM probes, transfer lines, and sampling system components.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... measuring sample flows by designing a passive sampling system that meets the following requirements: (A) The... number of bends, and have no filters. (B) If probes are designed such that they are sensitive to stack... design and construction. Use sample probes with inside surfaces of 300 series stainless steel or, for raw...

  3. 40 CFR 1065.145 - Gaseous and PM probes, transfer lines, and sampling system components.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... measuring sample flows by designing a passive sampling system that meets the following requirements: (A) The... number of bends, and have no filters. (B) If probes are designed such that they are sensitive to stack... design and construction. Use sample probes with inside surfaces of 300 series stainless steel or, for raw...

  4. Description of sampling designs using a comprehensive data structure

    Treesearch

    John C. Byrne; Albert R. Stage

    1988-01-01

    Maintaining permanent plot data with different sampling designs over long periods within an organization, as well as sharing such information between organizations, requires that common standards be used. A data structure for the description of the sampling design within a stand is proposed. It is based on the definition of subpopulations of trees sampled, the rules...

  5. A Mixed Methods Investigation of Mixed Methods Sampling Designs in Social and Health Science Research

    ERIC Educational Resources Information Center

    Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.

    2007-01-01

    A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…

  6. Modular Biopower System Providing Combined Heat and Power for DoD Installations

    DTIC Science & Technology

    2013-12-01

…Cycle Cost evaluation using the experimental results of the 6-month field demonstration and the system's projected cost and performance for the… …premises, which resulted in a significant program delay. After a short period of operation, the custom-designed engine developed mechanical…

  7. Modification of infant hypothyroidism and phenylketonuria screening program using electronic tools.

    PubMed

    Taheri, Behjat; Haddadpoor, Asefeh; Mirkhalafzadeh, Mahmood; Mazroei, Fariba; Aghdak, Pezhman; Nasri, Mehran; Bahrami, Gholamreza

    2017-01-01

Congenital hypothyroidism and phenylketonuria (PKU) are the most common causes of preventable mental retardation in infants worldwide. Timely diagnosis and treatment of these disorders can have lasting effects on the mental development of newborns. However, there are several problems at different stages of screening programs that, along with imposing heavy costs, can reduce the precision of the screening and increase the chance of undiagnosed cases, which in turn can have damaging consequences for society. Given these problems and the importance of information systems in facilitating management and improving the quality of health care, the aim of this study was to improve the screening process for hypothyroidism and PKU in infants with the help of electronic tools. The study is qualitative action research designed to improve the quality of screening, services, performance, implementation effectiveness, and management of the hypothyroidism and PKU screening program in Isfahan province. To this end, web-based software was designed; programming was carried out in Delphi .NET, with SQL Server 2008 used for database management. Given the weaknesses, problems, and limitations of the existing screening program, and the national importance of these diseases, the study resulted in the design of hypothyroidism and PKU screening software for infants in Isfahan province. The inputs and outputs of the software were designed at three levels: the health care centers in charge of the screening program, the provincial reference laboratory, and the health and treatment network of Isfahan province. Features of the software include immediate registration of sample data at the time and place of sampling; the ability for the provincial reference laboratory and the health centers of the different districts to instantly observe, monitor, and follow up on samples; online verification of samples by the reference laboratory; creation of a daily schedule for the reference laboratory; and receipt of results directly from the analysis equipment into the database without the need for user input. Implementation of the screening software increased the quality and efficiency of the screening program, minimized the risk of human error, and solved many of the previous limitations of the program, which were the main goals of the software. It also improved the precision and quality of services provided for these two diseases and the accuracy of data input, by making it possible to enter sample data at the place and time of sampling; this in turn enabled management based on precise data, helped develop a comprehensive database, and improved the satisfaction of service recipients.

  8. Computer method for design of acoustic liners for turbofan engines

    NASA Technical Reports Server (NTRS)

    Minner, G. L.; Rice, E. J.

    1976-01-01

    A design package is presented for the specification of acoustic liners for turbofans. An estimate of the noise generation was made based on modifications of existing noise correlations, for which the inputs are basic fan aerodynamic design variables. The method does not predict multiple pure tones. A target attenuation spectrum was calculated which was the difference between the estimated generation spectrum and a flat annoyance-weighted goal attenuated spectrum. The target spectrum was combined with a knowledge of acoustic liner performance as a function of the liner design variables to specify the acoustic design. The liner design method at present is limited to annular duct configurations. The detailed structure of the liner was specified by combining the required impedance (which is a result of the previous step) with a mathematical model relating impedance to the detailed structure. The design procedure was developed for a liner constructed of perforated sheet placed over honeycomb backing cavities. A sample calculation was carried through in order to demonstrate the design procedure, and experimental results presented show good agreement with the calculated results of the method.
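    The first step of the procedure, forming the target attenuation spectrum as the excess of the estimated generation spectrum over the annoyance-weighted goal, amounts to a band-by-band subtraction. A toy sketch with hypothetical levels (the frequencies and dB values are invented for illustration):

```python
import numpy as np

freqs_hz = np.array([500, 1000, 2000, 4000, 8000])            # analysis bands
generated_db = np.array([118.0, 122.0, 125.0, 121.0, 115.0])  # estimated generation spectrum
goal_db = 110.0                                               # flat annoyance-weighted goal

# The liner must supply at least the positive excess in each band.
target_attenuation = np.clip(generated_db - goal_db, 0.0, None)
print(dict(zip(freqs_hz.tolist(), target_attenuation.tolist())))
```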

  9. Trace element analysis of coal by neutron activation.

    NASA Technical Reports Server (NTRS)

    Sheibley, D. W.

    1973-01-01

    The irradiation, counting, and data reduction scheme is described for an analysis capability of 1000 samples per year. Up to 56 elements are reported on each sample. The precision and accuracy of the method are shown for 25 elements designated as hazardous by the Environmental Protection Agency (EPA). The interference corrections for selenium and ytterbium on mercury and ytterbium on selenium are described. The effect of bromine and antimony on the determination of arsenic is also mentioned. The use of factorial design techniques to evaluate interferences in the determination of mercury, selenium, and arsenic is shown. Some typical trace element results for coal, fly ash, and bottom ash are given.

  10. Trace element analysis of coal by neutron activation

    NASA Technical Reports Server (NTRS)

    Sheibley, D. W.

    1973-01-01

    The irradiation, counting, and data reduction scheme is described for an analysis capability of 1000 samples per year. Up to 56 elements are reported on each sample. The precision and accuracy of the method are shown for 25 elements designated as hazardous by the Environmental Protection Agency (EPA). The interference corrections for selenium and ytterbium on mercury and ytterbium on selenium are described. The effect of bromine and antimony on the determination of arsenic is also mentioned. The use of factorial design techniques to evaluate interferences in the determination of mercury, selenium, and arsenic is shown. Some typical trace element results for coal, fly ash, and bottom ash are given.

  11. Improving the sensitivity and accuracy of gamma activation analysis for the rapid determination of gold in mineral ores.

    PubMed

    Tickner, James; Ganly, Brianna; Lovric, Bojan; O'Dwyer, Joel

    2017-04-01

Mining companies rely on chemical analysis methods to determine concentrations of gold in mineral ore samples. As gold is often mined commercially at concentrations around 1 part per million, any analysis method must provide good sensitivity as well as high absolute accuracy. We describe work to improve both the sensitivity and accuracy of the gamma activation analysis (GAA) method for gold. We present analysis results for several suites of ore samples and discuss the design of a GAA facility intended to replace conventional chemical assay in industrial applications. Copyright © 2017. Published by Elsevier Ltd.

  12. 23 CFR Appendix A to Part 1340 - Sample Design

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... OBSERVATIONAL SURVEYS OF SEAT BELT USE Pt. 1340, App. A Appendix A to Part 1340—Sample Design Following is a description of a sample design that meets the final survey guidelines and, based upon NHTSA's experience in.... This information is intended only as an example of a complying survey design and to provide guidance...

  13. A Facility Specialist Model for Improving Retention of Nursing Home Staff: Results from a Randomized, Controlled Study

    ERIC Educational Resources Information Center

    Pillemer, Karl; Meador, Rhoda; Henderson, Charles, Jr.; Robison, Julie; Hegeman, Carol; Graham, Edwin; Schultz, Leslie

    2008-01-01

    Purpose: This article reports on a randomized, controlled intervention study designed to reduce employee turnover by creating a retention specialist position in nursing homes. Design and Methods: We collected data three times over a 1-year period in 30 nursing homes, sampled in stratified random manner from facilities in New York State and…

  14. Teachers' Improvisation of Instructional Materials for Nigerian Home Economics Curriculum Delivery: Challenges and Strategies

    ERIC Educational Resources Information Center

    Olibie, Eyiuche Ifeoma; Nwabunwanne, Chinyere; Ezenwanne, Dorothy Nkem

    2013-01-01

    This study was designed to ascertain the challenges of improvising instructional materials by Home Economics teachers at the Upper Basic education level in Nigeria, and as a result identify strategies for enhancing improvisation. The study used survey research design based on two research questions. The sample was four hundred and thirty-one Home…

  15. Comprehensive School Counseling Programs and Student Achievement Outcomes: A Comparative Analysis of Ramp versus Non-Ramp Schools

    ERIC Educational Resources Information Center

    Wilkerson, Kevin; Perusse, Rachelle; Hughes, Ashley

    2013-01-01

This study compares school-wide Adequate Yearly Progress (AYP) results in Indiana schools earning the Recognized ASCA Model Program (RAMP) designation (n = 75) with a sample of control schools stratified by level and locale (n = 226). K-12 schools earning the RAMP designation in 2007, 2008, and 2009 comprise the experimental group. Findings indicate…

  16. Fast and robust control of nanopositioning systems: Performance limits enabled by field programmable analog arrays.

    PubMed

    Baranwal, Mayank; Gorugantu, Ram S; Salapaka, Srinivasa M

    2015-08-01

This paper aims at control design and its implementation for robust high-bandwidth precision (nanoscale) positioning systems. Even though modern model-based control-theoretic designs for robust broadband high-resolution positioning have enabled orders-of-magnitude improvement in performance over existing model-independent designs, their scope is severely limited by the inefficacies of digital implementation of the control designs. High-order control laws that result from model-based designs typically have to be approximated with reduced-order systems to facilitate digital implementation. Digital systems, even those with very high sampling frequencies, provide low effective control bandwidth when implementing high-order systems. In this context, field programmable analog arrays (FPAAs) provide a good alternative to digital-logic-based processors, since they enable very high implementation speeds, and with cheaper resources. The superior flexibility of digital systems in terms of the implementable mathematical and logical functions does not give them a significant edge over FPAAs when implementing linear dynamic control laws. In this paper, we pose the control design objectives for positioning systems in different configurations as optimal control problems and demonstrate significant improvements in performance when the resulting control laws are applied using FPAAs as opposed to their digital counterparts. An improvement of over 200% in positioning bandwidth is achieved over an earlier digital signal processor (DSP) based implementation of the same system and same control design, even though the DSP's sampling frequency is about 100 times the desired positioning bandwidth.

  17. Experimental design matters for statistical analysis: how to handle blocking.

    PubMed

    Jensen, Signe M; Schaarschmidt, Frank; Onofri, Andrea; Ritz, Christian

    2018-03-01

Nowadays, evaluation of the effects of pesticides often relies on experimental designs that involve multiple concentrations of the pesticide of interest or multiple pesticides at specific comparable concentrations and, possibly, secondary factors of interest. Unfortunately, the experimental design is often more or less neglected when analysing data. Two data examples were analysed using different modelling strategies. First, in a randomized complete block design, mean heights of maize treated with a herbicide and one of several adjuvants were compared. Second, translocation of an insecticide applied to maize as a seed treatment was evaluated using incomplete data from an unbalanced design with several layers of hierarchical sampling. Extensive simulations were carried out to further substantiate the effects of different modelling strategies. It was shown that results from suboptimal approaches (two-sample t-tests and ordinary ANOVA assuming independent observations) may be both quantitatively and qualitatively different from the results obtained using an appropriate linear mixed model. The simulations demonstrated that the different approaches may lead to differences in coverage percentages of confidence intervals and type 1 error rates, confirming that misleading conclusions can easily happen when an inappropriate statistical approach is chosen. To ensure that experimental data are summarized appropriately, avoiding misleading conclusions, the experimental design should duly be reflected in the choice of statistical approaches and models. We recommend that author guidelines should explicitly point out that authors need to indicate how the statistical analysis reflects the experimental design. © 2017 Society of Chemical Industry.
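    A small simulation in the spirit of the paper's first example, contrasting a naive analysis that ignores blocking with a linear mixed model that reflects it. The block and treatment effects are invented for illustration; statsmodels' mixedlm supplies the random block intercept:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Randomized complete block design: 6 blocks x 4 treatments.
n_blocks, n_trts = 6, 4
df = pd.DataFrame({
    "block": np.repeat(np.arange(n_blocks), n_trts),
    "trt": np.tile(np.arange(n_trts), n_blocks),
})
block_eff = rng.normal(0.0, 2.0, n_blocks)   # block-to-block variation
trt_eff = np.array([0.0, 0.5, 1.0, 1.5])     # true treatment effects
df["y"] = block_eff[df["block"]] + trt_eff[df["trt"]] + rng.normal(0.0, 1.0, len(df))
df["block"] = df["block"].astype(str)        # treat as categorical labels
df["trt"] = df["trt"].astype(str)

# Suboptimal: ordinary ANOVA/OLS assuming independent observations.
ols = smf.ols("y ~ trt", data=df).fit()
# Appropriate: random intercept per block, matching the design.
mixed = smf.mixedlm("y ~ trt", data=df, groups=df["block"]).fit()

print(ols.bse.round(3))    # treatment SEs inflated by the ignored block variance
print(mixed.bse.round(3))  # contrasts judged against within-block error
```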

  18. Design review report for rotary mode core sample truck (RMCST) modifications for flammable gas tanks, preliminary design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corbett, J.E.

    1996-02-01

    This report documents the completion of a preliminary design review for the Rotary Mode Core Sample Truck (RMCST) modifications for flammable gas tanks. The RMCST modifications are intended to support core sampling operations in waste tanks requiring flammable gas controls. The objective of this review was to validate basic design assumptions and concepts to support a path forward leading to a final design. The conclusion reached by the review committee was that the design was acceptable and efforts should continue toward a final design review.

  19. Simulating future uncertainty to guide the selection of survey designs for long-term monitoring

    USGS Publications Warehouse

    Garman, Steven L.; Schweiger, E. William; Manier, Daniel J.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.

    2012-01-01

    A goal of environmental monitoring is to provide sound information on the status and trends of natural resources (Messer et al. 1991, Theobald et al. 2007, Fancy et al. 2009). When monitoring observations are acquired by measuring a subset of the population of interest, probability sampling as part of a well-constructed survey design provides the most reliable and legally defensible approach to achieve this goal (Cochran 1977, Olsen et al. 1999, Schreuder et al. 2004; see Chapters 2, 5, 6, 7). Previous works have described the fundamentals of sample surveys (e.g. Hansen et al. 1953, Kish 1965). Interest in survey designs and monitoring over the past 15 years has led to extensive evaluations and new developments of sample selection methods (Stevens and Olsen 2004), of strategies for allocating sample units in space and time (Urquhart et al. 1993, Overton and Stehman 1996, Urquhart and Kincaid 1999), and of estimation (Lesser and Overton 1994, Overton and Stehman 1995) and variance properties (Larsen et al. 1995, Stevens and Olsen 2003) of survey designs. Carefully planned, “scientific” (Chapter 5) survey designs have become a standard in contemporary monitoring of natural resources. Based on our experience with the long-term monitoring program of the US National Park Service (NPS; Fancy et al. 2009; Chapters 16, 22), operational survey designs tend to be selected using the following procedures. For a monitoring indicator (i.e. variable or response), a minimum detectable trend requirement is specified, based on the minimum level of change that would result in meaningful change (e.g. degradation). A probability of detecting this trend (statistical power) and an acceptable level of uncertainty (Type I error; see Chapter 2) within a specified time frame (e.g. 10 years) are specified to ensure timely detection. Explicit statements of the minimum detectable trend, the time frame for detecting the minimum trend, power, and acceptable probability of Type I error (α) collectively form the quantitative sampling objective.
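    The quantitative sampling objective sketched here (minimum detectable trend, time frame, power, and Type I error) is commonly checked by simulation before a design is adopted. A simplified Monte Carlo power calculation for a linear trend in an annual status estimate, with all effect sizes and variances hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def trend_power(slope, site_sd, years=10, n_sites=30, alpha=0.10, n_sim=2000):
    """Power to detect a linear trend: simulate `years` annual status
    estimates and test the regression slope at level alpha."""
    t = np.arange(years)
    hits = 0
    for _ in range(n_sim):
        # noise in the annual estimate shrinks with the number of sites sampled
        y = slope * t + rng.normal(0.0, site_sd / np.sqrt(n_sites), years)
        hits += stats.linregress(t, y).pvalue < alpha
    return hits / n_sim

# e.g. a decline of 0.05 units/yr, site-level SD of 1.0, 30 sites per year:
print(trend_power(slope=-0.05, site_sd=1.0))   # roughly 0.7-0.8 under these assumptions
```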

  20. SiC-CMC-Zircaloy-4 Nuclear Fuel Cladding Performance during 4-Point Tubular Bend Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    IJ van Rooyen; WR Lloyd; TL Trowbridge

    2013-09-01

The U.S. Department of Energy Office of Nuclear Energy (DOE NE) established the Light Water Reactor Sustainability (LWRS) program to develop technologies and other solutions to improve the reliability, sustain the safety, and extend the life of current reactors. The Advanced LWR Nuclear Fuel Development Pathway in the LWRS program encompasses strategic research focused on improving reactor core economics and safety margins through the development of an advanced fuel cladding system. Recent investigations of potential options for “accident tolerant” nuclear fuel systems point to the potential benefits of silicon carbide (SiC) cladding. One of the proposed SiC-based fuel cladding designs being investigated incorporates a SiC ceramic matrix composite (CMC) as a structural material supplementing an internal Zircaloy-4 (Zr-4) liner tube, referred to as the hybrid clad design. Characterization of the advanced cladding designs will include a number of out-of-pile (nonnuclear) tests, followed by in-pile irradiation testing of the most promising designs. One of the out-of-pile characterization tests measures the mechanical properties of the cladding tube using four-point bend testing. Although the material properties of the different subsystems (materials) will be determined separately, in this paper we present results of 4-point bending tests performed on fully assembled hybrid cladding tube mock-ups, an assembled Zr-4 cladding tube mock-up as a standard, and initial testing results on bare SiC-CMC sleeves to assist in defining design parameters. The hybrid mock-up samples incorporated SiC-CMC sleeves fabricated with 7 polymer impregnation and pyrolysis (PIP) cycles. To provide comparative information, both 1- and 2-ply braided SiC-CMC sleeves were used in this development study. Preliminary stress simulations were performed using the BISON nuclear fuel performance code to show the stress distribution differences for varying lengths between loading points and clad configurations. The 2-ply sleeve samples show a higher bending moment than the 1-ply sleeve samples; this applies to both the hybrid mock-up and bare SiC-CMC sleeve samples. Both the 1- and 2-ply hybrid mock-up samples showed higher bend stiffness and strength than the standard Zr-4 mock-up sample. Characterization of the hybrid mock-up samples showed signs of distress and preliminary signs of fraying at the protective Zr-4 sleeve areas for the 1-ply SiC-CMC sleeve. In addition, the microstructure of the SiC matrix near the cracks at the region of highest compressive bending strain shows significant cracking and flaking, whereas the 2-ply SiC-CMC sleeve samples showed a more bonded, cohesive SiC matrix structure. This cracking and fraying raises concern about increased fretting during actual use of the design. Tomography proved a successful tool for identifying open porosity during pre-test characterization. Although there is currently insufficient data to make conclusive statements regarding the overall merit of the hybrid cladding design, preliminary characterization of this novel design has been demonstrated.

  1. Application of Plackett-Burman and Doehlert designs for optimization of selenium analysis in plasma with electrothermal atomic absorption spectrometry.

    PubMed

    El Ati-Hellal, Myriam; Hellal, Fayçal; Hedhili, Abderrazek

    2014-10-01

The aim of this study was the optimization of selenium determination in plasma samples by electrothermal atomic absorption spectrometry (ETAAS) using experimental design methodology. Eleven variables that could influence selenium analysis in human blood plasma by ETAAS were evaluated with a Plackett-Burman experimental design. These factors were selected from the sample preparation, furnace program, and chemical modification steps. Both absorbance and background signals were chosen as responses in the screening approach. A Doehlert design was used for method optimization. Results showed that only ashing temperature had a statistically significant effect on the selected responses. Optimization with the Doehlert design allowed the development of a reliable method for selenium analysis with ETAAS. Samples were diluted 1/10 with 0.05% (v/v) Triton X-100 + 2.5% (v/v) HNO3 solution. Optimized ashing and atomization temperatures for the nickel modifier were 1070°C and 2270°C, respectively. A detection limit of 2.1 μg L⁻¹ Se was obtained. Accuracy of the method was checked by analysis of selenium in Seronorm™ Trace Elements quality control serum level 1. The developed procedure was applied to the analysis of total selenium in fifteen plasma samples with the standard addition method. Concentrations ranged between 24.4 and 64.6 μg L⁻¹, with a mean of 42.6 ± 4.9 μg L⁻¹. The use of experimental designs allowed the development of a cheap and accurate method for selenium analysis in plasma that could be applied routinely in clinical laboratories. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
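    The screening step relies on the orthogonality of the Plackett-Burman design, which lets 11 two-level factors be assessed for main effects in only 12 runs. A sketch that builds the classic 12-run design by cyclic shifts of the standard generating row (the response values are placeholders):

```python
import numpy as np

def plackett_burman_12():
    """12-run Plackett-Burman design for up to 11 two-level factors."""
    gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
    rows = [np.roll(gen, i) for i in range(11)]
    rows.append(-np.ones(11, dtype=int))          # final all-minus run
    return np.array(rows)

X = plackett_burman_12()
# Columns are orthogonal (X.T @ X = 12I), so each main effect is estimated
# independently of the other ten -- the property exploited in screening.
assert np.all(X.T @ X == 12 * np.eye(11))

y = np.random.default_rng(0).normal(size=12)      # placeholder responses
effects = (X.T @ y) / 6                           # mean(+1 runs) - mean(-1 runs)
print(effects.round(2))
```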

  2. Development and optimization of SPE-HPLC-UV/ELSD for simultaneous determination of nine bioactive components in Shenqi Fuzheng Injection based on Quality by Design principles.

    PubMed

    Wang, Lu; Qu, Haibin

    2016-03-01

A method combining solid phase extraction, high performance liquid chromatography, and ultraviolet/evaporative light scattering detection (SPE-HPLC-UV/ELSD) was developed according to Quality by Design (QbD) principles and used to assay nine bioactive compounds in a botanical drug, Shenqi Fuzheng Injection. Risk assessment and a Plackett-Burman design were utilized to evaluate the impact of 11 factors on the resolutions and signal-to-noise ratios of chromatographic peaks. Multiple regression and Pareto ranking analysis indicated that the sorbent mass, sample volume, flow rate, column temperature, evaporator temperature, and gas flow rate were statistically significant (p < 0.05) in this procedure. Furthermore, a Box-Behnken design combined with response surface analysis was employed to study the relationships between the quality of the SPE-HPLC-UV/ELSD analysis and the four significant factors, i.e., flow rate, column temperature, evaporator temperature, and gas flow rate. An analytical design space for SPE-HPLC-UV/ELSD was then constructed from calculated Monte Carlo probabilities. In the presented approach, the operating parameters of sample preparation, chromatographic separation, and compound detection were investigated simultaneously. Eight elements of method validation, i.e., system-suitability tests, method robustness/ruggedness, sensitivity, precision, repeatability, linearity, accuracy, and stability, were assessed at a selected working point. These results revealed that QbD principles are suitable for the development of analytical procedures for samples in complex matrices. Meanwhile, the analytical quality and method robustness were validated by the analytical design space. The presented strategy provides a tutorial on the development of a robust QbD-compliant quantitative method for samples in complex matrices.
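    For the optimization step, a Box-Behnken design for the four significant factors can be generated combinatorially: every ±1 combination over each pair of factors with the remaining factors at their centre level, plus replicated centre points (three here, an assumption, since the paper does not state its run count):

```python
from itertools import combinations
import numpy as np

def box_behnken(k, n_center=3):
    """Box-Behnken design for k >= 3 three-level factors (coded -1, 0, +1)."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * k] * n_center                  # replicated centre points
    return np.array(runs)

# Four factors: flow rate, column temp., evaporator temp., gas flow rate.
X = box_behnken(4)
print(X.shape)   # (27, 4): 6 factor pairs x 4 sign combinations + 3 centres
```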

  3. An adaptive two-stage sequential design for sampling rare and clustered populations

    USGS Publications Warehouse

    Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.

    2008-01-01

How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.
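    A toy simulation of the idea, assuming a simple adaptive rule: allocate the remaining second-stage budget to sampled primary units in proportion to their first-stage counts. The population, unit sizes, and budget are invented, and the paper's actual allocation rule and estimators are more refined:

```python
import numpy as np

rng = np.random.default_rng(42)

# Rare, clustered population: 40 primary units of 50 secondary units each;
# roughly 10% of primary units contain the species, at high local density.
units = [rng.poisson(8, 50) if rng.random() < 0.1 else np.zeros(50, int)
         for _ in range(40)]

def adaptive_two_stage(units, n_primary=10, m_first=5, extra_budget=60):
    chosen = rng.choice(len(units), size=n_primary, replace=False)
    first = {u: rng.choice(units[u], size=m_first, replace=False) for u in chosen}
    totals = {u: s.sum() for u, s in first.items()}
    grand = sum(totals.values())
    # Adaptive step: extra second-stage effort flows to high-abundance units.
    return {u: (round(extra_budget * t / grand) if grand else 0)
            for u, t in totals.items()}

print(adaptive_two_stage(units))   # most sampled units get 0; hot spots get the budget
```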

  4. Estimation of sample size and testing power (part 5).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference tests for quantitative and qualitative data with the single-group design, the paired design, or the crossover design. Specifically, it introduced formulas for sample size and testing power estimation for difference tests under these three designs, showed how to carry out the calculations with the formulas and with the POWER procedure of SAS software, and elaborated with examples, which will help researchers implement the repetition principle.
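    The normal-approximation formulas behind such calculations are easy to script. A sketch for the two-sample and paired cases; it mirrors standard textbook formulas rather than the article's exact formulas or SAS POWER settings:

```python
from math import ceil
from scipy.stats import norm

def n_difference_test(delta, sigma, alpha=0.05, power=0.90, paired=False):
    """Sample size for a difference test on quantitative data.

    Two-sample design: n per group for mean difference `delta`, common SD
    `sigma`. Paired/crossover design: `sigma` is the SD of within-pair
    differences and the result is the number of pairs.
    """
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    k = 1 if paired else 2
    return ceil(k * ((za + zb) * sigma / delta) ** 2)

print(n_difference_test(delta=5, sigma=10))               # 85 per group
print(n_difference_test(delta=5, sigma=8, paired=True))   # 27 pairs
```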

  5. Off-design performance loss model for radial turbines with pivoting, variable-area stators

    NASA Technical Reports Server (NTRS)

    Meitner, P. L.; Glassman, A. J.

    1980-01-01

An off-design performance loss model was developed for variable-stator (pivoted vane) radial turbines through analytical modeling and experimental data analysis. Stator loss is determined by a viscous loss model; stator vane end-clearance leakage effects are determined by a clearance flow model. Rotor loss coefficients were obtained by analyzing the experimental data from a turbine rotor previously tested with six stators having throat areas from 20 to 144 percent of the design area, and were correlated with stator-to-rotor throat area ratio. An incidence loss model was selected to obtain best agreement with experimental results. Predicted turbine performance is compared with experimental results for the design rotor as well as with results for extended and cutback versions of the rotor. Sample calculations were made to show the effects of stator vane end-clearance leakage.

  6. Stratified Sampling Design Based on Data Mining

    PubMed Central

    Kim, Yeonkook J.; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon

    2013-01-01

    Objectives To explore classification rules based on data mining methodologies which are to be used in defining strata in stratified sampling of healthcare providers with improved sampling efficiency. Methods We performed k-means clustering to group providers with similar characteristics, then, constructed decision trees on cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011 in this study. Results Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and population density of provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of variance, respectively. Conclusions This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea. PMID:24175117
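    A compact sketch of the two-step procedure, k-means clustering followed by a shallow decision tree on the cluster labels, using scikit-learn. The provider-profile variables echo those named in the abstract, but the data distributions are invented:

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
providers = pd.DataFrame({
    "inpatients_per_specialist": rng.gamma(2.0, 50.0, 500),
    "beds": rng.poisson(30, 500),
    "pop_density": rng.lognormal(6.0, 1.0, 500),
})

# Step 1: group providers with similar characteristics.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(providers))

# Step 2: a shallow tree on the cluster labels yields human-readable
# stratification rules (cut-points on the original variables).
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(providers, labels)
print(export_text(tree, feature_names=list(providers.columns)))
```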

  7. Understanding the Links Between Self-Report Emotional Intelligence and Suicide Risk: Does Psychological Distress Mediate This Relationship Across Time and Samples?

    PubMed Central

    Mérida-López, Sergio; Extremera, Natalio; Rey, Lourdes

    2018-01-01

    Objective: In the last decades, increasing attention has been paid to examining psychological resources that might contribute to our understanding of suicide risk. Although Emotional Intelligence (EI) is one dimension that has been linked with decreased suicidal ideation and behaviors, we detected several gaps in the literature in this area regarding the research designs and samples involved. In this research, we aimed to test a mediator model considering self-report EI, psychological distress and suicide risk across samples adopting both cross-sectional and prospective designs in two independent studies. Method: In Study 1, our purpose was to examine the potential role of psychological distress as a mediator in the relationship between self-report EI and suicide risk in a community sample comprised of 438 adults (270 women; mean age: 33.21 years). In Study 2, we sought to examine the proposed mediator model considering a 2-month prospective design in a sample of college students (n = 330 in T1; n = 311 in T2; 264 women; mean age: 22.22 years). Results: In Study 1, we found that psychological distress partially mediated the effect of self-report EI on suicide risk. More interestingly, findings from Study 2 showed that psychological distress fully mediated the relationship between self-report EI and suicide risk at Time 2. Conclusion: These results point out the role of psychological distress as a mediator in the association between self-report EI and suicide risk. These findings suggest an underlying process by which self-report EI may act as a protective factor against suicidal ideation and behaviors. In line with the limitations of our work, plausible avenues for future research and interventions are discussed. PMID:29867607

  8. Structure and properties of clinical coralline implants measured via 3D imaging and analysis.

    PubMed

    Knackstedt, Mark Alexander; Arns, Christoph H; Senden, Tim J; Gross, Karlis

    2006-05-01

    The development and design of advanced porous materials for biomedical applications requires a thorough understanding of how material structure impacts on mechanical and transport properties. This paper illustrates a 3D imaging and analysis study of two clinically proven coral bone graft samples (Porites and Goniopora). Images are obtained from X-ray micro-computed tomography (micro-CT) at a resolution of 16.8 microm. A visual comparison of the two images shows very different structure; Porites has a homogeneous structure and consistent pore size while Goniopora has a bimodal pore size and a strongly disordered structure. A number of 3D structural characteristics are measured directly on the images including pore volume-to-surface-area, pore and solid size distributions, chord length measurements and tortuosity. Computational results made directly on the digitized tomographic images are presented for the permeability, diffusivity and elastic modulus of the coral samples. The results allow one to quantify differences between the two samples. 3D digital analysis can provide a more thorough assessment of biomaterial structure including the pore wall thickness, local flow, mechanical properties and diffusion pathways. We discuss the implications of these results to the development of optimal scaffold design for tissue ingrowth.

  9. Job security at isfahan university of medical sciences: implications on employees and types of contracts.

    PubMed

    Alavi, Seyyed Salman; Alaghemandan, Hamed; Jannatifard, Fereshte

    2013-01-01

Medical universities are among the organizations that serve many individuals. As a result, the employees who work at medical universities should have adequate job qualifications and the requisite conditions for work; job security is one of these conditions. The current study aims to determine the main components of job security among the employees of Isfahan University of Medical Sciences (IUMS). The study had a cross-sectional design. The sample included 300 employees who were selected from the faculties of IUMS and recruited using quota sampling. First, demographic and job security questionnaires were completed by each employee. Then, data were analyzed by descriptive methods and ANOVA in SPSS 16. The results showed no significant difference among the five subscales of the job security questionnaire, and consequently in overall job security among the employees of IUMS, but there was a significant difference in job security between male and female employees and a significant difference based on type of job contract. The lower rate of job security among female employees with temporary job contracts has professional and psychological implications for both the employees and IUMS, which should be considered in designing IUMS professional programs.

  10. Automated design of genomic Southern blot probes

    PubMed Central

    2010-01-01

Background: Southern blotting is a DNA analysis technique that has found widespread application in molecular biology. It has been used for gene discovery and mapping and has diagnostic and forensic applications, including mutation detection in patient samples and DNA fingerprinting in criminal investigations. Southern blotting has been employed as the definitive method for detecting transgene integration and successful homologous recombination in gene-targeting experiments. The technique employs a labeled DNA probe to detect a specific DNA sequence in a complex DNA sample that has been separated by restriction digest and gel electrophoresis. Critically, for the technique to succeed the probe must be unique to the target locus so as not to cross-hybridize to other endogenous DNA within the sample. Investigators routinely employ a manual approach to probe design: a genome browser is used to extract DNA sequence from the locus of interest, which is searched against the target genome using a BLAST-like tool. Ideally a single perfect match is obtained to the target, with little cross-reactivity caused by homologous DNA sequence present in the genome and/or repetitive and low-complexity elements in the candidate probe. This is a labor-intensive process, often requiring several attempts to find a suitable probe for laboratory testing. Results: We have written an informatic pipeline to automatically design genomic Southern blot probes that specifically attempts to optimize the resultant probe, employing a brute-force strategy of generating many candidate probes of acceptable length in the user-specified design window, searching all against the target genome, then scoring and ranking the candidates by uniqueness and repetitive DNA element content. Using these in silico measures we can automatically design probes that we predict to perform as well as, or better than, our previous manual designs, while considerably reducing design time. We went on to experimentally validate a number of these automated designs by Southern blotting. The majority of probes we tested performed well, confirming our in silico prediction methodology and the general usefulness of the software for automated genomic Southern probe design. Conclusions: Software and supplementary information are freely available at: http://www.genes2cognition.org/software/southern_blot PMID:20113467
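    A highly simplified sketch of the brute-force candidate-generation and scoring idea. The window length, thresholds, and soft-masked toy sequence (repeats in lowercase) are our assumptions, and the genome-wide uniqueness search that the real pipeline performs on each candidate is omitted:

```python
def candidate_probes(window_seq, probe_len=600, step=50,
                     max_repeat=0.10, gc_range=(0.40, 0.60)):
    """Slide across the design window, keep candidates with little
    repeat/low-complexity content and acceptable GC, rank best-first."""
    kept = []
    for start in range(0, len(window_seq) - probe_len + 1, step):
        probe = window_seq[start:start + probe_len]
        repeat_frac = sum(c.islower() for c in probe) / probe_len
        gc = sum(c in "GCgc" for c in probe) / probe_len
        if repeat_frac <= max_repeat and gc_range[0] <= gc <= gc_range[1]:
            kept.append({"start": start, "repeat": repeat_frac, "gc": gc})
    return sorted(kept, key=lambda p: p["repeat"])   # least repetitive first

toy_window = ("acgt" * 50) + ("ACGGTCCAGT" * 100)    # 200 masked + 1000 clean bases
print(candidate_probes(toy_window)[:3])
```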

  11. Performance modeling & simulation of complex systems (A systems engineering design & analysis approach)

    NASA Technical Reports Server (NTRS)

    Hall, Laverne

    1995-01-01

Modeling of the Multi-mission Image Processing System (MIPS) will be described as an example of the use of a modeling tool to design a distributed system that supports multiple application scenarios. This paper examines: (a) modeling tool selection, capabilities, and operation (namely NETWORK 2.5 by CACI), (b) pointers for building or constructing a model and how the MIPS model was developed, (c) the importance of benchmarking or testing the performance of equipment/subsystems being considered for incorporation into the design/architecture, (d) the essential step of model validation and/or calibration using the benchmark results, (e) sample simulation results from the MIPS model, and (f) how modeling and simulation analysis affected the MIPS design process by having a supportive and informative impact.

  12. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, specific guidance for making sample size decisions is lacking. Our objective is to guide the design of multiplier-method population size estimation studies that use respondent-driven sampling surveys, so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so, balancing against other potential sources of bias, we advise researchers to consider longer service-attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
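    The estimator N = M / P and its random error can be sketched with the delta method, treating M as known and inflating the variance of the survey proportion by an assumed RDS design effect. All numbers are illustrative, not the Harare estimates:

```python
from math import sqrt
from scipy.stats import norm

def multiplier_estimate(M, p_hat, n, deff=2.0, level=0.95):
    """Population size estimate M / P with a delta-method confidence interval.

    M: unique objects distributed; p_hat: survey proportion reporting
    receipt; n: RDS sample size; deff: assumed RDS design effect.
    """
    z = norm.ppf(0.5 + level / 2)
    var_p = deff * p_hat * (1 - p_hat) / n     # design-adjusted variance of P
    N = M / p_hat
    se_N = M * sqrt(var_p) / p_hat ** 2        # delta method: |dN/dP| * se(P)
    return N, (N - z * se_N, N + z * se_N)

# Precision improves with n, and degrades sharply when p_hat is small:
for n in (200, 500, 1000):
    print(n, multiplier_estimate(M=5000, p_hat=0.25, n=n))
```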

  13. Outcome-Dependent Sampling Design and Inference for Cox’s Proportional Hazards Model

    PubMed Central

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P.; Zhou, Haibo

    2016-01-01

We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small-sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study. PMID:28090134
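    The reweighting idea can be sketched as an inverse-probability-of-sampling weighted Cox fit; lifelines accepts per-row weights. The tiny data set and sampling probabilities are invented, and this is a generic weighted fit rather than the paper's exact estimating equation or optimality criterion:

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time":     [5.0, 8.2, 3.1, 9.9, 2.4, 7.7],
    "event":    [1, 0, 1, 0, 1, 1],
    "exposure": [1.2, 0.3, 2.1, 0.1, 1.8, 0.9],
    # ODS: cases sampled with certainty, non-cases subsampled at 20%
    "p_sample": [1.0, 0.2, 1.0, 0.2, 1.0, 1.0],
})
df["w"] = 1.0 / df["p_sample"]                 # inverse-probability weights

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event",
        weights_col="w", formula="exposure", robust=True)  # robust SEs for weighted fit
cph.print_summary()
```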

  14. Data quality and feasibility of the Experience Sampling Method across the spectrum of severe psychiatric disorders: a protocol for a systematic review and meta-analysis.

    PubMed

    Vachon, Hugo; Rintala, Aki; Viechtbauer, Wolfgang; Myin-Germeys, Inez

    2018-01-18

    Due to a number of methodological advantages and theoretical considerations, more and more studies in clinical psychology research employ the Experience Sampling Method (ESM) as a data collection technique. Despite this growing interest, the absence of methodological guidelines related to the use of ESM has resulted in a large heterogeneity of designs while the potential effects of the design itself on the response behavior of the participants remain unknown. The objectives of this systematic review are to investigate the associations between the design characteristics and the data quality and feasibility of studies relying on ESM in severe psychiatric disorders. We will search for all published studies using ambulatory assessment with patients suffering from major depressive disorder, bipolar disorder, and psychotic disorder or individuals at high risk for these disorders. Electronic database searches will be performed in PubMed and Web of Science with no restriction on the publication date. Two reviewers will independently screen original studies in a title/abstract phase and a full-text phase based on the inclusion criteria. The information related to the design and sample characteristics, data quality, and feasibility will be extracted. We will provide results in terms of a descriptive synthesis, and when applicable, a meta-analysis of the findings will be conducted. Our results will attempt to highlight how the feasibility and data quality of ambulatory assessment might be related to the methodological characteristics of the study designs in severe psychiatric disorders. We will discuss these associations in different subsamples if sufficient data are available and will examine limitations in the reporting of the methods of ambulatory studies in the current literature. The protocol for this systematic review was registered on PROSPERO (PROSPERO 2017: CRD42017060322 ) and is available in full on the University of York website ( http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42017060322 ).

  15. 16 CFR 1616.4 - Sampling and acceptance procedures.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... specimen to one of the three samples. Test each set of three samples and accept or reject each seam design... all the test criteria of § 1616.3(b), accept the seam design. If one or more of the three additional.... Test the sets of three samples and accept or reject the type of trim and design on the same basis as...

  16. 16 CFR 1616.4 - Sampling and acceptance procedures.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... specimen to one of the three samples. Test each set of three samples and accept or reject each seam design... all the test criteria of § 1616.3(b), accept the seam design. If one or more of the three additional.... Test the sets of three samples and accept or reject the type of trim and design on the same basis as...

  17. 16 CFR 1616.4 - Sampling and acceptance procedures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... specimen to one of the three samples. Test each set of three samples and accept or reject each seam design... all the test criteria of § 1616.3(b), accept the seam design. If one or more of the three additional.... Test the sets of three samples and accept or reject the type of trim and design on the same basis as...

  18. Green design application on campus to enhance student’s quality of life

    NASA Astrophysics Data System (ADS)

    Tamiami, H.; Khaira, F.; Fachrudin, A.

    2018-02-01

Green design has become an important consideration in buildings: a green building provides comfort and enhances quality of life (QoL) for its users. The purpose of this research is to analyze how green design application on campus can enhance students' QoL. The research was conducted at three campuses in North Sumatera Province, namely Universitas Sumatera Utara (USU), Universitas Negeri Medan (Unimed), and Universitas Medan Area (UMA), all of which have abundant vegetation, open space, and multi-mass buildings, and compared green design application and QoL across the three universities. The independent variables focus on energy efficiency and conservation (EEC), indoor health and comfort (IHC), and building environment management (BEM); the dependent variable is QoL. The research uses quantitative methods with questionnaire survey techniques. The population is students from the three universities, with a sample of 50 students from each university, and the analysis uses multiple regression. The results show that green design application may enhance students' QoL; a campus should therefore have good green design application to enhance students' QoL and provide comfort.

  19. Job Security at Isfahan University of Medical Sciences: Implications on Employees and Types of Contracts

    PubMed Central

    Alavi, Seyyed Salman; Alaghemandan, Hamed; Jannatifard, Fereshte

    2013-01-01

Introduction: Medical universities are among the organizations that serve many individuals. As a result, the employees who work at medical universities should have adequate job qualifications and the requisite conditions for work; job security is one of these conditions. The current study aims to determine the main components of job security among the employees of Isfahan University of Medical Sciences (IUMS). Method and materials: The study had a cross-sectional design. The sample included 300 employees who were selected from the faculties of IUMS and recruited using quota sampling. First, demographic and job security questionnaires were completed by each employee. Then, data were analyzed by descriptive methods and ANOVA in SPSS 16. Results: The study showed no significant difference among the five subscales of the job security questionnaire, and consequently in overall job security among the employees of IUMS, but there was a significant difference in job security between male and female employees and a significant difference based on type of job contract. Discussion: The lower rate of job security among female employees with temporary job contracts has professional and psychological implications for both the employees and IUMS, which should be considered in designing IUMS professional programs. PMID:23687464

  20. Baseline Design Compliance Matrix for the Rotary Mode Core Sampling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LECHELT, J.A.

    2000-10-17

The purpose of the design compliance matrix (DCM) is to provide a single-source document of all design requirements associated with the fifteen subsystems that make up the rotary mode core sampling (RMCS) system. It is intended to be the baseline requirement document for the RMCS system and to be used in governing all future design and design verification activities associated with it. This document is the DCM for the RMCS system used on Hanford single-shell radioactive waste storage tanks. This includes the Exhauster System, Rotary Mode Core Sample Trucks, Universal Sampling System, Diesel Generator System, Distribution Trailer, X-Ray Cart System, Breathing Air Compressor, Nitrogen Supply Trailer, Casks and Cask Truck, Service Trailer, Core Sampling Riser Equipment, Core Sampling Support Trucks, Foot Clamp, Ramps and Platforms and Purged Camera System. Excluded items are tools such as light plants and light stands. Other items such as the breather inlet filter are covered by a different design baseline. In this case, the inlet breather filter is covered by the Tank Farms Design Compliance Matrix.
