Song, Yongxin; Li, Mengqi; Pan, Xinxiang; Wang, Qi; Li, Dongqing
2015-02-01
An electrokinetic microfluidic chip is developed to detect and sort target cells by size from human blood samples. Target-cell detection is achieved by a differential resistive pulse sensor (RPS) based on the size difference between the target cell and other cells. Once a target cell is detected, the detected RPS signal automatically actuates an electromagnetic pump built into a microchannel to push the target cell into a collecting channel. This method was applied to automatically detect and sort A549 cells and T-lymphocytes from a peripheral fingertip blood sample. The viability of A549 cells sorted in the collecting well was verified by Hoechst 33342 and propidium iodide staining. The results show that as many as 100 target cells per minute can be sorted from the sample solution, making the method particularly suitable for sorting very rare target cells, such as circulating tumor cells. The actuation of the electromagnetic valve has no influence on RPS cell detection and the subsequent cell-sorting process. The viability of the collected A549 cells is not impacted by the applied electric field when the cells pass the RPS detection area. The device described in this article is simple, automatic, and label-free and has wide applications in size-based rare target cell sorting for medical diagnostics. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Patel, Nitin R; Ankolekar, Suresh
2007-11-30
Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
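As a hedged illustration of the hybrid Bayesian-frequentist idea in this abstract, the sketch below grid-searches the per-arm sample size that maximizes expected profit, where the probability of trial success averages classical power over a prior on the treatment effect. All priors, revenue, and cost figures are hypothetical placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical inputs: prior on true standardized effect delta ~ N(mu0, tau0^2),
# revenue if the trial succeeds, and a per-patient cost.
mu0, tau0 = 0.3, 0.2
revenue, cost_per_patient = 500e6, 20e3
alpha = 0.025  # one-sided regulatory significance level

def expected_profit(n_per_arm):
    """Expected profit = revenue * P(success) - trial cost.

    P(success) averages classical power over the Bayesian prior on the
    effect (an 'assurance'-style calculation). A fixed seed keeps the
    same prior draws across candidate sample sizes.
    """
    z_alpha = norm.ppf(1 - alpha)
    deltas = np.random.default_rng(0).normal(mu0, tau0, 100_000)
    se = np.sqrt(2.0 / n_per_arm)            # sd of effect estimate (sigma = 1)
    power = norm.cdf(deltas / se - z_alpha)  # power given each sampled delta
    return revenue * power.mean() - cost_per_patient * 2 * n_per_arm

# Grid search for the profit-maximizing per-arm sample size.
grid = np.arange(50, 2001, 25)
best_n = max(grid, key=expected_profit)
print(best_n, expected_profit(best_n))
```

Because larger trials raise the success probability only toward a prior-limited ceiling while costs grow linearly, the expected profit has an interior maximum, which is the paper's central observation.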
Visual search by chimpanzees (Pan): assessment of controlling relations.
Tomonaga, M
1995-01-01
Three experimentally sophisticated chimpanzees (Pan), Akira, Chloe, and Ai, were trained on visual search performance using a modified multiple-alternative matching-to-sample task in which a sample stimulus was followed by the search display containing one target identical to the sample and several uniform distractors (i.e., negative comparison stimuli were identical to each other). After they acquired this task, they were tested for transfer of visual search performance to trials in which the sample was not followed by the uniform search display (odd-item search). Akira showed positive transfer of visual search performance to odd-item search even when the display size (the number of stimulus items in the search display) was small, whereas Chloe and Ai showed a transfer only when the display size was large. Chloe and Ai used some nonrelational cues such as perceptual isolation of the target among uniform distractors (so-called pop-out). In addition to the odd-item search test, various types of probe trials were presented to clarify the controlling relations in multiple-alternative matching to sample. Akira showed a decrement of accuracy as a function of the display size when the search display was nonuniform (i.e., each "distractor" stimulus was not the same), whereas Chloe and Ai showed perfect performance. Furthermore, when the sample was identical to the uniform distractors in the search display, Chloe and Ai never selected an odd-item target, but Akira selected it when the display size was large. These results indicated that Akira's behavior was controlled mainly by relational cues of target-distractor oddity, whereas an identity relation between the sample and the target strongly controlled the performance of Chloe and Ai. PMID:7714449
Lee, Paul H; Tse, Andy C Y
2017-05-01
There are limited data on the quality of reporting of information essential for replication of the calculation, as well as on the accuracy of the sample size calculation. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed, and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and calculated sample size was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size on trial registries; about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
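The replication check behind the review's "percentage difference" figure can be sketched with the standard two-sample z-approximation; the significance level, power, effect size, and reported n below are hypothetical examples, not values from the review.

```python
import math
from scipy.stats import norm

def n_per_group(alpha, power, effect_size):
    """Two-sided two-sample z-approximation:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2 per group,
    where d is the standardized effect size."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 / effect_size ** 2)

calculated = n_per_group(alpha=0.05, power=0.80, effect_size=0.5)
reported = 64  # hypothetical figure reported in a paper
pct_diff = 100 * (reported - calculated) / calculated
print(calculated, f"{pct_diff:+.1f}%")  # 63, +1.6%
```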
NASA Astrophysics Data System (ADS)
Guerrero, C.; Zornoza, R.; Gómez, I.; Mataix-Solera, J.; Navarro-Pedreño, J.; Mataix-Beneyto, J.; García-Orenes, F.
2009-04-01
Near infrared (NIR) reflectance spectroscopy offers important advantages because it is a non-destructive technique, the pre-treatments needed for samples are minimal, and the spectrum of a sample is obtained in less than 1 minute without the need for chemical reagents. For these reasons, NIR is a fast and cost-effective method. Moreover, NIR allows the analysis of several constituents or parameters simultaneously from the same spectrum once it is obtained. To this end, a necessary step is the development of soil spectral libraries (sets of samples analysed and scanned) and calibrations (using multivariate techniques). The calibrations should contain the variability of the soils of the target site in which the calibration is to be used. This premise is often not easy to fulfil, especially in recently developed libraries. A classical way to solve this problem is through the repopulation of libraries and the subsequent recalibration of the models. In this work we studied the changes in the accuracy of the predictions as a consequence of the successive addition of samples during repopulation. In general, calibrations with a high number of samples and high diversity are desired. However, we hypothesized that calibrations with fewer samples (smaller size) would absorb the spectral characteristics of the target site more easily. Thus, we suspected that the size of the calibration (model) to be repopulated could be important, and we therefore also studied its effect on the accuracy of predictions of the repopulated models. In this study we used those spectra of our library which contained data on soil Kjeldahl nitrogen (NKj) content (nearly 1500 samples). First, the spectra from the target site were removed from the spectral library. Then, different quantities of samples from the library were selected (representing 5, 10, 25, 50, 75 and 100% of the total library). These samples were used to develop calibrations of different sizes. We used partial least squares regression, with leave-one-out cross-validation, as the calibration method. Two methods were used to select the different quantities (model sizes) of samples: (1) Based on Characteristics of Spectra (BCS), and (2) Based on NKj Values of Samples (BVS). Both methods tried to select representative samples. Each of the calibrations (containing 5, 10, 25, 50, 75 or 100% of the total samples of the library) was repopulated with samples from the target site and then recalibrated (by leave-one-out cross-validation). This procedure was sequential: in each step, 2 samples from the target site were added to the models, which were then recalibrated. This process was repeated 10 times, for a total of 20 added samples. A local model was also created with the 20 samples used for repopulation. The repopulated, non-repopulated and local calibrations were used to predict the NKj content of those samples from the target site not included in the repopulations. To measure the accuracy of the predictions, the r2, RMSEP and slopes were calculated by comparing predicted with analysed NKj values. This scheme was repeated for each of the four target sites studied. In general, few differences were found between results obtained with the BCS and BVS models. We observed that the repopulation of models increased the r2 of the predictions in sites 1 and 3. Repopulation caused only slight changes in the r2 of the predictions in sites 2 and 4, perhaps due to the high initial values (r2 > 0.90 using non-repopulated models).
As a consequence of repopulation, the RMSEP decreased in all the sites except site 2, where a very low RMSEP was obtained before repopulation (0.4 g kg⁻¹). The slopes tended to approach 1, but this value was reached only in site 4, after repopulation with 20 samples. In sites 3 and 4, accurate predictions were obtained using the local models. Predictions obtained with models of similar size (similar %) were averaged with the aim of describing the main patterns. Predictions obtained with larger models were not more accurate (in r2) than those obtained with smaller models. After repopulation, the RMSEP of predictions using models of smaller size (5, 10 and 25% of the samples of the library) was lower than the RMSEP obtained with larger sizes (75 and 100%), indicating that small models can more easily integrate the variability of the soils from the target site. The results suggest that small calibrations could be repopulated and "converted" into local calibrations. Accordingly, most of the effort can be focused on obtaining highly accurate analytical values for a reduced set of samples (including some samples from the target sites). The patterns observed here stand in opposition to the idea of global models. These results could encourage the expansion of this technique, because very large databases seem not to be needed. Future studies with very different samples will help to confirm the robustness of the patterns observed. The authors acknowledge "Bancaja-UMH" for the financial support of the project "NIRPROS".
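A minimal sketch of the sequential repopulation-and-recalibration loop described above, using scikit-learn's PLSRegression with leave-one-out cross-validation. The two-samples-per-step schedule mirrors the text, but the arrays here are random placeholders and the component count is an assumption.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def recalibrate(X_cal, y_cal, n_components=10):
    """Fit a PLS calibration and report its leave-one-out CV RMSE."""
    pls = PLSRegression(n_components=n_components)
    y_cv = cross_val_predict(pls, X_cal, y_cal, cv=LeaveOneOut())
    rmse_cv = np.sqrt(np.mean((y_cal - y_cv.ravel()) ** 2))
    return pls.fit(X_cal, y_cal), rmse_cv

# Placeholder data: X_lib/y_lib = library calibration subset of spectra
# and NKj values; X_site/y_site = target-site samples.
rng = np.random.default_rng(0)
X_lib, y_lib = rng.normal(size=(75, 200)), rng.normal(size=75)
X_site, y_site = rng.normal(size=(40, 200)), rng.normal(size=40)

# Sequential repopulation: add 2 target-site samples per step, 10 steps.
X_cal, y_cal = X_lib.copy(), y_lib.copy()
for step in range(10):
    new = slice(2 * step, 2 * step + 2)
    X_cal = np.vstack([X_cal, X_site[new]])
    y_cal = np.concatenate([y_cal, y_site[new]])
    model, rmse_cv = recalibrate(X_cal, y_cal)
    print(f"step {step + 1}: n = {len(y_cal)}, LOO-CV RMSE = {rmse_cv:.3f}")
```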
The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations
NASA Astrophysics Data System (ADS)
Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.
2017-09-01
We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV − I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5×10⁸ ≤ M* ≤ 3×10¹¹ M⊙ h⁻² and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.
MaNGA: Target selection and Optimization
NASA Astrophysics Data System (ADS)
Wake, David
2015-01-01
The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1×10⁹ to 1×10¹¹ M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a 'Color-Enhanced' sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.
Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.
Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto
2007-07-01
To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of literature published in 2005. The frequency of reporting sample size calculations and the sample sizes used were extracted from the published literature. A manual search of the five leading clinical journals in ophthalmology with the highest impact factors (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that the sample size was calculated before initiating the study. Another study reported consideration of sample size without a calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies considered sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
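For context, one common a priori calculation for diagnostic accuracy studies (Buderer's approach) sizes the study so that sensitivity is estimated to a given precision and then inflates for the prevalence of the target condition. The sketch below is illustrative only; the sensitivity, precision, and prevalence values are hypothetical.

```python
import math
from scipy.stats import norm

def n_for_sensitivity(sens, precision, prevalence, alpha=0.05):
    """Buderer-style sample size: estimate sensitivity to within
    +/- `precision` with 100*(1-alpha)% confidence, then divide by
    prevalence to get the total number of participants needed."""
    z = norm.ppf(1 - alpha / 2)
    n_diseased = (z ** 2) * sens * (1 - sens) / precision ** 2
    return math.ceil(n_diseased / prevalence)

# Hypothetical: expected sensitivity 0.85, +/-0.05 precision, 50% prevalence
print(n_for_sensitivity(0.85, 0.05, 0.50))  # -> 392 participants
```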
Lai, Keke; Kelley, Ken
2011-06-01
In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. © 2011 American Psychological Association
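The accuracy-in-parameter-estimation logic (implemented for SEM in the R package MBESS) can be sketched generically: choose the smallest n whose expected confidence interval width for the targeted effect does not exceed a desired width ω. The minimal sketch below assumes a parameter whose estimate has standard error sd_est/√n; the inputs are hypothetical, and the assurance step that guards against obtaining a wider-than-desired interval is omitted.

```python
import math
from scipy.stats import norm

def n_for_ci_width(sd_est, omega, alpha=0.05):
    """Smallest n such that the expected 100*(1-alpha)% CI full width
    2 * z_{1-alpha/2} * sd_est / sqrt(n) does not exceed omega."""
    z = norm.ppf(1 - alpha / 2)
    return math.ceil((2 * z * sd_est / omega) ** 2)

# Hypothetical: standardized path coefficient, sd_est = 1, target width 0.2
print(n_for_ci_width(sd_est=1.0, omega=0.2))  # -> 385
```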
Application-Specific Graph Sampling for Frequent Subgraph Mining and Community Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purohit, Sumit; Choudhury, Sutanay; Holder, Lawrence B.
Graph mining is an important data analysis methodology, but struggles as the input graph size increases. The scalability and usability challenges posed by such large graphs make it imperative to sample the input graph and reduce its size. The critical challenge in sampling is to identify the appropriate algorithm to ensure the resulting analysis does not suffer heavily from the data reduction. Predicting the expected performance degradation for a given graph and sampling algorithm is also useful. In this paper, we present different sampling approaches for graph mining applications such as Frequent Subgraph Mining (FSM) and Community Detection (CD). We explore graph metrics such as PageRank, Triangles, and Diversity to sample a graph and conclude that for heterogeneous graphs Triangles and Diversity perform better than degree-based metrics. We also present two new sampling variations for targeted graph mining applications. We present empirical results to show that knowledge of the target application, along with input graph properties, can be used to select the best sampling algorithm. We also conclude that performance degradation is an abrupt, rather than gradual, phenomenon as the sample size decreases. We present empirical results showing that the performance degradation follows a logistic function.
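The abstract does not spell out the sampling procedure, so the sketch below is only a hedged illustration of metric-guided sampling in its spirit: rank nodes by triangle participation with networkx and keep the induced subgraph of the top-ranked nodes. PageRank- or degree-ranked variants follow by swapping the scoring function.

```python
import networkx as nx

def triangle_sample(G, fraction=0.2):
    """Keep the top `fraction` of nodes ranked by triangle participation,
    returning the induced subgraph as the sample."""
    scores = nx.triangles(G)  # triangles through each node
    k = max(1, int(fraction * G.number_of_nodes()))
    keep = sorted(scores, key=scores.get, reverse=True)[:k]
    return G.subgraph(keep).copy()

G = nx.karate_club_graph()
sample = triangle_sample(G, fraction=0.25)
print(sample.number_of_nodes(), sample.number_of_edges())
```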
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Simulated Rasch-model-fitting data for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
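A hedged sketch of the downward adjustment described: the exact RUMM implementation may differ, but the generic algebraic form rescales a chi-square fit statistic, which grows roughly proportionally with N under misfit, from the observed sample size down to a nominal one before computing its p-value. All numbers below are hypothetical.

```python
from scipy.stats import chi2

def adjusted_chisq_p(chisq, df, n_actual, n_nominal):
    """Rescale a chi-square item-fit statistic from the actual sample
    size to a nominal (smaller) one, then return its p-value."""
    chisq_adj = chisq * (n_nominal / n_actual)
    return chi2.sf(chisq_adj, df)

# Hypothetical: item chi-square of 18.4 on 8 df from N = 2500,
# adjusted down to a nominal N = 500 before testing fit.
print(adjusted_chisq_p(18.4, df=8, n_actual=2500, n_nominal=500))
```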
Global preamplification simplifies targeted mRNA quantification
Kroneis, Thomas; Jonasson, Emma; Andersson, Daniel; Dolatabadi, Soheila; Ståhlberg, Anders
2017-01-01
The need to perform gene expression profiling using next generation sequencing and quantitative real-time PCR (qPCR) on small sample sizes and single cells is rapidly expanding. However, to analyse few molecules, preamplification is required. Here, we studied global and target-specific preamplification using 96 optimised qPCR assays. To evaluate the preamplification strategies, we monitored the reactions in real-time using SYBR Green I detection chemistry followed by melting curve analysis. Next, we compared yield and reproducibility of global preamplification to that of target-specific preamplification by qPCR using the same amount of total RNA. Global preamplification generated 9.3-fold lower yield and 1.6-fold lower reproducibility than target-specific preamplification. However, the performance of global preamplification is sufficient for most downstream applications and offers several advantages over target-specific preamplification. To demonstrate the potential of global preamplification we analysed the expression of 15 genes in 60 single cells. In conclusion, we show that global preamplification simplifies targeted gene expression profiling of small sample sizes through a flexible workflow. We outline the pros and cons of global preamplification compared to target-specific preamplification. PMID:28332609
Accounting for twin births in sample size calculations for randomised trials.
Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J
2018-05-04
Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
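A hedged sketch of how the ICC feeds into such a calculation, for the simplified case where every participant is one of a twin pair and twin pairs are randomised intact to the same group, so the cluster design effect 1 + (m − 1)·ICC reduces to 1 + ICC with cluster size m = 2. The published calculator handles mixes of singletons and twins; the effect size and ICC below are hypothetical.

```python
import math
from scipy.stats import norm

def n_twin_trial(alpha, power, effect_size, icc):
    """Per-arm sample size for a two-arm trial of twins only:
    the independent-outcomes n is inflated by the design effect
    1 + (m - 1) * ICC with cluster size m = 2."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n_indep = 2 * (z_a + z_b) ** 2 / effect_size ** 2
    return math.ceil(n_indep * (1 + icc))

# Hypothetical: d = 0.4, ICC = 0.5 between outcomes of twins
print(n_twin_trial(alpha=0.05, power=0.80, effect_size=0.4, icc=0.5))
```

Note that a negative ICC, as observed for some outcomes in the paper, gives a design effect below 1 and so reduces the required sample size.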
Pinto, Colin A; Saripella, Kalyan K; Loka, Nikhil C; Neau, Steven H
2018-04-01
Certain issues with the use of particles of chitosan (Ch) cross-linked with tripolyphosphate (TPP) in sustained release formulations include inefficient drug loading, burst drug release, and incomplete drug release. Acetaminophen was added to Ch:TPP particles to test for advantages of extragranular drug addition over drug addition made during cross-linking. The influences of Ch concentration, Ch:TPP ratio, temperature, ionic strength, and pH were assessed. Design of experiments allowed identification of factors and 2-factor interactions that have significant effects on average particle size and size distribution, yield, zeta potential, and true density of the particles, as well as drug release from the directly compressed tablets. Statistical model equations directed production of a control batch that minimized span, maximized yield, and targeted a t50 of 90 min (sample A); sample B differed by targeting a t50 of 240-300 min to provide sustained release; and sample C differed from sample B by maximizing span. Sample B maximized yield and provided its targeted t50 and the smallest average particle size, with the higher zeta potential and the lower span of samples B and C. Extragranular addition of a drug to Ch:TPP particles achieved 100% drug loading, eliminated burst drug release, and can accomplish complete drug release. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
[A comparison of convenience sampling and purposive sampling].
Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien
2014-06-01
Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, the opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation, not by statistical power analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, Jaejin; Woo, Jong-Hak; Mulchaey, John S.
We perform a comprehensive study of X-ray cavities using a large sample of X-ray targets selected from the Chandra archive. The sample is selected to cover a large dynamic range including galaxy clusters, groups, and individual galaxies. Using β-modeling and unsharp masking techniques, we investigate the presence of X-ray cavities for 133 targets that have sufficient X-ray photons for analysis. We detect 148 X-ray cavities from 69 targets and measure their properties, including cavity size, angle, and distance from the center of the diffuse X-ray gas. We confirm the strong correlation between cavity size and distance from the X-ray center, similar to previous studies. We find that the detection rates of X-ray cavities are similar among galaxy clusters, groups and individual galaxies, suggesting that the formation mechanism of X-ray cavities is independent of environment.
Walters, Stephen J; Bonacho Dos Anjos Henriques-Cadby, Inês; Bortolami, Oscar; Flight, Laura; Hind, Daniel; Jacques, Richard M; Knox, Christopher; Nadin, Ben; Rothwell, Joanne; Surtees, Michael; Julious, Steven A
2017-03-20
Substantial amounts of public funds are invested in health research worldwide. Publicly funded randomised controlled trials (RCTs) often recruit participants at a slower than anticipated rate. Many trials fail to reach their planned sample size within the envisaged trial timescale and trial funding envelope. To review the consent, recruitment and retention rates for single and multicentre randomised controlled trials funded and published by the UK's National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme. HTA reports of individually randomised single or multicentre RCTs published from the start of 2004 to the end of April 2016 were reviewed. Information relating to the trial characteristics, sample size, recruitment and retention was extracted by two independent reviewers. Target sample size and whether it was achieved; recruitment rates (number of participants recruited per centre per month) and retention rates (randomised participants retained and assessed with valid primary outcome data). This review identified 151 individually randomised RCTs from 787 NIHR HTA reports. The final recruitment target sample size was achieved in 56% (85/151) of the RCTs and more than 80% of the final target sample size was achieved for 79% of the RCTs (119/151). The median recruitment rate (participants per centre per month) was found to be 0.92 (IQR 0.43-2.79) and the median retention rate (proportion of participants with valid primary outcome data at follow-up) was estimated at 89% (IQR 79-97%). There is considerable variation in the consent, recruitment and retention rates in publicly funded RCTs. Investigators should bear this in mind at the planning stage of their study and not be overly optimistic about their recruitment projections. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Exact tests using two correlated binomial variables in contemporary cancer clinical trials.
Yu, Jihnhee; Kepner, James L; Iyer, Renuka
2009-12-01
New therapy strategies for the treatment of cancer are rapidly emerging because of recent technology advances in genetics and molecular biology. Although newer targeted therapies can improve survival without measurable changes in tumor size, clinical trial conduct has remained nearly unchanged. When potentially efficacious therapies are tested, current clinical trial design and analysis methods may not be suitable for detecting therapeutic effects. We propose an exact method with respect to testing cytostatic cancer treatment using correlated bivariate binomial random variables to simultaneously assess two primary outcomes. The method is easy to implement. It does not increase the sample size over that of the univariate exact test and in most cases reduces the sample size required. Sample size calculations are provided for selected designs.
The Properties of the Massive Star-forming Galaxies with an Outside-in Assembly Mode
NASA Astrophysics Data System (ADS)
Wang, Enci; Kong, Xu; Wang, Huiyuan; Wang, Lixin; Lin, Lin; Gao, Yulong; Liu, Qing
2017-08-01
Previous findings show that massive (M* > 10¹⁰ M⊙) star-forming (SF) galaxies usually have an “inside-out” stellar mass assembly mode. In this paper, we have for the first time selected a sample of 77 massive SF galaxies with an “outside-in” assembly mode (called the “targeted sample”) from the Mapping Nearby Galaxies at the Apache Point Observatory (MaNGA) survey. For comparison, two control samples are constructed from the MaNGA sample matched in stellar mass: a sample of 154 normal SF galaxies and a sample of 62 quiescent galaxies. In contrast to normal SF galaxies, the targeted galaxies appear to be smoother and more bulge-dominated and have a smaller size and higher concentration, star formation rate, and gas-phase metallicity as a whole. However, they have a larger size and lower concentration than quiescent galaxies. Unlike the normal SF sample, the targeted sample exhibits a slightly positive gradient of the 4000 Å break and a pronounced negative gradient of Hα equivalent width. Furthermore, the median surface mass density profile is between those of the normal SF and quiescent samples, indicating that the gas accretion of quiescent galaxies is not likely to be the main approach for the outside-in assembly mode. Our results suggest that the targeted galaxies are likely in the transitional phase from normal SF galaxies to quiescent galaxies, with rapid ongoing central stellar mass assembly (or bulge growth). We discuss several possible formation mechanisms for the outside-in mass assembly mode.
A planar near-field scanning technique for bistatic radar cross section measurements
NASA Technical Reports Server (NTRS)
Tuhela-Reuning, S.; Walton, E. K.
1990-01-01
A progress report on the development of a bistatic radar cross section (RCS) measurement range is presented. A technique using one parabolic reflector and a planar scanning probe antenna is analyzed. The field pattern in the test zone is computed using a spatial array of signal sources. The technique achieved an illumination pattern with 1 dB amplitude ripple and 15 degree phase ripple over the target zone. The required scan plane size is found to be proportional to the size of the desired test target. Scan plane probe sample spacing can be increased beyond the Nyquist lambda/2 limit, permitting constant probe sample spacing over a range of frequencies.
Hislop, Jenni; Adewuyi, Temitope E; Vale, Luke D; Harrild, Kirsten; Fraser, Cynthia; Gurung, Tara; Altman, Douglas G; Briggs, Andrew H; Fayers, Peter; Ramsay, Craig R; Norrie, John D; Harvey, Ian M; Buckley, Brian; Cook, Jonathan A
2014-05-01
Randomised controlled trials (RCTs) are widely accepted as the preferred study design for evaluating healthcare interventions. When the sample size is determined, a (target) difference is typically specified that the RCT is designed to detect. This provides reassurance that the study will be informative, i.e., should such a difference exist, it is likely to be detected with the required statistical precision. The aim of this review was to identify potential methods for specifying the target difference in an RCT sample size calculation. A comprehensive systematic review of medical and non-medical literature was carried out for methods that could be used to specify the target difference for an RCT sample size calculation. The databases searched were MEDLINE, MEDLINE In-Process, EMBASE, the Cochrane Central Register of Controlled Trials, the Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, the Education Resources Information Center (ERIC), and Scopus (for in-press publications); the search period was from 1966 or the earliest date covered, to between November 2010 and January 2011. Additionally, textbooks addressing the methodology of clinical trials and International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) tripartite guidelines for clinical trials were also consulted. A narrative synthesis of methods was produced. Studies that described a method that could be used for specifying an important and/or realistic difference were included. The search identified 11,485 potentially relevant articles from the databases searched. Of these, 1,434 were selected for full-text assessment, and a further nine were identified from other sources. Fifteen clinical trial textbooks and the ICH tripartite guidelines were also reviewed. In total, 777 studies were included, and within them, seven methods were identified: anchor, distribution, health economic, opinion-seeking, pilot study, review of the evidence base, and standardised effect size. A variety of methods are available that researchers can use for specifying the target difference in an RCT sample size calculation. Appropriate methods may vary depending on the aim (e.g., specifying an important difference versus a realistic difference), context (e.g., research question and availability of data), and underlying framework adopted (e.g., Bayesian versus conventional statistical approach). Guidance on the use of each method is given. No single method provides a perfect solution for all contexts.
Genome-Wide Chromosomal Targets of Oncogenic Transcription Factors
2008-04-01
[Figure-caption fragments recovered from the report: (a) comparison between STAGE and ChIP-chip when the same sample was analyzed by both methods, with a gray line indicating all predicted STAGE targets; numbers of single-hit tags (Y-axis) plotted against the frequencies of those tags in the random (gray bars) and experimental (black bars) tag sets; a window size of 500 bp gave an optimal separation between random and real data.]
DuFour, Mark R.; Mayer, Christine M.; Kocovsky, Patrick; Qian, Song; Warner, David M.; Kraus, Richard T.; Vandergoot, Christopher
2017-01-01
Hydroacoustic sampling of low-density fish in shallow water can lead to low sample sizes of naturally variable target strength (TS) estimates, resulting in both sparse and variable data. Increasing maximum beam compensation (BC) beyond conventional values (i.e., 3 dB beam width) can recover more targets during data analysis; however, data quality decreases near the acoustic beam edges. We identified the optimal balance between data quantity and quality with increasing BC using a standard sphere calibration, and we quantified the effect of BC on fish track variability, size structure, and density estimates of Lake Erie walleye (Sander vitreus). Standard sphere mean TS estimates were consistent with theoretical values (−39.6 dB) up to 18-dB BC, while estimates decreased at greater BC values. Natural sources (i.e., residual and mean TS) dominated total fish track variation, while contributions from measurement related error (i.e., number of single echo detections (SEDs) and BC) were proportionally low. Increasing BC led to more fish encounters and SEDs per fish, while stability in size structure and density were observed at intermediate values (e.g., 18 dB). Detection of medium to large fish (i.e., age-2+ walleye) benefited most from increasing BC, as proportional changes in size structure and density were greatest in these size categories. Therefore, when TS data are sparse and variable, increasing BC to an optimal value (here 18 dB) will maximize the TS data quantity while limiting lower-quality data near the beam edges.
Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing
2017-11-10
The speckle pattern (line by line) sequential extraction (SPSE) metric is proposed based on one-dimensional speckle intensity level-crossing theory. Through sequential extraction of the received speckle information, speckle metrics for estimating the variation of focusing spot size on a remote diffuse target are obtained. Based on simulation, we discuss the SPSE metric's range of application under theoretical conditions and how the aperture size of the observation system affects the metric's performance. The results of the analyses are verified by experiment. This method is applied to the detection of relatively static targets (speckle jitter frequency less than the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance; moreover, the metric can estimate the spot size under some conditions. Therefore, monitoring and feedback of the far-field spot can be implemented in laser focusing system applications and help the system optimize its focusing performance.
Namera, Akira; Saito, Takeshi; Ota, Shigenori; Miyazaki, Shota; Oikawa, Hiroshi; Murata, Kazuhiro; Nagao, Masataka
2017-09-29
Monolithic silica in MonoSpin for solid-phase extraction of drugs from whole blood samples was developed to facilitate high-throughput analysis. Monolithic silica of various pore sizes and octadecyl contents were synthesized, and their effects on recovery rates were evaluated. The silica monolith M18-200 (20 μm through-pore size, 10.4 nm mesopore size, and 17.3% carbon content) achieved the best recovery of the target analytes in whole blood samples. The extraction proceeded with centrifugal force at 1000 rpm for 2 min, and the eluate was directly injected into the liquid chromatography-mass spectrometry system without any tedious steps such as evaporation of extraction solvents. Under the optimized conditions, low detection limits of 0.5-2.0 ng mL⁻¹ and calibration ranges up to 1000 ng mL⁻¹ were obtained. The recoveries of the target drugs in the whole blood were 76-108% with relative standard deviations of less than 14.3%. These results indicate that the developed method based on monolithic silica is convenient, highly efficient, and applicable for detecting drugs in whole blood samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-04
... approved information collection, the List Sampling Frame Surveys. Revision to burden hours will be needed due to changes in the size of the target population, sampling design, and/or questionnaire length... Agriculture, (202) 720-4333. SUPPLEMENTARY INFORMATION: Title: List Sampling Frame Surveys. OMB Control Number...
USDA-ARS?s Scientific Manuscript database
A two-dimensional chromatography method for analyzing anionic targets (specifically phytate) in complex matrices is described. Prior to quantification by anion exchange chromatography, the sample matrix was prepared by size exclusion chromatography, which removed the majority of matrix complexities....
Morphology of meteoroid and space debris craters on LDEF metal targets
NASA Technical Reports Server (NTRS)
Love, S. G.; Brownlee, D. E.; King, N. L.; Hoerz, F.
1994-01-01
We measured the depths, average diameters, and circularity indices of over 600 micrometeoroid and space debris craters on various metal surfaces exposed to space on the Long Duration Exposure Facility (LDEF) satellite, as a test of some of the formalisms used to convert the diameters of craters on space-exposed surfaces into penetration depths for the purpose of calculating impactor sizes or masses. The topics covered include the following: target materials orientation; crater measurements and sample populations; effects of oblique impacts; effects of projectile velocity; effects of crater size; effects of target hardness; effects of target density; and effects of projectile properties.
DEVELOPMENT OF AN RH-DENUDED MIE ACTIVE SAMPLING SYSTEM AND TARGETED AEROSOL CALIBRATION
The MIE pDR 1200 nephelometer provides time-resolved aerosol concentrations during personal and fixed-site sampling. Active (pumped) operation allows defining an upper PM2.5 particle size; however, this dramatically increases the aerosol mass passing through the phot...
Cook, Jonathan A; Hislop, Jennifer; Adewuyi, Temitope E; Harrild, Kirsten; Altman, Douglas G; Ramsay, Craig R; Fraser, Cynthia; Buckley, Brian; Fayers, Peter; Harvey, Ian; Briggs, Andrew H; Norrie, John D; Fergusson, Dean; Ford, Ian; Vale, Luke D
2014-05-01
The randomised controlled trial (RCT) is widely considered to be the gold standard study for comparing the effectiveness of health interventions. Central to the design and validity of an RCT is a calculation of the number of participants needed (the sample size). The value used to determine the sample size can be considered the 'target difference'. From both a scientific and an ethical standpoint, selecting an appropriate target difference is of crucial importance. Determination of the target difference, as opposed to statistical approaches to calculating the sample size, has been greatly neglected; although a variety of approaches have been proposed, the current state of the evidence is unclear. The aim was to provide an overview of the current evidence regarding specifying the target difference in an RCT sample size calculation. The specific objectives were to conduct a systematic review of methods for specifying a target difference; to evaluate current practice by surveying triallists; to develop guidance on specifying the target difference in an RCT; and to identify future research needs. The biomedical and social science databases searched were MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, EMBASE, Cochrane Central Register of Controlled Trials (CENTRAL), Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, Education Resources Information Center (ERIC) and Scopus for in-press publications. All were searched from 1966 or the earliest date of the database coverage, and searches were undertaken between November 2010 and January 2011. There were three interlinked components: (1) systematic review of methods for specifying a target difference for RCTs - a comprehensive search strategy involving an electronic literature search of biomedical and some non-biomedical databases and clinical trials textbooks was carried out; (2) identification of current trial practice using two surveys of triallists - members of the Society for Clinical Trials (SCT) were invited to complete an online survey and respondents were asked about their awareness and use of, and willingness to recommend, methods; one individual per triallist group [UK Clinical Research Collaboration (UKCRC)-registered Clinical Trials Units (CTUs), Medical Research Council (MRC) UK Hubs for Trials Methodology Research and National Institute for Health Research (NIHR) UK Research Design Services (RDS)] was invited to complete a survey; (3) production of a structured guidance document to aid the design of future trials - the draft guidance was developed utilising the results of the systematic review and surveys by the project steering and advisory groups. Methodological review incorporating electronic searches, review of books and guidelines, two surveys of experts (membership of an international society and UK- and Ireland-based triallists) and development of guidance. The two surveys were sent out to the membership of the SCT and UK- and Ireland-based triallists. The review focused on methods for specifying the target difference in an RCT. It was not restricted to any type of intervention or condition. Methods for specifying the target difference for an RCT were considered. The search identified 11,485 potentially relevant studies. In total, 1434 were selected for full-text assessment and 777 were included in the review.
Seven methods to specify the target difference for an RCT were identified: anchor, distribution, health economic, opinion-seeking, pilot study, review of evidence base (RoEB) and standardised effect size (SES), each having important variations in implementation. A total of 216 of the included studies used more than one method. A total of 180 (15%) responses to the SCT survey were received, representing 13 countries. Awareness of methods ranged from 38% (n = 69) for the health economic method to 90% (n = 162) for the pilot study. Of the 61 surveys sent out to UK triallist groups, 34 (56%) responses were received. Awareness ranged from 97% (n = 33) for the RoEB and pilot study methods to only 41% (n = 14) for the distribution method. Based on the most recent trial, all bar three groups (91%, n = 30) used a formal method. Guidance was developed on the use of each method and the reporting of the sample size calculation in a trial protocol and results paper. There is a clear need for greater use of formal methods to determine the target difference and better reporting of its specification. Raising the standard of RCT sample size calculations and the corresponding reporting of them would aid health professionals, patients, researchers and funders in judging the strength of the evidence and ensuring better use of scarce resources. The Medical Research Council UK and the National Institute for Health Research Joint Methodology Research programme.
Functional modular architecture underlying attentional control in aging.
Monge, Zachary A; Geib, Benjamin R; Siciliano, Rachel E; Packard, Lauren E; Tallman, Catherine W; Madden, David J
2017-07-15
Previous research suggests that age-related differences in attention reflect the interaction of top-down and bottom-up processes, but the cognitive and neural mechanisms underlying this interaction remain an active area of research. Here, within a sample of community-dwelling adults 19-78 years of age, we used diffusion reaction time (RT) modeling and multivariate functional connectivity to investigate the behavioral components and whole-brain functional networks, respectively, underlying bottom-up and top-down attentional processes during conjunction visual search. During functional MRI scanning, participants completed a conjunction visual search task in which each display contained one item that was larger than the other items (i.e., a size singleton) but was not informative regarding target identity. This design allowed us to examine in the RT components and functional network measures the influence of (a) additional bottom-up guidance when the target served as the size singleton, relative to when the distractor served as the size singleton (i.e., size singleton effect) and (b) top-down processes during target detection (i.e., target detection effect; target present vs. absent trials). We found that the size singleton effect (i.e., increased bottom-up guidance) was associated with RT components related to decision and nondecision processes, but these effects did not vary with age. Also, a modularity analysis revealed that frontoparietal module connectivity was important for both the size singleton and target detection effects, but this module became central to the networks through different mechanisms for each effect. Lastly, participants 42 years of age and older, in service of the target detection effect, relied more on between-frontoparietal module connections. Our results further elucidate mechanisms through which frontoparietal regions support attentional control and how these mechanisms vary in relation to adult age. Copyright © 2017 Elsevier Inc. All rights reserved.
The structure of Turkish trait-descriptive adjectives.
Somer, O; Goldberg, L R
1999-03-01
This description of the Turkish lexical project reports some initial findings on the structure of Turkish personality-related variables. In addition, it provides evidence on the effects of target evaluative homogeneity vs. heterogeneity (e.g., samples of well-liked target individuals vs. samples of both liked and disliked targets) on the resulting factor structures, and thus it provides a first test of the conclusions reached by D. Peabody and L. R. Goldberg (1989) using English trait terms. In 2 separate studies, and in 2 types of data sets, clear versions of the Big Five factor structure were found. And both studies replicated and extended the findings of Peabody and Goldberg; virtually orthogonal factors of relatively equal size were found in the homogeneous samples, and a more highly correlated set of factors with relatively large Agreeableness and Conscientiousness dimensions was found in the heterogeneous samples.
Statistical Inference for Data Adaptive Target Parameters.
Hubbard, Alan E; Kherad-Pajouh, Sara; van der Laan, Mark J
2016-05-01
Suppose one observes n i.i.d. copies of a random variable with a probability distribution that is known to be an element of a particular statistical model. In order to define our statistical target, we partition the sample into V equal-size sub-samples, and use this partitioning to define V splits into an estimation sample (one of the V sub-samples) and a corresponding complementary parameter-generating sample. For each of the V parameter-generating samples, we apply an algorithm that maps the sample to a statistical target parameter. We define our sample-split data adaptive statistical target parameter as the average of these V sample-specific target parameters. We present an estimator (and corresponding central limit theorem) of this type of data adaptive target parameter. This general methodology for generating data adaptive target parameters is demonstrated with a number of practical examples that highlight new opportunities for statistical learning from data. This new framework provides a rigorous statistical methodology for both exploratory and confirmatory analysis within the same data. Given that more research is becoming "data-driven", the theory developed within this paper provides a new impetus for a greater involvement of statistical inference in problems that are being increasingly addressed by clever, yet ad hoc, pattern-finding methods. To suggest such potential, and to verify the predictions of the theory, extensive simulation studies, along with a data analysis based on adaptively determined intervention rules, are shown and give insight into how to structure such an approach. The results show that the data adaptive target parameter approach provides a general framework and resulting methodology for data-driven science.
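A minimal sketch of the sample-splitting scheme described above: V folds, an arbitrary algorithm that maps each parameter-generating sample (the other V − 1 folds) to a target parameter, and estimation of that parameter on the held-out estimation sample. The function names and the toy "largest-mean column" algorithm are illustrative assumptions, not from the paper.

```python
import numpy as np
from sklearn.model_selection import KFold

def data_adaptive_estimate(data, learn_parameter, estimate_parameter, V=5):
    """Average of V fold-specific estimates: each fold's target parameter
    is *defined* on the parameter-generating sample (V-1 folds) and
    *estimated* on the held-out estimation sample (the remaining fold)."""
    estimates = []
    for gen_idx, est_idx in KFold(n_splits=V).split(data):
        target = learn_parameter(data[gen_idx])  # define the parameter
        estimates.append(estimate_parameter(data[est_idx], target))
    return float(np.mean(estimates))

# Toy example: the algorithm picks the column with the largest mean on the
# parameter-generating sample; the target parameter is that column's mean.
rng = np.random.default_rng(1)
X = rng.normal(loc=[0.0, 0.3, 0.1], size=(500, 3))
psi = data_adaptive_estimate(
    X,
    learn_parameter=lambda d: int(np.argmax(d.mean(axis=0))),
    estimate_parameter=lambda d, j: float(d[:, j].mean()),
)
print(psi)
```

Because the parameter is defined and estimated on disjoint parts of the data, standard central limit theory applies to the averaged estimate, which is the paper's key point.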
Statistical methods for identifying and bounding a UXO target area or minefield
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinstry, Craig A.; Pulsipher, Brent A.; Gilbert, Richard O.
2003-09-18
The sampling unit for minefield or UXO area characterization is typically represented by a geographical block or transect swath that lends itself to characterization by geophysical instrumentation such as mobile sensor arrays. New spatially based statistical survey methods and tools, more appropriate for these unique sampling units, have been developed and implemented at PNNL (Visual Sample Plan software, ver. 2.0) with support from the US Department of Defense. Though originally developed to support UXO detection and removal efforts, these tools may also be used in current form or adapted to support demining efforts and aid in the development of new sensors and detection technologies by explicitly incorporating both sampling and detection error in performance assessments. These tools may be used to (1) determine transect designs for detecting and bounding target areas of critical size, shape, and density of detectable items of interest with a specified confidence probability, (2) evaluate the probability that target areas of a specified size, shape and density have not been missed by a systematic or meandering transect survey, and (3) support post-removal verification by calculating the number of transects required to achieve a specified confidence probability that no UXO or mines have been missed.
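As a hedged geometric illustration of item (1) above (this is not the VSP algorithm itself, which also folds in instrument detection error and target shape): for parallel transects of swath width w spaced S apart, a randomly placed circular target area of diameter D is traversed with probability roughly min(1, (D + w)/S), so the spacing needed for a given traversal confidence can be solved directly.

```python
def p_traverse(target_diameter, swath_width, spacing):
    """Probability that parallel transects with centre-to-centre
    `spacing` and sensor swath `swath_width` traverse a randomly
    placed circular target of `target_diameter` (geometric sketch;
    traversal is necessary but not sufficient for detection)."""
    return min(1.0, (target_diameter + swath_width) / spacing)

def spacing_for_confidence(target_diameter, swath_width, confidence):
    """Largest transect spacing that still traverses the target
    with the required confidence probability."""
    return (target_diameter + swath_width) / confidence

# Hypothetical: 100 m target area, 4 m sensor swath, 95% confidence
print(spacing_for_confidence(100, 4, 0.95))  # -> ~109.5 m
```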
Hislop, Jenni; Adewuyi, Temitope E.; Vale, Luke D.; Harrild, Kirsten; Fraser, Cynthia; Gurung, Tara; Altman, Douglas G.; Briggs, Andrew H.; Fayers, Peter; Ramsay, Craig R.; Norrie, John D.; Harvey, Ian M.; Buckley, Brian; Cook, Jonathan A.
2014-01-01
Background Randomised controlled trials (RCTs) are widely accepted as the preferred study design for evaluating healthcare interventions. When the sample size is determined, a (target) difference is typically specified that the RCT is designed to detect. This provides reassurance that the study will be informative, i.e., should such a difference exist, it is likely to be detected with the required statistical precision. The aim of this review was to identify potential methods for specifying the target difference in an RCT sample size calculation. Methods and Findings A comprehensive systematic review of medical and non-medical literature was carried out for methods that could be used to specify the target difference for an RCT sample size calculation. The databases searched were MEDLINE, MEDLINE In-Process, EMBASE, the Cochrane Central Register of Controlled Trials, the Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, the Education Resources Information Center (ERIC), and Scopus (for in-press publications); the search period was from 1966 or the earliest date covered, to between November 2010 and January 2011. Additionally, textbooks addressing the methodology of clinical trials and International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) tripartite guidelines for clinical trials were also consulted. A narrative synthesis of methods was produced. Studies that described a method that could be used for specifying an important and/or realistic difference were included. The search identified 11,485 potentially relevant articles from the databases searched. Of these, 1,434 were selected for full-text assessment, and a further nine were identified from other sources. Fifteen clinical trial textbooks and the ICH tripartite guidelines were also reviewed. In total, 777 studies were included, and within them, seven methods were identified—anchor, distribution, health economic, opinion-seeking, pilot study, review of the evidence base, and standardised effect size. Conclusions A variety of methods are available that researchers can use for specifying the target difference in an RCT sample size calculation. Appropriate methods may vary depending on the aim (e.g., specifying an important difference versus a realistic difference), context (e.g., research question and availability of data), and underlying framework adopted (e.g., Bayesian versus conventional statistical approach). Guidance on the use of each method is given. No single method provides a perfect solution for all contexts. PMID:24824338
Size separation of analytes using monomeric surfactants
Yeung, Edward S.; Wei, Wei
2005-04-12
A sieving medium for use in the separation of analytes in a sample containing at least one such analyte comprises a monomeric non-ionic surfactant of the general formula B-A, wherein A is a hydrophilic moiety and B is a hydrophobic moiety, present in a solvent at a concentration forming a self-assembled micelle configuration under selected conditions and having an aggregation number providing an equivalent weight capable of effecting the size separation of the sample solution so as to resolve a target analyte(s) in a solution containing the same, the size separation taking place in a chromatography or electrophoresis separation system.
Lot quality assurance sampling (LQAS) for monitoring a leprosy elimination program.
Gupte, M D; Narasimhamurthy, B
1999-06-01
In a statistical sense, prevalences of leprosy in different geographical areas can be called very low or rare. Conventional survey methods to monitor leprosy control programs, therefore, need large sample sizes, are expensive, and are time-consuming. Further, with the lowering of prevalence to the near-desired target level, 1 case per 10,000 population at national or subnational levels, the program administrator's concern will be shifted to smaller areas, e.g., districts, for assessment and, if needed, for necessary interventions. In this paper, Lot Quality Assurance Sampling (LQAS), a quality control tool in industry, is proposed to identify districts/regions having a prevalence of leprosy at or above a certain target level, e.g., 1 in 10,000. This technique can also be considered for identifying districts/regions at or below the target level of 1 per 10,000, i.e., areas where the elimination level is attained. For simulating various situations and strategies, a hypothetical computerized population of 10 million persons was created. This population mimics the actual population in terms of the empirical information on rural/urban distributions and the distribution of households by size for the state of Tamil Nadu, India. Various levels with respect to leprosy prevalence are created using this population. The distribution of the number of cases in the population was expected to follow the Poisson process, and this was also confirmed by examination. Sample sizes and corresponding critical values were computed using Poisson approximation. Initially, villages/towns are selected from the population and from each selected village/town households are selected using systematic sampling. Households instead of individuals are used as sampling units. This sampling procedure was simulated 1000 times in the computer from the base population. The results in four different prevalence situations meet the required limits of Type I error of 5% and 90% Power. It is concluded that after validation under field conditions, this method can be considered for a rapid assessment of the leprosy situation.
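A sketch of the kind of Poisson-approximation design search described above: find the smallest sample size n and critical value d such that a lot at the 1-per-10,000 target level is wrongly flagged at most 5% of the time, while a lot at an elevated prevalence is detected with 90% power. The elevated prevalence and the step size of the search are illustrative assumptions, not the paper's actual design values:

```python
from scipy.stats import poisson

p0, p1 = 1 / 10_000, 4 / 10_000   # target prevalence and assumed elevated prevalence
alpha, power = 0.05, 0.90

def lqas_design(p0, p1, alpha, power, n_max=200_000, step=1_000):
    """Smallest (n, d): accept the lot if the observed case count is <= d."""
    for n in range(step, n_max + 1, step):
        d = poisson.ppf(1 - alpha, n * p0)        # largest count still consistent with p0
        if poisson.cdf(d, n * p1) <= 1 - power:   # P(accept lot | true prevalence p1) small enough
            return n, int(d)
    return None

print(lqas_design(p0, p1, alpha, power))
```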
McCaffrey, Daniel; Perlman, Judith; Marshall, Grant N.; Hambarsoomians, Katrin
2010-01-01
We consider situations in which externally observable characteristics allow experts to quickly categorize individual households as likely or unlikely to contain a member of a rare target population. This classification can form the basis of disproportionate stratified sampling such that households classified as “unlikely” are sampled at a lower rate than those classified as “likely,” thereby reducing screening costs. Design weights account for this approach and allow unbiased estimates for the target population. We demonstrate that with sensitivity and specificity of expert classification at least 70%, and ideally at least 80%, our approach can economically increase the effective sample size for a rare population. We develop heuristics for implementing this approach and demonstrate that sensitivity drives design effects and screening costs, whereas specificity only drives the latter. We demonstrate that the potential gains from this approach increase as the target population becomes rarer. We further show that for most applications, unlikely strata should be sampled at 1/6 to 1/2 the rate of likely strata. This approach was applied to a survey of Cambodian immigrants in which the 82% of households rated “unlikely” were sampled at 1/4 the rate of “likely” households, reducing screening from 9.4 to 4.0 approaches per complete. Sensitivity and specificity were 86% and 91%, respectively. Weighted estimation had a design effect of 1.26, so screening costs per effective sample size were reduced by 47%. We also note that in this instance, expert classification appeared to be uncorrelated with survey outcomes of interest among eligibles. PMID:20936050
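To make the screening-cost arithmetic concrete, here is a sketch using the classification figures quoted above (86% sensitivity, 91% specificity, "unlikely" households sampled at 1/4 the rate); the assumed 5% prevalence and the Kish approximation for the weighting design effect are my additions, not values from the paper:

```python
# Effect of expert classification on screening cost, under assumed inputs.
prev = 0.05           # assumed rarity of the target population
sens, spec = 0.86, 0.91
f_unlikely = 0.25     # sample "unlikely" households at 1/4 the rate of "likely" ones

# Composition of the screened sample (likely stratum fully sampled, unlikely thinned)
likely_elig, likely_inelig = prev * sens, (1 - prev) * (1 - spec)
unlik_elig, unlik_inelig = prev * (1 - sens) * f_unlikely, (1 - prev) * spec * f_unlikely

screened = likely_elig + likely_inelig + unlik_elig + unlik_inelig
eligible = likely_elig + unlik_elig
print(f"screens per eligible household: {screened / eligible:.1f}")

# Design effect from unequal weights (1 for likely, 1/f for unlikely), Kish approximation
w1, w2, n1, n2 = 1.0, 1 / f_unlikely, likely_elig, unlik_elig
deff = (n1 + n2) * (n1 * w1**2 + n2 * w2**2) / (n1 * w1 + n2 * w2) ** 2
print(f"approximate design effect: {deff:.2f}")   # ~1.27 with these inputs
```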
Andreev, Victor P; Gillespie, Brenda W; Helfand, Brian T; Merion, Robert M
2016-01-01
Unsupervised classification methods are gaining acceptance in omics studies of complex common diseases, which are often vaguely defined and are likely collections of disease subtypes. Unsupervised classification based on the molecular signatures identified in omics studies has the potential to reflect the molecular mechanisms of the subtypes of the disease and to lead to more targeted and successful interventions for the identified subtypes. Multiple classification algorithms exist, but none is ideal for all types of data. Importantly, there are no established methods to estimate sample size in unsupervised classification (unlike power analysis in hypothesis testing). Therefore, we developed a simulation approach allowing comparison of misclassification errors and estimation of the required sample size for a given effect size, number, and correlation matrix of the differentially abundant proteins in targeted proteomics studies. All the experiments were performed in silico. The simulated data imitated those expected from the study of the plasma of patients with lower urinary tract dysfunction with the aptamer proteomics assay Somascan (SomaLogic Inc, Boulder, CO), which targeted 1129 proteins, including 330 involved in inflammation, 180 in stress response, 80 in aging, etc. Three popular clustering methods (hierarchical, k-means, and k-medoids) were compared. K-means clustering performed much better for the simulated data than the other two methods and enabled classification with misclassification error below 5% in the simulated cohort of 100 patients based on the molecular signatures of 40 differentially abundant proteins (effect size 1.5) from among the 1129-protein panel. PMID:27524871
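A minimal sketch of this kind of in silico power analysis for unsupervised classification, assuming two equally sized subtypes, 1129 measured proteins of which 40 differ by 1.5 SD, and scikit-learn's k-means in place of whatever implementation the authors used:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

def misclassification(n_patients, n_proteins=1129, n_informative=40, effect=1.5):
    """Simulate two equal subtypes differing in `n_informative` proteins by
    `effect` SDs, cluster with k-means, and return the misclassification rate."""
    labels = np.repeat([0, 1], n_patients // 2)
    X = rng.normal(size=(n_patients, n_proteins))
    X[labels == 1, :n_informative] += effect
    assigned = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    err = np.mean(assigned != labels)
    return min(err, 1 - err)             # cluster labels are arbitrary

for n in (40, 60, 100):
    print(f"n={n}: misclassification ~ {misclassification(n):.2%}")
```

Repeating this over a grid of effect sizes and sample sizes (and over correlated designs) yields the required-sample-size curves the abstract describes.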
Environmental DNA particle size distribution from Brook Trout (Salvelinus fontinalis)
Taylor M. Wilcox; Kevin S. McKelvey; Michael K. Young; Winsor H. Lowe; Michael K. Schwartz
2015-01-01
Environmental DNA (eDNA) sampling has become a widespread approach for detecting aquatic animals with high potential for improving conservation biology. However, little research has been done to determine the size of particles targeted by eDNA surveys. In this study, we conduct particle distribution analysis of eDNA from a captive Brook Trout (Salvelinus fontinalis) in...
Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.
Gajewski, Byron J; Mayo, Matthew S
2006-08-15
A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information. For example, consider information from two (or more) clinicians: the first clinician is pessimistic about the drug and the second clinician is optimistic. We tabulate the results for sample size design using the fact that the simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
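Because a mixture of Betas is conjugate for the Beta-Binomial model, the posterior under a two-clinician mixture prior is again a Beta mixture with updated component weights. A sketch (prior parameters, data, and target are illustrative, not the paper's tabulated designs):

```python
import numpy as np
from scipy.special import betaln, gammaln
from scipy.stats import beta

# Two clinicians' informative priors (illustrative parameters)
priors = [(0.5, 2.0, 8.0),   # pessimistic: prior mean 0.2
          (0.5, 8.0, 2.0)]   # optimistic:  prior mean 0.8
x, n, target = 12, 30, 0.30  # observed responses, sample size, prespecified target

def log_marginal(x, n, a, b):
    # Beta-Binomial evidence: C(n,x) * B(a+x, b+n-x) / B(a,b)
    return (gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)
            + betaln(a + x, b + n - x) - betaln(a, b))

logm = np.array([np.log(w) + log_marginal(x, n, a, b) for w, a, b in priors])
post_w = np.exp(logm - logm.max()); post_w /= post_w.sum()

# Posterior P(true response rate > target): mixture of updated Beta tails
p_exceed = sum(w * beta.sf(target, a + x, b + n - x)
               for w, (_, a, b) in zip(post_w, priors))
print(f"posterior P(response rate > {target}): {p_exceed:.3f}")
```

A sample size design then searches for the smallest n whose anticipated data make this posterior probability exceed the design threshold.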
Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
Mütze, Tobias; Friede, Tim
2017-10-15
In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
Small target pre-detection with an attention mechanism
NASA Astrophysics Data System (ADS)
Wang, Yuehuan; Zhang, Tianxu; Wang, Guoyou
2002-04-01
We introduce the concept of pre-detection based on an attention mechanism to improve the efficiency of small-target detection by limiting the image region of detection. According to the characteristics of small-target detection, local contrast is taken as the only feature in pre-detection, and a nonlinear sampling model is adopted to make the pre-detection adaptive to small targets with different area sizes. To simplify the pre-detection itself and decrease the false-alarm probability, neighboring nodes in the sampling grid are used to generate a saliency map, and a short-term memory is adopted to accelerate the 'pop-out' of targets. The proposed approach is computationally simple. In addition, even in a cluttered background, attention can be directed to targets within a satisfyingly small number of iterations, which ensures that detection efficiency will not be degraded by false alarms. Experimental results are presented to demonstrate the applicability of the approach.
Fujikawa, Hiroshi
2017-01-01
Microbial concentration in samples of a food product lot has generally been assumed to follow the log-normal distribution in food sampling, but this distribution cannot accommodate a concentration of zero. In the present study, first, a probabilistic study with the most probable number (MPN) technique was done for a target microbe present at a low (or zero) concentration in food products. Namely, based on the number of target pathogen-positive samples among the total samples of a product found by a qualitative microbiological examination, the concentration of the pathogen in the product was estimated by means of the MPN technique. The effects of the sample size and the total sample number of a product were then examined. Second, operating characteristic (OC) curves for the concentration of a target microbe in a product lot were generated on the assumption that the concentration of a target microbe could be expressed with the Poisson distribution. OC curves for Salmonella and Cronobacter sakazakii in powdered formulae for infants and young children were successfully generated. The present study suggested that the MPN technique and the Poisson distribution would be useful for qualitative microbiological test data analysis for a target microbe whose concentration in a lot is expected to be low.
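For a single sample size (one "dilution"), the Poisson assumption gives the MPN estimate in closed form: if each m-gram sample is positive with probability 1 − exp(−λm), then λ̂ = −ln(fraction negative)/m. A sketch with invented numbers:

```python
import math

def mpn_single_dilution(positives, total, sample_size_g):
    """MPN per gram from one sample size, assuming Poisson-distributed cells.
    P(negative sample) = exp(-lambda * m)  =>  lambda = -ln(neg/total) / m
    """
    if positives == total:
        raise ValueError("all samples positive: MPN is unbounded at this sample size")
    return -math.log((total - positives) / total) / sample_size_g

# e.g. 4 positives among 20 samples of 25 g each (illustrative numbers)
print(f"{mpn_single_dilution(4, 20, 25.0):.5f} cells/g")
```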
High-resolution Antibody Array Analysis of Childhood Acute Leukemia Cells
Kanderova, Veronika; Kuzilkova, Daniela; Stuchly, Jan; Vaskova, Martina; Brdicka, Tomas; Fiser, Karel; Hrusak, Ondrej; Lund-Johansen, Fridtjof
2016-01-01
Acute leukemia is a disease pathologically manifested at both genomic and proteomic levels. Molecular genetic technologies are currently widely used in clinical research. In contrast, sensitive and high-throughput proteomic techniques for performing protein analyses in patient samples are still lacking. Here, we used a technology based on size exclusion chromatography followed by immunoprecipitation of target proteins with an antibody bead array (Size Exclusion Chromatography-Microsphere-based Affinity Proteomics, SEC-MAP) to detect hundreds of proteins from a single sample. In addition, we developed semi-automatic bioinformatics tools to adapt this technology for high-content proteomic screening of pediatric acute leukemia patients. To confirm the utility of SEC-MAP in leukemia immunophenotyping, we tested 31 leukemia diagnostic markers in parallel by SEC-MAP and flow cytometry. We identified 28 antibodies suitable for both techniques. Eighteen of them provided excellent quantitative correlation between SEC-MAP and flow cytometry (p < 0.05). Next, SEC-MAP was applied to examine 57 diagnostic samples from patients with acute leukemia. In this assay, we used 632 different antibodies and detected 501 targets. Of those, 47 targets were differentially expressed between at least two of the three acute leukemia subgroups. The CD markers correlated with immunophenotypic categories as expected. Among non-CD markers, we found DBN1, PAX5, and PTK2 overexpressed in B-cell precursor acute lymphoblastic leukemias; LAT, SH2D1A, and STAT5A overexpressed in T-cell acute lymphoblastic leukemias; and HCK, GLUD1, and SYK overexpressed in acute myeloid leukemias. In addition, OPAL1 overexpression corresponded to ETV6-RUNX1 chromosomal translocation. In summary, we demonstrated that SEC-MAP technology is a powerful tool for detecting hundreds of proteins in clinical samples obtained from pediatric acute leukemia patients. It provides information about protein size and reveals differences in protein expression between particular leukemia subgroups. Forty-seven of the SEC-MAP-identified targets were validated by other conventional methods in this study. PMID:26785729
Characterization studies of prototype ISOL targets for the RIA
NASA Astrophysics Data System (ADS)
Greene, John P.; Burtseva, Tatiana; Neubauer, Janelle; Nolen, Jerry A.; Villari, Antonio C. C.; Gomes, Itacil C.
2005-12-01
Targets employing refractory compounds are being developed for the rare isotope accelerator (RIA) facility to produce ion species far from stability. With the 100 kW beams proposed for the production targets, dissipation of heat becomes a challenging issue. In our two-step target design, neutrons are generated in a refractory primary target, inducing fission in the surrounding uranium carbide. The interplay of density, grain size, thermal conductivity, and diffusion properties of the UC2 needs to be well understood before fabrication. Thin samples of uranium carbide were prepared for thermal conductivity measurements using an electron beam to heat the sample and an optical pyrometer to observe the thermal radiation. Release efficiencies and independent thermal analyses of these samples are being undertaken at Oak Ridge National Laboratory (ORNL). An alternate target concept for RIA, the tilted-slab approach, promises to be simple, with fast ion release, and capable of withstanding high beam intensities while providing considerable yields via spallation. A proposed small business innovative research (SBIR) project will design a prototype tilted target, exploring the materials needed for fabrication and testing at an irradiation facility to address issues of heat transfer and stresses within the target.
Visual context processing deficits in schizophrenia: effects of deafness and disorganization.
Horton, Heather K; Silverstein, Steven M
2011-07-01
Visual illusions allow for strong tests of perceptual functioning. Perceptual impairments can produce superior task performance on certain tasks (i.e., more veridical perception), thereby avoiding generalized deficit confounds while tapping mechanisms that are largely outside of conscious control. Using a task based on the Ebbinghaus illusion, a perceptual phenomenon where the perceived size of a central target object is affected by the size of surrounding inducers, we tested hypotheses related to visual integration in deaf (n = 31) and hearing (n = 34) patients with schizophrenia. In past studies, psychiatrically healthy samples displayed increased visual integration relative to schizophrenia samples and thus were less able to correctly judge target sizes. Deafness, and especially the use of sign language, leads to heightened sensitivity to peripheral visual cues and increased sensitivity to visual context. Therefore, relative to hearing subjects, deaf subjects were expected to display increased context sensitivity (i.e., a more normal illusion effect as evidenced by a decreased ability to correctly judge central target sizes). Confirming the hypothesis, deaf signers were significantly more sensitive to the illusion than nonsigning hearing patients. Moreover, an earlier age of sign language acquisition, higher levels of linguistic ability, and shorter illness duration were significantly related to increased context sensitivity. As predicted, disorganization was associated with reduced context sensitivity for all subjects. The primary implications of these data are that perceptual organization impairment in schizophrenia is plastic and that it is related to a broader failure in coordinating cognitive activity.
Whale sharks target dense prey patches of sergestid shrimp off Tanzania
Rohner, Christoph A.; Armstrong, Amelia J.; Pierce, Simon J.; Prebble, Clare E. M.; Cagua, E. Fernando; Cochran, Jesse E. M.; Berumen, Michael L.; Richardson, Anthony J.
2015-01-01
Large planktivores require high-density prey patches to make feeding energetically viable. This is a major challenge for species living in tropical and subtropical seas, such as whale sharks Rhincodon typus. Here, we characterize zooplankton biomass, size structure, and taxonomic composition from whale shark feeding events and background samples at Mafia Island, Tanzania. The majority of whale sharks were feeding (73%, 380 of 524 observations), with the most common behaviour being active surface feeding (87%). We used 20 samples collected from immediately adjacent to feeding sharks and an additional 202 background samples for comparison to show that plankton biomass was ∼10 times higher in patches where whale sharks were feeding (25 vs. 2.6 mg m−3). Taxonomic analyses of samples showed that the large sergestid Lucifer hanseni (∼10 mm) dominated while sharks were feeding, accounting for ∼50% of identified items, while copepods (<2 mm) dominated background samples. The size structure was skewed towards larger animals representative of L. hanseni in feeding samples. Thus, whale sharks at Mafia Island target patches of dense, large zooplankton dominated by sergestids. Large planktivores, such as whale sharks, which generally inhabit warm oligotrophic waters, aggregate in areas where they can feed on dense prey to obtain sufficient energy. PMID:25814777
Biostatistics Series Module 5: Determining Sample Size
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample, and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false-positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist and is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false-negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of the power of the study, or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. A smaller α or larger power will increase the sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles have long been known, sample size determination has historically been difficult because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many programs can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample sizes and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
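The normal-approximation formula behind many of these software routines, for a two-group comparison of means, is n per group = 2(z₁₋α/₂ + z₁₋β)²σ²/δ². A minimal sketch (the numbers in the example are illustrative):

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Two-sided, two-sample comparison of means (normal approximation).
    delta: smallest clinically important difference; sigma: outcome SD."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sigma / delta) ** 2)

# e.g. detect a difference of 5 units with SD 12 at 80% power and alpha 0.05
print(n_per_group(delta=5, sigma=12))   # ~91 per group
```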
Reduction of Racial Disparities in Prostate Cancer
2005-12-01
... erectile dysfunction, and female sexual dysfunction). Wherever possible, the questions and scales employed on BACH were selected from published ... Methods. A racially and ethnically diverse community-based survey of adults aged 30-79 years in Boston, Massachusetts. The BACH survey has ... recruited adults in three racial/ethnic groups: Latino, African American, and White, using a stratified cluster sample. The target sample size is equally
NMR/MRI with hyperpolarized gas and high Tc SQUID
Schlenga, Klaus; de Souza, Ricardo E.; Wong-Foy, Annjoe; Clarke, John; Pines, Alexander
2000-01-01
A method and apparatus for the detection of nuclear magnetic resonance (NMR) signals and production of magnetic resonance imaging (MRI) from samples combines the use of hyperpolarized inert gases to enhance the NMR signals from target nuclei in a sample and a high critical temperature (Tc) superconducting quantum interference device (SQUID) to detect the NMR signals. The system operates in static magnetic fields of 3 mT or less (down to 0.1 mT), and at temperatures from liquid nitrogen (77K) to room temperature. Sample size is limited only by the size of the magnetic field coils and not by the detector. The detector is a high Tc SQUID magnetometer designed so that the SQUID detector can be very close to the sample, which can be at room temperature.
78 FR 17921 - Notice of Intent To Seek Reinstatement of an Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-25
... may be needed due to changes in the size of the target population, sampling design, and/or questionnaire length. DATES: Comments on this notice must be received by May 24, 2013 to be assured of...
Sepúlveda, Nuno; Drakeley, Chris
2015-04-03
In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and the seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference for parasite rates rather than for these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation, where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to possible problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, sample sizes increase relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out over varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
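A sketch of the first calculator's core step, assuming the reversible catalytic model commonly used in this literature, SP(a) = λ/(λ+ρ)·(1 − e^(−(λ+ρ)a)) with λ the SCR and ρ the SRR: each bound of an SP confidence interval is converted to an SCR by numerical inversion. The age distribution and SRR below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import brentq

def seroprev(scr, srr, ages):
    """Reversible catalytic model: expected SP averaged over an age distribution."""
    k = scr + srr
    return np.mean(scr / k * (1 - np.exp(-k * np.asarray(ages))))

def scr_from_sp(sp, srr, ages, hi=5.0):
    """Invert the model numerically for the SCR matching an observed SP."""
    return brentq(lambda lam: seroprev(lam, srr, ages) - sp, 1e-8, hi)

ages = np.random.default_rng(1).uniform(1, 60, size=1000)  # illustrative survey ages
srr = 0.01                                                 # assumed known SRR
for sp in (0.20, 0.30, 0.40):   # e.g. an SP point estimate and its CI bounds
    print(f"SP={sp:.2f} -> SCR={scr_from_sp(sp, srr, ages):.4f}")
```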
Predicting Ideological Prejudice
Brandt, Mark J.
2017-01-01
A major shortcoming of current models of ideological prejudice is that although they can anticipate the direction of the association between participants’ ideology and their prejudice against a range of target groups, they cannot predict the size of this association. I developed and tested models that can make specific size predictions for this association. A quantitative model that used the perceived ideology of the target group as the primary predictor of the ideology-prejudice relationship was developed with a representative sample of Americans (N = 4,940) and tested against models using the perceived status of and choice to belong to the target group as predictors. In four studies (total N = 2,093), ideology-prejudice associations were estimated, and these observed estimates were compared with the models’ predictions. The model that was based only on perceived ideology was the most parsimonious with the smallest errors. PMID:28394693
Frison, Severine; Kerac, Marko; Checchi, Francesco; Nicholas, Jennifer
2017-01-01
The assessment of the prevalence of acute malnutrition in children under five is widely used for the detection of emergencies, planning interventions, advocacy, and monitoring and evaluation. This study examined PROBIT methods, which convert the parameters (mean and standard deviation (SD)) of a normally distributed variable to a cumulative probability below any cut-off, to estimate acute malnutrition in children under five using Middle-Upper Arm Circumference (MUAC). We assessed the performance of: PROBIT Method I, with mean MUAC from the survey sample and MUAC SD from a database of previous surveys; and PROBIT Method II, with mean and SD of MUAC observed in the survey sample. Specifically, we generated sub-samples from 852 survey datasets, simulating 100 surveys for eight sample sizes. Overall, the methods were tested on 681,600 simulated surveys. PROBIT methods relying on sample sizes as small as 50 performed better than the classic method for estimating and classifying the prevalence of acute malnutrition. They had better precision in the estimation of acute malnutrition for all sample sizes and better coverage for smaller sample sizes, while having relatively little bias. They classified situations accurately for a threshold of 5% acute malnutrition. Both PROBIT methods had similar outcomes. PROBIT methods have a clear advantage in the assessment of acute malnutrition prevalence based on MUAC, compared with the classic method. Their use would require much lower sample sizes, thus enabling great time and resource savings and permitting timely and/or locally relevant prevalence estimates of acute malnutrition for a swift and well-targeted response.
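The conversion at the heart of both PROBIT methods is a single normal-CDF evaluation; a sketch, with an illustrative survey mean and SD and the conventional 125 mm MUAC cut-off:

```python
from scipy.stats import norm

def probit_prevalence(muac_mean, muac_sd, cutoff=125.0):
    """PROBIT estimate: P(MUAC < cutoff) under a normal MUAC distribution (mm)."""
    return norm.cdf((cutoff - muac_mean) / muac_sd)

# Method II uses the survey sample's own mean and SD (numbers here illustrative);
# Method I would substitute an SD taken from a database of previous surveys.
print(f"estimated acute malnutrition prevalence: {probit_prevalence(146.0, 12.5):.1%}")
```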
VNIR hyperspectral background characterization methods in adverse weather conditions
NASA Astrophysics Data System (ADS)
Romano, João M.; Rosario, Dalton; Roth, Luz
2009-05-01
Hyperspectral technology is currently being used by the military to detect regions of interest where potential targets may be located. Weather variability, however, may affect the ability of an algorithm to discriminate possible targets from background clutter. Nonetheless, different background characterization approaches may facilitate the ability of an algorithm to discriminate potential targets over a variety of weather conditions. In a previous paper, we introduced a new autonomous target-size-invariant background characterization process, the Autonomous Background Characterization (ABC), also known as the Parallel Random Sampling (PRS) method, which features a random sampling stage; a parallel process to mitigate the inclusion by chance of target samples into clutter background classes during random sampling; and a fusion of results at the end. In this paper, we demonstrate how different background characterization approaches are able to improve the performance of algorithms over a variety of challenging weather conditions. Using the Mahalanobis distance as the standard algorithm for this study, we compare the performance of different characterization methods, such as the global information, two-stage global information, and our proposed method, ABC, using data that were collected under a variety of adverse weather conditions. For this study, we used ARDEC's Hyperspectral VNIR Adverse Weather data collection, comprised of heavy, light, and transitional fog, light and heavy rain, and low-light conditions.
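For reference, the baseline detector here is straightforward: score each pixel by its Mahalanobis distance from background statistics estimated on whatever clutter samples the characterization method supplies. A sketch (the plain random background draw stands in for ABC/PRS's more careful parallel sampling, and the data are synthetic):

```python
import numpy as np

def mahalanobis_map(cube, background):
    """Per-pixel Mahalanobis distance of a hyperspectral cube (H, W, bands)
    from background statistics estimated on `background` (N, bands)."""
    mu = background.mean(axis=0)
    cov = np.cov(background, rowvar=False)
    cov_inv = np.linalg.pinv(cov)   # pseudo-inverse guards against singular covariance
    d = cube.reshape(-1, cube.shape[-1]) - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d).reshape(cube.shape[:2])

rng = np.random.default_rng(0)
cube = rng.normal(size=(64, 64, 30))                       # synthetic VNIR cube
bg = cube.reshape(-1, 30)[rng.choice(64 * 64, size=500, replace=False)]
scores = mahalanobis_map(cube, bg)                         # high scores flag potential targets
```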
Mafic and felsic igneous rocks at Gale crater
NASA Astrophysics Data System (ADS)
Sautter, Violaine; Cousin, Agnès; Mangold, Nicolas; Toplis, Michael; Fabre, Cécile; Forni, Olivier; Payré, Valérie; Gasnault, Olivier; Ollila, Anne; Rapin, William; Fisk, Martin; Meslin, Pierre-Yves; Wiens, Roger; Maurice, Sylvestre; Lasue, Jérémie; Newsom, Horton; Lanza, Nina
2015-04-01
The Curiosity rover landed at Gale, an early Hesperian age crater formed within Noachian terrains on Mars. The rover encountered a great variety of igneous rocks to the west of the Yellowknife Bay sedimentary unit (from sol 13 to 800), which are float rocks or clasts in conglomerates. Textural and compositional analyses using MastCam and the ChemCam Remote Micro Imager (RMI) and Laser Induced Breakdown Spectroscopy (LIBS) with a ˜300-500 µm laser spot led to the recognition of 53 massive (non-layered) igneous targets, both intrusive and effusive, ranging from mafic rocks, where feldspars form less than 50% of the rock, to felsic samples, where feldspar is the dominant mineral. From morphology, color, grain size, patina, and chemistry, at least 5 different groups of rocks have been identified: (1) a basaltic class with a shiny aspect, conchoidal fracture, and no visible grains (less than 0.2 mm) in a dark matrix with a few mm-sized light-toned crystals (21 targets); (2) a porphyritic trachyandesite class with light-toned, bladed and polygonal crystals 1-20 mm in length set in a dark gray mesostasis (11 targets); (3) light-toned trachytes with no visible grains, sometimes vesiculated or forming flat targets (6 targets); (4) microgabbro-norite (grain size < 1 mm) and gabbro-norite (grain size > 1 mm) showing dark and light-toned crystals in similar proportion (8 targets); (5) light-toned diorite/granodiorite showing coarse granular (> 4 mm) texture, either pristine or blocky, strongly weathered rocks (9 targets). Overall, these rocks comprise 2 distinct geochemical series: (i) an alkali suite (basanite, gabbro, trachy-andesite, and trachyte) including porphyritic and aphyric members; (ii) quartz-normative intrusives close to granodioritic composition. The former resembles felsic clasts recently described in two martian meteorites (NWA 7034 and 7533), the first Noachian breccias sampling the martian regolith. It is geochemically consistent with differentiation of liquids produced by low degrees of partial melting of the primitive martian mantle. The latter rock type is unlike anything proposed in the literature for Mars but resembles Archean TTGs encountered on Earth, related to the building of continental crust. This work thus provides the first in-situ detection of low-density leucocratic igneous rocks on Mars in the southern highlands.
Uranium carbide fission target R&D for RIA - an update
NASA Astrophysics Data System (ADS)
Greene, J. P.; Levand, A.; Nolen, J.; Burtseva, T.
2004-12-01
For the Rare Isotope Accelerator (RIA) facility, ISOL targets employing refractory compounds of uranium are being developed to produce radioactive ions for post-acceleration. The availability of refractory uranium compounds in forms that have good thermal conductivity, relatively high density, and adequate release properties for short-lived isotopes remains an important issue. Investigations using commercially obtained uranium carbide material prepared into targets with various binder materials have been carried out at ANL. Thin sample pellets have been produced for measurements of thermal conductivity using a new method based on electron bombardment, with the thermal radiation observed using a two-color optical pyrometer, performed on samples as a function of grain size, pressing pressure, and sintering temperature. Manufacture of uranium carbide powder has now been achieved at ANL. Simulations have been carried out on the thermal behavior of the secondary target assembly incorporating various heat shield configurations.
Efficient mitigation strategies for epidemics in rural regions.
Scoglio, Caterina; Schumm, Walter; Schumm, Phillip; Easton, Todd; Roy Chowdhury, Sohini; Sydney, Ali; Youssef, Mina
2010-07-13
Containing an epidemic at its origin is the most desirable mitigation. Epidemics have often originated in rural areas, with rural communities among the first affected. Disease dynamics in rural regions have received limited attention, and results of general studies cannot be directly applied, since population densities and human mobility factors are very different in rural regions from those in cities. We create a network model of a rural community in Kansas, USA, by collecting data on the contact patterns and computing rates of contact among a sampled population. We model the impact of different mitigation strategies by detecting closely connected groups of people and frequently visited locations. Within those groups and locations, we compare the effectiveness of random and targeted vaccinations using a Susceptible-Exposed-Infected-Recovered compartmental model on the contact network. Our simulations show that targeted vaccination of only 10% of the sampled population reduced the size of the epidemic by 34.5%. Additionally, if 10% of the population visiting one of the most popular locations is randomly vaccinated, the epidemic size is reduced by 19%. Our results suggest a new implementation of a highly effective strategy for targeted vaccinations through the use of popular locations in rural communities.
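A sketch of the comparison described above, with an off-the-shelf synthetic network standing in for the Kansas contact data and illustrative epidemic parameters: vaccinate 10% of nodes either at random or targeted by degree (a simple proxy for "closely connected" individuals), then compare final SEIR epidemic sizes:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(11)
G = nx.watts_strogatz_graph(1000, 6, 0.1, seed=11)  # stand-in for the rural contact network

def seir_final_size(G, vaccinated, beta=0.05, sigma=0.25, gamma=0.1, steps=300):
    """Discrete-time SEIR on a contact network; vaccinated nodes start Recovered."""
    state = {v: "S" for v in G}
    for v in vaccinated:
        state[v] = "R"
    for v in rng.choice([v for v in G if state[v] == "S"], size=5, replace=False):
        state[v] = "I"                                   # seed infections
    for _ in range(steps):
        new = dict(state)
        for v in G:
            if state[v] == "S":
                n_inf = sum(state[u] == "I" for u in G[v])
                if n_inf and rng.random() < 1 - (1 - beta) ** n_inf:
                    new[v] = "E"
            elif state[v] == "E" and rng.random() < sigma:
                new[v] = "I"
            elif state[v] == "I" and rng.random() < gamma:
                new[v] = "R"
        state = new
    # epidemic size approximated by ever-infected nodes, excluding the vaccinated
    return sum(s != "S" for s in state.values()) - len(vaccinated)

targeted = [v for v, _ in sorted(G.degree, key=lambda kv: -kv[1])[:100]]
random_vax = list(rng.choice(G.number_of_nodes(), size=100, replace=False))
print("targeted:", seir_final_size(G, targeted), " random:", seir_final_size(G, random_vax))
```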
Integrative genetic risk prediction using non-parametric empirical Bayes classification.
Zhao, Sihai Dave
2017-06-01
Genetic risk prediction is an important component of individualized medicine, but prediction accuracies remain low for many complex diseases. A fundamental limitation is the sample sizes of the studies on which the prediction algorithms are trained. One way to increase the effective sample size is to integrate information from previously existing studies. However, it can be difficult to find existing data that examine the target disease of interest, especially if that disease is rare or poorly studied. Furthermore, individual-level genotype data from these auxiliary studies are typically difficult to obtain. This article proposes a new approach to integrative genetic risk prediction of complex diseases with binary phenotypes. It accommodates possible heterogeneity in the genetic etiologies of the target and auxiliary diseases using a tuning parameter-free non-parametric empirical Bayes procedure, and can be trained using only auxiliary summary statistics. Simulation studies show that the proposed method can provide superior predictive accuracy relative to non-integrative as well as integrative classifiers. The method is applied to a recent study of pediatric autoimmune diseases, where it substantially reduces prediction error for certain target/auxiliary disease combinations. The proposed method is implemented in the R package ssa. © 2016, The International Biometric Society.
Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won
2012-01-01
Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for a representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times, using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
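A sketch of the simulation logic: build a synthetic stratified population, repeatedly draw proportionally allocated samples of 4,000, and check how far the sampled category distribution strays from the population one. The strata sizes and density-category proportions below are illustrative, not the NCSP values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative population: four density categories within three strata
strata = {"metropolitan": 700_000, "urban": 450_000, "rural": 190_000}
true_dist = [0.08, 0.37, 0.43, 0.12]            # assumed category proportions
pop = {s: rng.choice(4, size=n, p=true_dist) for s, n in strata.items()}

n_total, reps = 4_000, 1_000
frac = {s: n / sum(strata.values()) for s, n in strata.items()}  # proportional allocation
errors = []
for _ in range(reps):
    sample = np.concatenate([rng.choice(pop[s], size=round(n_total * frac[s]))
                             for s in strata])
    est = np.bincount(sample, minlength=4) / len(sample)
    errors.append(np.abs(est - true_dist).max())

print(f"max absolute error, 95th percentile over {reps} draws: {np.percentile(errors, 95):.4f}")
```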
Lower Limits on Aperture Size for an ExoEarth Detecting Coronagraphic Mission
NASA Technical Reports Server (NTRS)
Stark, Christopher C.; Roberge, Aki; Mandell, Avi; Clampin, Mark; Domagal-Goldman, Shawn D.; McElwain, Michael W.; Stapelfeldt, Karl R.
2015-01-01
The yield of Earth-like planets will likely be a primary science metric for future space-based missions that will drive telescope aperture size. Maximizing the exoEarth candidate yield is therefore critical to minimizing the required aperture. Here we describe a method for exoEarth candidate yield maximization that simultaneously optimizes, for the first time, the targets chosen for observation, the number of visits to each target, the delay time between visits, and the exposure time of every observation. This code calculates both the detection time and multiwavelength spectral characterization time required for planets. We also refine the astrophysical assumptions used as inputs to these calculations, relying on published estimates of planetary occurrence rates as well as theoretical and observational constraints on terrestrial planet sizes and classical habitable zones. Given these astrophysical assumptions, optimistic telescope and instrument assumptions, and our new completeness code that produces the highest yields to date, we suggest lower limits on the aperture size required to detect and characterize a statistically motivated sample of exoEarths.
van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald
2017-12-04
Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In this study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that point, as the formulas for the CIs imply, precision continued to improve, but the gain from the same additional number of GPs became smaller. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the measurement of hours worked each week by GPs varied strongly according to the number of GPs included and the frequency of measurements per GP during the week measured. The best balance between both dimensions will depend upon different circumstances, such as the target group and the budget available.
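The dependence of the CI on both dimensions can be sketched with a two-component variance model, Var(mean) ≈ σ²_between/n + σ²_within/(n·m), where n is the number of GPs and m the number of time-sampled measurements per GP; the SDs below are illustrative, not estimates from the study:

```python
import math

def ci_halfwidth(n_gps, m_per_gp, sd_between=10.0, sd_within=25.0, z=1.96):
    """Approximate 95% CI half-width (hours/week) for mean working hours,
    combining sample fluctuation (between GPs) and measurement fluctuation
    (within GP, m time-sampled observations each). SDs are illustrative."""
    var = sd_between**2 / n_gps + sd_within**2 / (n_gps * m_per_gp)
    return z * math.sqrt(var)

for n in (50, 100, 300):
    for m in (56, 168):   # one SMS per 3-h slot vs. one per hour, over one week
        print(f"n={n:3d} GPs, m={m:3d} pings: +/-{ci_halfwidth(n, m):.2f} h")
```

With these assumed SDs, the half-width at 300 GPs and one ping per 3-h slot comes out near 1 h, in line with the trade-off the abstract describes.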
Assessment of spray deposition with water-sensitive paper cards
USDA-ARS?s Scientific Manuscript database
Spatial distributions of spray droplets discharged from an airblast sprayer were sampled on pairs of absorbent paper (AP) and water-sensitive paper (WSP) targets at several distances from the sprayer. Spray solutions, containing a fluorescent tracer, were discharged from two size nozzles to achiev...
DNA-based species detection capabilities using laser transmission spectroscopy
Mahon, A. R.; Barnes, M. A.; Li, F.; Egan, S. P.; Tanner, C. E.; Ruggiero, S. T.; Feder, J. L.; Lodge, D. M.
2013-01-01
Early detection of invasive species is critical for effective biocontrol to mitigate potential ecological and economic damage. Laser transmission spectroscopy (LTS) is a powerful solution offering real-time, DNA-based species detection in the field. LTS can measure the size, shape and number of nanoparticles in a solution and was used here to detect size shifts resulting from hybridization of the polymerase chain reaction product to nanoparticles functionalized with species-specific oligonucleotide probes or with the species-specific oligonucleotide probes alone. We carried out a series of DNA detection experiments using the invasive freshwater quagga mussel (Dreissena bugensis) to evaluate the capability of the LTS platform for invasive species detection. Specifically, we tested LTS sensitivity to (i) DNA concentrations of a single target species, (ii) the presence of a target species within a mixed sample of other closely related species, (iii) species-specific functionalized nanoparticles versus species-specific oligonucleotide probes alone, and (iv) amplified DNA fragments versus unamplified genomic DNA. We demonstrate that LTS is a highly sensitive technique for rapid target species detection, with detection limits in the picomolar range, capable of successful identification in multispecies samples containing target and non-target species DNA. These results indicate that the LTS DNA detection platform will be useful for field application of target species. Additionally, we find that LTS detection is effective with species-specific oligonucleotide tags alone or when they are attached to polystyrene nanobeads and with both amplified and unamplified DNA, indicating that the technique may also have versatility for broader applications. PMID:23015524
Cook, Jonathan A; Hislop, Jennifer M; Altman, Doug G; Briggs, Andrew H; Fayers, Peter M; Norrie, John D; Ramsay, Craig R; Harvey, Ian M; Vale, Luke D
2014-06-01
Central to the design of a randomised controlled trial (RCT) is a calculation of the number of participants needed. This is typically achieved by specifying a target difference, which enables the trial to identify a difference of a particular magnitude should one exist. Seven methods have been proposed for formally determining what the target difference should be. However, in practice, it may be driven by convenience or some other informal basis. Our objective was to determine current practice regarding the specification of the target difference by surveying trialists. Two surveys were conducted: (1) members of the Society for Clinical Trials (SCT): participants were invited to complete an online survey through the society's email distribution list, and respondents were asked about their awareness of, use of, and willingness to recommend methods; (2) leading UK- and Ireland-based trialists: the survey was sent to UK Clinical Research Collaboration registered Clinical Trials Units, Medical Research Council UK Hubs for Trial Methodology Research, and the Research Design Services of the National Institute for Health Research. This survey also included questions about the most recent trial developed by the respondent's group. Survey 1: of the 1182 members on the SCT membership email distribution list, 180 responses were received (15%). Awareness of methods ranged from 69 (38%) for health economic methods to 162 (90%) for the pilot study method. Willingness to recommend among those who had used a particular method ranged from 56% for the opinion-seeking method to 89% for the review of evidence-base method. Survey 2: of the 61 surveys sent out, 34 (56%) responses were received. Awareness of methods ranged from 33 (97%) for the review of evidence-base and pilot methods to 14 (41%) for the distribution method. The highest level of willingness to recommend among users was for the anchor method (87%). Based upon the most recent trial, the target difference was usually one viewed as important by a stakeholder group, mostly also viewed as a realistic difference given the interventions under evaluation, and sometimes one that led to an achievable sample size. The response rates achieved were relatively low despite the surveys being short, well presented, and having utilised reminders. Substantial variations in practice exist, with awareness, use, and willingness to recommend methods varying substantially. The findings support the view that sample size calculation is a more complex process than would appear to be the case from trial reports and protocols. Guidance on approaches for sample size estimation may increase both awareness and use of appropriate formal methods. © The Author(s), 2014.
NASA Astrophysics Data System (ADS)
Jux, Maximilian; Finke, Benedikt; Mahrholz, Thorsten; Sinapius, Michael; Kwade, Arno; Schilde, Carsten
2017-04-01
Several Al(OH)O (boehmite) dispersions in an epoxy resin are produced in a kneader to study the mechanistic correlation between nanoparticle size and the mechanical properties of the prepared nanocomposites. The agglomerate size is set by a targeted variation of solid content and temperature during dispersion, resulting in different levels of stress intensity and thus different final agglomerate sizes during the process. The suspension viscosity was used to estimate the stress energy in laminar shear flow. Agglomerate size measurements are executed via dynamic light scattering to ensure the quality of the produced dispersions. Furthermore, various nanocomposite samples are prepared for three-point bending, tension, and fracture toughness tests. The screening of the size effect is executed with at least seven samples per agglomerate size and test method. The variation of solid content is found to be a reliable method to adjust the agglomerate size between 138 and 354 nm during dispersion. The size effect on the Young's modulus and the critical stress intensity is only marginal. Nevertheless, there is a statistically relevant trend showing a linear increase with decreasing agglomerate size. In contrast, the size effect is more pronounced for the sample's strain and stress at failure. Unlike microscaled agglomerates or particles, which lead to embrittlement of the composite material, nanoscaled agglomerates or particles allow the composite elongation to remain nearly at the level of the base material. The observed effect is valid for agglomerate sizes between 138 and 354 nm and a particle mass fraction of 10 wt%.
Ambrosius, Walter T; Polonsky, Tamar S; Greenland, Philip; Goff, David C; Perdue, Letitia H; Fortmann, Stephen P; Margolis, Karen L; Pajewski, Nicholas M
2012-04-01
Although observational evidence has suggested that the measurement of coronary artery calcium (CAC) may improve risk stratification for cardiovascular events and thus help guide the use of lipid-lowering therapy, this contention has not been evaluated within the context of a randomized trial. The Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) trial is proposed as a randomized study in participants at low intermediate risk of future coronary heart disease (CHD) events to evaluate whether CAC testing leads to improved patient outcomes. To describe the challenges encountered in designing a prototypical screening trial and to examine the impact of uncertainty on power. The VIEW trial was designed as an effectiveness clinical trial to examine the benefit of CAC testing to guide therapy on a primary outcome consisting of a composite of nonfatal myocardial infarction, probable or definite angina with revascularization, resuscitated cardiac arrest, nonfatal stroke (not transient ischemic attack (TIA)), CHD death, stroke death, other atherosclerotic death, or other cardiovascular disease (CVD) death. Many critical choices were faced in designing the trial, including (1) the choice of primary outcome, (2) the choice of therapy, (3) the target population with corresponding ethical issues, (4) specifications of assumptions for sample size calculations, and (5) impact of uncertainty in these assumptions on power/sample size determination. We have proposed a sample size of 30,000 (800 events), which provides 92.7% power. Alternatively, sample sizes of 20,228 (539 events), 23,138 (617 events), and 27,078 (722 events) provide 80%, 85%, and 90% power. We have also allowed for uncertainty in our assumptions by computing average power integrated over specified prior distributions. This relaxation of specificity indicates a reduction in power, dropping to 89.9% (95% confidence interval (CI): 89.8-89.9) for a sample size of 30,000. Samples sizes of 20,228, 23,138, and 27,078 provide power of 78.0% (77.9-78.0), 82.5% (82.5-82.6), and 87.2% (87.2-87.3), respectively. These power estimates are dependent on form and parameters of the prior distributions. Despite the pressing need for a randomized trial to evaluate the utility of CAC testing, conduct of such a trial requires recruiting a large patient population, making efficiency of critical importance. The large sample size is primarily due to targeting a study population at relatively low risk of a CVD event. Our calculations also illustrate the importance of formally considering uncertainty in power calculations of large trials as standard power calculations may tend to overestimate power.
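A sketch of the "average power" computation described above: draw the control-arm event rate and treatment effect from prior distributions, evaluate a standard two-proportion power formula at each draw, and average. The priors and effect parameterization here are illustrative, not the VIEW design values:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def power_two_prop(n_per_arm, p_ctrl, rr, alpha=0.05):
    """Normal-approximation power for comparing two event proportions."""
    p_trt = p_ctrl * rr
    pbar = (p_ctrl + p_trt) / 2
    se0 = np.sqrt(2 * pbar * (1 - pbar) / n_per_arm)
    se1 = np.sqrt((p_ctrl * (1 - p_ctrl) + p_trt * (1 - p_trt)) / n_per_arm)
    z = (np.abs(p_ctrl - p_trt) - norm.ppf(1 - alpha / 2) * se0) / se1
    return norm.cdf(z)

# Average power over assumed priors on event rate and relative risk
p_ctrl = rng.beta(40, 1300, size=10_000)     # uncertainty in control event rate (~3%)
rr = rng.normal(0.80, 0.05, size=10_000)     # uncertainty in treatment relative risk
avg = power_two_prop(15_000, p_ctrl, rr).mean()
print(f"average power at n=30,000 total: {avg:.3f}")
```

Averaging over the priors rather than plugging in point estimates is what produces the modest power reduction the abstract reports.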
Sampling strategies and biodiversity of influenza A subtypes in wild birds
Olson, Sarah H.; Parmley, Jane; Soos, Catherine; Gilbert, Martin; Latore-Margalef, Neus; Hall, Jeffrey S.; Hansbro, Phillip M.; Leighton, Frank; Munster, Vincent; Joly, Damien
2014-01-01
Wild aquatic birds are recognized as the natural reservoir of avian influenza A viruses (AIV), but across high- and low-pathogenic AIV strains, scientists have yet to rigorously identify the most competent hosts for the various subtypes. We examined 11,870 GenBank records to provide a baseline inventory of, and insight into, patterns of global AIV subtype diversity and richness. Further, we conducted an extensive literature review and communicated directly with scientists to accumulate data from 50 non-overlapping studies and over 250,000 birds to assess the status of historical sampling effort. We then built virus subtype sample-based accumulation curves to better estimate sample size targets that capture a specific percentage of virus subtype richness at seven sampling locations. Our study identifies a sampling methodology that will detect an estimated 75% of circulating virus subtypes from a targeted bird population and outlines future surveillance and research priorities needed to explore the influence of host and virus biodiversity on emergence and transmission.
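As a sketch of the sample-based accumulation idea, the following snippet builds a rarefaction curve by repeated random ordering of samples and reads off the smallest sample size capturing 75% of the observed subtype richness. The subtype labels and counts are made up for illustration, not taken from the paper's data.

```python
import numpy as np

def accumulation_curve(subtype_by_sample, n_perm=200, seed=0):
    """Sample-based rarefaction: mean number of distinct subtypes observed
    as a function of the number of birds sampled, averaged over random
    sample orderings. `subtype_by_sample` holds one label per positive bird."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(subtype_by_sample)
    n = len(labels)
    curve = np.zeros(n)
    for _ in range(n_perm):
        order = rng.permutation(n)
        seen = set()
        for i, idx in enumerate(order):
            seen.add(labels[idx])
            curve[i] += len(seen)
    return curve / n_perm

# Toy data: 3 subtypes with unequal detection frequencies
data = ["H3N8"] * 50 + ["H4N6"] * 30 + ["H6N2"] * 5
richness = accumulation_curve(data)
# Smallest number of sampled birds reaching 75% of observed subtype richness
target = 0.75 * richness[-1]
print(int(np.argmax(richness >= target)) + 1)
```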
Tang, Gong; Kong, Yuan; Chang, Chung-Chou Ho; Kong, Lan; Costantino, Joseph P
2012-01-01
In a phase III multi-center cancer clinical trial or a large public health study, the sample size is predetermined to achieve desired power, and study participants are enrolled from tens or hundreds of participating institutions. As accrual approaches the target size, the coordinating data center needs to project the accrual closure date on the basis of the observed accrual pattern and notify the participating sites several weeks in advance. In the past, projections were simply based on crude assessment, and conservative measures were incorporated in order to achieve the target accrual size. This approach often resulted in excessive accrual and subsequently unnecessary financial burden on the study sponsors. Here we propose a discrete-time Poisson process-based method to estimate the accrual rate at the time of projection and subsequently the trial closure date. To ensure that the target size is reached with high confidence, we also propose a conservative method for the closure date projection. The proposed method was illustrated through the analysis of the accrual data of the National Surgical Adjuvant Breast and Bowel Project trial B-38. The results showed that application of the proposed method could help save a considerable amount of expenditure in patient management without compromising the accrual goal in multi-center clinical trials. Copyright © 2012 John Wiley & Sons, Ltd.
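A minimal sketch of the underlying idea follows, assuming a homogeneous Poisson accrual rate; the paper's discrete-time model and conservative projection are more elaborate, and the accrual numbers here are invented for illustration. The conservative variant replaces the rate estimate with an exact one-sided lower confidence bound on the Poisson rate.

```python
import numpy as np
from scipy import stats

def project_closure(accrued, target, days_observed, confidence=0.90):
    """Project days remaining until `target` accrual, treating daily accrual
    as Poisson with rate estimated from the data observed to date.
    A conservative projection uses a lower confidence bound on the rate."""
    rate_hat = accrued / days_observed                     # MLE of daily rate
    remaining = target - accrued
    # Exact one-sided lower bound on the Poisson rate via the chi-square link
    rate_lcb = stats.chi2.ppf(1 - confidence, 2 * accrued) / (2 * days_observed)
    return remaining / rate_hat, remaining / rate_lcb      # (point, conservative)

point, conservative = project_closure(accrued=4500, target=4800, days_observed=900)
print(f"point estimate: {point:.0f} days; conservative: {conservative:.0f} days")
```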
Magnetic microscopic imaging with an optically pumped magnetometer and flux guides
Kim, Young Jin; Savukov, Igor Mykhaylovich; Huang, Jen -Huang; ...
2017-01-23
Here, by combining an optically pumped magnetometer (OPM) with flux guides (FGs) and by installing a sample platform on automated translation stages, we have implemented an ultra-sensitive FG-OPM scanning magnetic imaging system that is capable of detecting magnetic fields of ~20 pT with spatial resolution better than 300 μm (expected to reach ~10 pT sensitivity and ~100 μm spatial resolution with optimized FGs). As a demonstration of one possible application of the FG-OPM device, we conducted magnetic imaging of micron-size magnetic particles. Magnetic imaging of such particles, including nano-particles and clusters, is very important for many fields, especially for medical cancer diagnostics and biophysics applications. For rapid, precise magnetic imaging, we constructed an automatic scanning system, which holds and moves a target sample containing magnetic particles at a given stand-off distance from the FG tips. We show that the device was able to produce clear microscopic magnetic images of 10 μm-size magnetic particles. In addition, we also numerically investigated how the magnetic flux from a target sample at a given stand-off distance is transmitted to the OPM vapor cell.
Computed Tomography to Estimate the Representative Elementary Area for Soil Porosity Measurements
Borges, Jaqueline Aparecida Ribaski; Pires, Luiz Fernando; Belmont Pereira, André
2012-01-01
Computed tomography (CT) is a technique that provides images of different solid and porous materials. CT could be an ideal tool to study representative sizes of soil samples because of its noninvasive character. Such representative elementary sizes (RESs) have attracted the attention of many researchers in the soil physics field owing to the strong relationship between physical properties and the size of the soil sample. In the current work, data from gamma-ray CT were used to assess the RES in measurements of soil porosity (ϕ). For the statistical analysis, we examined the full width at half maximum (FWHM) of distributions fitted to ϕ over areas of different size (1.2 to 1162.8 mm²) selected inside the tomographic images. The results indicate that samples with a section area of at least 882.1 mm² provided representative values of ϕ for the studied Brazilian tropical soil. PMID:22666133
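For intuition, a Gaussian fit makes the FWHM criterion concrete: FWHM = 2√(2 ln 2)·σ, so the FWHM of the fitted ϕ distribution shrinks as the evaluation area approaches the representative elementary area. A toy sketch, using simulated porosity values rather than the study's data:

```python
import numpy as np

def fwhm_normal(porosity_values):
    """FWHM of a normal distribution fitted to porosity measurements:
    FWHM = 2*sqrt(2*ln 2)*sigma. A narrowing FWHM with increasing
    evaluation area signals approach to the representative elementary area."""
    sigma = np.std(porosity_values, ddof=1)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

# Toy example: porosity estimates from small vs. large evaluation areas
rng = np.random.default_rng(3)
small_area = rng.normal(0.45, 0.08, 200)   # noisy small-window estimates
large_area = rng.normal(0.45, 0.02, 200)   # stabler large-window estimates
print(fwhm_normal(small_area), fwhm_normal(large_area))
```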
Michen, Benjamin; Geers, Christoph; Vanhecke, Dimitri; Endes, Carola; Rothen-Rutishauser, Barbara; Balog, Sandor; Petri-Fink, Alke
2015-01-01
Standard transmission electron microscopy (TEM) nanoparticle sample preparation generally requires the complete removal of the suspending liquid. Drying often introduces artifacts, which can obscure the state of the dispersion prior to drying and preclude the automated image analysis typically used to obtain number-weighted particle size distributions. Here we present a straightforward protocol that prevents the onset of drying artifacts, thereby preserving the in situ colloidal features of nanoparticles during TEM sample preparation. This is achieved by adding a suitable macromolecular agent to the suspension. Both research-relevant and economically relevant particles with high polydispersity and/or shape anisotropy are easily characterized following our approach (http://bsa.bionanomaterials.ch), which allows for rapid and quantitative classification in terms of dimensionality and size: features that are major targets of European Union recommendations and legislation. PMID:25965905
McClure, Leslie A; Szychowski, Jeff M; Benavente, Oscar; Hart, Robert G; Coffey, Christopher S
2016-10-01
The use of adaptive designs has been increasing in randomized clinical trials. Sample size re-estimation is a type of adaptation in which nuisance parameters are estimated at an interim point in the trial and the sample size is re-computed based on these estimates. The Secondary Prevention of Small Subcortical Strokes study was a randomized clinical trial assessing the impact of single- versus dual-antiplatelet therapy and control of systolic blood pressure to a higher (130-149 mmHg) versus lower (<130 mmHg) target on recurrent stroke risk in a two-by-two factorial design. A sample size re-estimation was performed during the Secondary Prevention of Small Subcortical Strokes study, resulting in an increase from the planned sample size of 2500 to 3020, and we sought to determine the impact of the sample size re-estimation on the study results. We assessed the results of the primary efficacy and safety analyses with the full 3020 patients and compared them to the results that would have been observed had randomization ended with 2500 patients. The primary efficacy outcome considered was recurrent stroke, and the primary safety outcomes were major bleeds and death. We computed incidence rates for the efficacy and safety outcomes and used Cox proportional hazards models to examine the hazard ratios for each of the two treatment interventions (i.e. the antiplatelet and blood pressure interventions). In the antiplatelet intervention, the hazard ratio was not materially modified by increasing the sample size, nor did the conclusions regarding the efficacy of mono- versus dual-therapy change: there was no difference in the effect of dual- versus monotherapy on the risk of recurrent stroke (n = 3020: HR (95% confidence interval) 0.92 (0.72, 1.2), p = 0.48; n = 2500: HR 1.0 (0.78, 1.3), p = 0.85). With respect to the blood pressure intervention, increasing the sample size resulted in less certainty in the results, as the hazard ratio for the higher versus lower systolic blood pressure target approached, but did not achieve, statistical significance with the larger sample (n = 3020: HR 0.81 (0.63, 1.0), p = 0.089; n = 2500: HR 0.89 (0.68, 1.17), p = 0.40). The results from the safety analyses were similar with 3020 and 2500 patients for both study interventions. Other trial-related factors, such as contracts, finances, and study management, were impacted as well. Adaptive designs can have benefits in randomized clinical trials, but do not always result in significant findings. The impact of adaptive designs should be measured in terms of both trial results and practical issues related to trial management. More post hoc analyses of study adaptations will lead to better understanding of the balance between the benefits and the costs. © The Author(s) 2016.
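As background, a generic sample size re-estimation step for an event-driven trial might look like the sketch below, which re-computes the required enrollment after an interim update of the nuisance event rate. The Schoenfeld-based formula and all numbers are illustrative assumptions, not the SPS3 procedure.

```python
import numpy as np
from scipy import stats

def reestimate_n(interim_event_rate, hazard_ratio, power=0.90, alpha=0.05):
    """Blinded sample-size re-estimation sketch for an event-driven trial:
    recompute required events via the Schoenfeld formula, then the number
    of patients needed given the pooled event rate observed at interim."""
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    d_required = 4 * (za + zb) ** 2 / np.log(hazard_ratio) ** 2
    return int(np.ceil(d_required / interim_event_rate))

# Planned with an assumed 10% event rate; interim data suggest only 8.3%
print(reestimate_n(0.100, hazard_ratio=0.75))
print(reestimate_n(0.083, hazard_ratio=0.75))
```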
Motion mitigation for lung cancer patients treated with active scanning proton therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grassberger, Clemens, E-mail: Grassberger.Clemens@mgh.harvard.edu; Dowdell, Stephen; Sharp, Greg
2015-05-15
Purpose: Motion interplay can affect the tumor dose in scanned proton beam therapy. This study assesses the ability of rescanning and gating to mitigate interplay effects during lung treatments. Methods: The treatments of five lung cancer patients [48 Gy(RBE)/4fx] with varying tumor size (21.1–82.3 cm³) and motion amplitude (2.9–30.6 mm) were simulated employing 4D Monte Carlo. The authors investigated two spot sizes (σ ∼ 12 and ∼3 mm), three rescanning techniques (layered, volumetric, breath-sampled volumetric) and respiratory gating with a 30% duty cycle. Results: For 4/5 patients, layered rescanning 6/2 times (for the small/large spot size) maintains equivalent uniform dose within the target >98% for a single fraction. Breath sampling the timing of rescanning is ∼2 times more effective than the same number of continuous rescans. Volumetric rescanning is sensitive to synchronization effects, which were observed in 3/5 patients, though not for layered rescanning. For the large spot size, rescanning compared favorably with gating in terms of time requirements, i.e., 2x-rescanning is on average a factor ∼2.6 faster than gating for this scenario. For the small spot size, however, 6x-rescanning takes on average 65% longer compared to gating. Rescanning has no effect on normal lung V20 and mean lung dose (MLD), though it reduces the maximum lung dose by on average 6.9 ± 2.4/16.7 ± 12.2 Gy(RBE) for the large and small spot sizes, respectively. Gating leads to a similar reduction in maximum dose and additionally reduces V20 and MLD. Breath-sampled rescanning is most successful in reducing the maximum dose to the normal lung. Conclusions: Both rescanning (2–6 times, depending on the beam size) and gating were able to mitigate interplay effects in the target for 4/5 patients studied. Layered rescanning is superior to volumetric rescanning, as the latter suffers from synchronization effects in 3/5 patients studied. Gating minimizes the irradiated volume of normal lung more efficiently, while breath-sampled rescanning is superior in reducing maximum doses to organs at risk.
78 FR 70015 - Proposed Information Collection; Comment Request; Large Pelagic Fishing Survey
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-22
...) target sample size from 10,780 to 15,900 interviews for Northeast and Southeast combined. Add up to five questions to the LPTS questionnaire. Add a non-response follow-up survey to the LPTS in the Southeast region... from 1,500 to 1,000 interviews.
Experimental scheme and restoration algorithm of block compression sensing
NASA Astrophysics Data System (ADS)
Zhang, Linxia; Zhou, Qun; Ke, Jun
2018-01-01
Compressed sensing (CS) exploits the sparseness of a target to obtain its image from far fewer data than the Nyquist sampling theorem requires. In this paper, we study the hardware implementation of a block compressed sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, orthogonal matching pursuit (OMP) and total variation (TV) minimization, are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
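A compact illustration of one of the two reconstruction algorithms: the sketch below implements plain orthogonal matching pursuit on a single vectorized block with a random Gaussian sensing matrix. The dimensions are chosen arbitrarily; the paper's hardware measurement matrix and block sizes are not reproduced here.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of the
    measurement matrix A and least-squares fit y on the selected support."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# Recover a k-sparse block from m < n random measurements
rng = np.random.default_rng(0)
n, m, k = 64, 32, 4                        # one image block, vectorized
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian sensing matrix
y = A @ x_true
x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x_true))      # ~0 for well-conditioned problems
```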
NASA Astrophysics Data System (ADS)
Chen, Zi-Yu; Li, Jian-Feng; Yu, Yong; Wang, Jia-Xiang; Li, Xiao-Ya; Peng, Qi-Xian; Zhu, Wen-Jun
2012-11-01
The influences of lateral target size on hot electron production and electromagnetic pulse emission from laser interaction with metallic targets have been investigated. Particle-in-cell simulations at high laser intensities show that the yield of hot electrons tends to increase with lateral target size, because the larger surface area reduces the electrostatic field on the target, owing to its expansion along the target surface. At lower laser intensities and longer time scales, experimental data characterizing electromagnetic pulse emission as a function of lateral target size also show target-size effects. Charge separation has been observed, and larger targets tend to have lower target potentials. The increase in radiation strength and the downshift in radiation frequency with increasing lateral target size can be interpreted using a simple model of the electrical capacity of the target.
X-ray microscopy using reflection targets based on SEM with tungsten filament
NASA Astrophysics Data System (ADS)
Liu, Junbiao; Ma, Yutian; Zhao, Weixia; Niu, Geng; Chu, Mingzhang; Yin, Bohua; Han, Li; Liu, Baodong
2016-10-01
X-ray micro/nano imaging developed from conventional x-ray tomography; it not only provides nondestructive testing with higher-resolution measurement, but can also examine materials or structures with low atomic number and low density. A source with a micro-focal spot is one of the key components of x-ray micro/nano imaging. A focused electron beam from an SEM bombarding a metal target can generate x-rays from an ultra-small spot, so it is convenient to set up an x-ray microscope based on an SEM for laboratory use. This paper describes a new x-ray microscope using reflection targets, based on an FEI Quanta600 SEM with a tungsten filament. The flat-panel detector is placed outside the vacuum chamber, with a 300 μm-thick Be window isolating the vacuum from the air. A stage with 3 degrees of freedom is added to adjust the position of the target, and the SEM's sample stage is used to move the sample. The target is shaped as a cone with a 60° half-cone angle to maximize the x-ray dosage. The Be window attenuates the x-rays by about 25%. Finally, a line-pair card is used to evaluate the resolution; the results show that the system can resolve features smaller than 750 nm with an acceleration voltage of 30 kV, a beam current of 160 nA, an SEM working distance of 5 mm, and a detector acquisition time of 60 s.
Du, Feng; Yin, Yue; Qi, Yue; Zhang, Kan
2014-08-01
In the present study, we examined whether a peripheral size-singleton distractor that matches the target-distractor size relation can capture attention and disrupt central target identification. Three experiments consistently showed that a size singleton that matches the target-distractor size relation cannot capture attention when it appears outside of the attentional window, even though the same size singleton produces a cuing effect. In addition, a color singleton that matches the target color, instead of a size singleton that matches the target-distractor size relation, captures attention when it is outside of the attentional window. Thus, a size-relation-matched distractor is much weaker than a color-matched distractor in capturing attention and cannot capture attention when the distractor appears outside of the attentional window.
NASA Astrophysics Data System (ADS)
Dasher, D. H.; Lomax, T. J.; Bethe, A.; Jewett, S.; Hoberg, M.
2016-02-01
A regional probabilistic survey of 20 randomly selected stations, at which water and sediments were sampled, was conducted over an area of Simpson Lagoon and Gwydyr Bay in the Beaufort Sea adjacent to Prudhoe Bay, Alaska, in 2014. Sampling parameters included water-column temperature, salinity, dissolved oxygen, chlorophyll a, and nutrients, and sediment macroinvertebrates, chemistry (i.e., trace metals and hydrocarbons), and grain size. The 2014 probabilistic survey design allows inferences to be made about environmental status, for instance the spatial or areal distribution of sediment trace metals within the design area sampled. Historically, since the 1970s, a number of monitoring studies have been conducted in this estuary area using a targeted rather than a regional probabilistic design. Targeted non-random designs were utilized to assess specific points of interest and cannot be used to make inferences about distributions of environmental parameters. Because the environmental monitoring objectives of probabilistic and targeted designs differ, there has been limited assessment of whether benefits exist to combining the two approaches. This study evaluates whether a combined approach, using the 2014 probabilistic survey sediment trace metal and macroinvertebrate results together with historical targeted monitoring data, can provide a new perspective for better understanding the environmental status of these estuaries.
A closer look at the size of the gaze-liking effect: a preregistered replication.
Tipples, Jason; Pecchinenda, Anna
2018-04-30
This study is a direct replication of the gaze-liking effect using the same design, stimuli and procedure. The gaze-liking effect describes the tendency for people to rate objects as more likeable when they have recently seen a person repeatedly gaze toward rather than away from the object. However, as subsequent studies show considerable variability in the size of this effect, we sampled a larger number of participants (N = 98) than the original study (N = 24) to gain a more precise estimate of the gaze-liking effect size. Our results indicate a much smaller standardised effect size (dz = 0.02) than that of the original study (dz = 0.94). Our smaller effect size was not due to general insensitivity to eye-gaze effects, because the same sample showed a clear (dz = 1.09) gaze-cuing effect: faster reaction times when eyes looked toward versus away from target objects. We discuss the implications of our findings for future studies wishing to study the gaze-liking effect.
Eduardoff, Mayra; Xavier, Catarina; Strobl, Christina; Casas-Vargas, Andrea; Parson, Walther
2017-01-01
The analysis of mitochondrial DNA (mtDNA) has proven useful in forensic genetics and ancient DNA (aDNA) studies, where specimens are often highly compromised and DNA quality and quantity are low. In forensic genetics, the mtDNA control region (CR) is commonly sequenced using established Sanger-type Sequencing (STS) protocols involving fragment sizes down to approximately 150 base pairs (bp). Recent developments include Massively Parallel Sequencing (MPS) of (multiplex) PCR-generated libraries using the same amplicon sizes. Molecular genetic studies on archaeological remains that harbor more degraded aDNA have pioneered alternative approaches to target mtDNA, such as capture hybridization and primer extension capture (PEC) methods followed by MPS. These assays target smaller mtDNA fragment sizes (down to 50 bp or less), and have proven to be substantially more successful in obtaining useful mtDNA sequences from these samples compared to electrophoretic methods. Here, we present the modification and optimization of a PEC method, earlier developed for sequencing the Neanderthal mitochondrial genome, with forensic applications in mind. Our approach was designed for a more sensitive enrichment of the mtDNA CR in a single-tube assay and short laboratory turnaround times, thus complying with forensic practices. We characterized the method using sheared, high-quantity mtDNA (six samples), and tested challenging forensic samples (n = 2) as well as compromised solid tissue samples (n = 15) up to 8 kyr of age. The PEC MPS method produced reliable and plausible mtDNA haplotypes that were useful in the forensic context. It yielded plausible data in samples that did not provide results with STS and other MPS techniques. We addressed the issue of contamination by including four generations of negative controls, and discuss the results in the forensic context. We finally offer perspectives for future research to enable the validation and accreditation of the PEC MPS method for final implementation in forensic genetic laboratories. PMID:28934125
Hafeez, Mian A; Shivaramaiah, Srichaitanya; Dorsey, Kristi Moore; Ogedengbe, Mosun E; El-Sherry, Shiem; Whale, Julia; Cobean, Julie; Barta, John R
2015-05-01
Species-specific PCR primers targeting the mitochondrial cytochrome c oxidase subunit I (mtCOI) locus were generated that allow for the specific identification of the most common Eimeria species infecting turkeys (i.e., Eimeria adenoeides, Eimeria meleagrimitis, Eimeria gallopavonis, Eimeria meleagridis, Eimeria dispersa, and Eimeria innocua). PCR reaction chemistries were optimized with respect to divalent cation (MgCl2) and dNTP concentrations, as well as PCR cycling conditions (particularly anneal temperature for primers). Genomic DNA samples from single oocyst-derived lines of six Eimeria species were tested to establish specificity and sensitivity of these newly designed primer pairs. A mixed 60-ng total DNA sample containing 10 ng of each of the six Eimeria species was used as DNA template to demonstrate specific amplification of the correct product using each of the species-specific primer pairs. Ten nanograms of each of the five non-target Eimeria species was pooled to provide a non-target, control DNA sample suitable to test the specificity of each primer pair. The amplifications of the COI region with species-specific primer pairs from pooled samples yielded products of expected sizes (209 to 1,012 bp) and no amplification of non-target Eimeria sp. DNA was detected using the non-target, control DNA samples. These primer pairs specific for Eimeria spp. of turkeys did not amplify any of the seven Eimeria species infecting chickens. The newly developed PCR primers can be used as a diagnostic tool capable of specifically identifying six turkey Eimeria species; additionally, sequencing of the PCR amplification products yields sequence-based genotyping data suitable for identification and molecular phylogenetics.
NASA Astrophysics Data System (ADS)
Sooter, Letha J.; Stratis-Cullum, Dimitra N.; Zhang, Yanting; Daugherty, Patrick S.; Soh, H. Tom; Pellegrino, Paul; Stagliano, Nancy
2007-09-01
Immunochromatography is a rapid, reliable, and cost-effective method of detecting biowarfare agents. The format is similar to that of an over-the-counter pregnancy test. A sample is applied to one end of a cassette, and then a control line, and possibly a sample line, are visualized at the other end of the cassette. The test is based upon a sandwich assay. For the control, a line of Protein A is immobilized on the membrane. Gold nanoparticle-bound IgG flows through the membrane and binds the Protein A, creating a visible line on the membrane. For the sample, one epitope is immobilized on the membrane and another epitope is attached to gold nanoparticles. The sample binds gold-bound epitope, travels through the membrane, and binds membrane-bound epitope. The two epitopes are not cross-reactive; therefore, a sample line is only visible if the sample is present. In order to efficiently screen for binders to a sample target, a novel Continuous Magnetic Activated Cell Sorter (CMACS) has been developed on a disposable, microfluidic platform. The CMACS chip quickly sorts E. coli peptide libraries for target binders with high affinity. Peptide libraries are composed of approximately ten million bacteria, each displaying a different peptide on its surface. The target of interest is conjugated to a micrometer-sized magnetic particle. After the library and the target are incubated together to allow binding, the mixture is applied to the CMACS chip. In the presence of patterned nickel and an external magnet, the bead-bound bacteria are separated from the bulk material. The bead fraction is added to bacterial growth media, where any attached E. coli grow and divide. These cells are cloned and sequenced, and the peptides are assayed for target-binding affinity. As a proof of principle, assays were developed for human C-reactive protein. More defense-relevant targets are currently being pursued.
Frequency Rates and Correlates of Contrapower Harassment in Higher Education
ERIC Educational Resources Information Center
DeSouza, Eros R.
2011-01-01
The current study investigated incivility, sexual harassment, and racial-ethnic harassment simultaneously when the targets were faculty members and the perpetrators were students (i.e., academic contrapower harassment; ACH). The sample consisted of 257 faculty members (90% were White and 53% were women) from a medium-sized state university in the…
Problems and Limitations in Studies on Screening for Language Delay
ERIC Educational Resources Information Center
Eriksson, Marten; Westerlund, Monica; Miniscalco, Carmela
2010-01-01
This study discusses six common methodological limitations in screening for language delay (LD) as illustrated in 11 recent studies. The limitations are (1) whether the studies define a target population, (2) whether the recruitment procedure is unbiased, (3) attrition, (4) verification bias, (5) small sample size and (6) inconsistencies in choice…
Carbonate and silicate rock standards for cosmogenic 36Cl
NASA Astrophysics Data System (ADS)
Mechernich, Silke; Dunai, Tibor J.; Binnie, Steven A.; Goral, Tomasz; Heinze, Stefan; Dewald, Alfred; Benedetti, Lucilla; Schimmelpfennig, Irene; Phillips, Fred; Marrero, Shasta; Akif Sarıkaya, Mehmet; Gregory, Laura C.; Phillips, Richard J.; Wilcken, Klaus; Simon, Krista; Fink, David
2017-04-01
The number of studies using cosmogenic nuclides has increased multi-fold during the last two decades, and several new dedicated target preparation laboratories and Accelerator Mass Spectrometry (AMS) facilities have been established. Each facility uses sample preparation and AMS measurement techniques particular to its needs. It is thus desirable to have community-accepted and well-characterized rock standards available for routine processing, using target preparation procedures and AMS measurement methods identical to those carried out for samples of unknown cosmogenic nuclide concentrations. The usefulness of such natural standards is that they allow more rigorous quality control, for example of the long-term reproducibility of results and hence measurement precision, or the testing of new target preparation techniques or newly established laboratories. This is particularly pertinent for in-situ 36Cl studies due to the multiplicity of 36Cl production pathways, which requires a variety of elemental and isotopic determinations in addition to AMS 36Cl assay. We have prepared two natural rock samples (denoted CoCal-N and CoFsp-N) to serve as standard material for in situ-produced cosmogenic 36Cl analysis. The sample CoCal-N is a pure limestone prepared from pebbles in a Namibian lag deposit, while the alkali-feldspar CoFsp-N is derived from a single crystal in a Namibian pegmatite. The sample preparation took place at the University of Cologne, where any impurities were first removed manually from both standards. CoCal-N was leached in 10% HNO3 to remove the outer rim, and afterwards crushed and sieved to 250-500 μm size fractions. CoFsp-N was crushed, sieved to 250-500 μm size fractions and then leached in 1% HNO3 / 1% HF until 20% of the sample was removed. Both standards were thoroughly mixed using a rotating sample splitter before being distributed to other laboratories. To date, a total of 28 CoCal-N aliquots (between 2 and 16 aliquots per facility) and 31 CoFsp-N aliquots (between 2 and 20 aliquots per facility) have been analyzed by six target preparation laboratories employing five different AMS facilities. Currently, the internal reproducibility of the measurements underlines the homogeneity of both standards. The inter-laboratory comparison suggests low over-dispersion. Further measurements are pending and should allow meaningful statistical analysis. Both standard materials are freely available and can be obtained from Tibor Dunai (tdunai@uni-koeln.de).
NASA Astrophysics Data System (ADS)
Tatsumi, Eri; Sugita, Seiji
2018-01-01
Remote sensing observations made by the spacecraft Hayabusa provided the first direct evidence of a rubble-pile asteroid: 25143 Itokawa. Itokawa was found to have a surface structure very different from other explored asteroids; it is covered with coarse pebbles and boulders ranging at least from centimeter to meter size. The cumulative size distribution of small circular depressions on Itokawa, most of which may be of impact origin, has a significantly shallower slope than that on the Moon; small craters are highly depleted on Itokawa compared to the Moon. This deficiency of small circular depressions, and other features such as clustered fragments and pits on boulders, suggest that the boulders on Itokawa might behave like armor, preventing crater formation: the 'armoring effect'. This might contribute to the low number density of small crater candidates. In this study, the reduction in cratering efficiency on coarse-grained targets was investigated based on impact experiments at velocities ranging from ∼70 m/s to ∼6 km/s using two vertical gas gun ranges. We propose a scaling law extended for cratering on coarse-grained targets (i.e., target grain size ≳ projectile size). We have found that the reduction in cratering efficiency is caused by energy dissipation at the collision site where momentum is transferred from the impactor to the first-contact target grain, and that the armoring effect can be classified into three regimes: (1) a gravity-scaled regime, (2) a reduced-size crater regime, or (3) a no-apparent-crater regime, depending on the ratio of the impactor size to the target grain size and the ratio of the impactor kinetic energy to the disruption energy of a target grain. We found that the shallow slope of the circular depressions on Itokawa cannot be accounted for by this new scaling law, suggesting that obliteration processes, such as regolith convection and migration, play a greater role in the depletion of circular depressions on Itokawa. Based on the new extended scaling law, we found that the crater retention age on Itokawa is 3-33 Myr in the main belt, which is in good agreement with the cosmic-ray-exposure ages for samples returned from Itokawa, which may reflect the age of material a few meters beneath the surface. These ages strongly suggest that the global resurfacing that reset the 1-10 m deep surface layer may have occurred in the main belt long after the possible catastrophic disruption of a rigid parent body of Itokawa suggested by the Ar degassing age (∼1.3 Gyr).
Correlates of self worth and body size dissatisfaction among obese Latino youth
Mirza, Nazrat M; Mackey, Eleanor Race; Armstrong, Bridget; Jaramillo, Ana; Palmer, Matilde M
2011-01-01
The current study examined self-worth and body size dissatisfaction, and their association with maternal acculturation among obese Latino youth enrolled in a community-based obesity intervention program. Upon entry to the program, a sample of 113 participants reported global self-worth comparable to general population norms, but lower athletic competence and perception of physical appearance. Interestingly, body size dissatisfaction was more prevalent among younger respondents. Youth body size dissatisfaction was associated with less acculturated mothers and higher maternal dissatisfaction with their child's body size. By contrast, although global self-worth was significantly related to body dissatisfaction, it was not influenced by mothers’ acculturation or dissatisfaction with their own or their child’s body size. Obesity intervention programs targeted to Latino youth need to address self-worth concerns among the youth as well as addressing maternal dissatisfaction with their children’s body size. PMID:21354881
A Bayesian Perspective on the Reproducibility Project: Psychology
Etz, Alexander; Vandekerckhove, Joachim
2016-01-01
We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for an hypothesis but also for the null hypothesis—for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable. PMID:26919473
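For readers unfamiliar with the quantity being computed, the sketch below evaluates the default Jeffreys-Zellner-Siow Bayes factor for a one-sample/paired t statistic following Rouder et al. (2009). The t values and sample sizes shown are invented examples, not results from the Reproducibility Project.

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=np.sqrt(2) / 2):
    """Jeffreys-Zellner-Siow Bayes factor (alternative vs. null) for a
    one-sample or paired t statistic, per Rouder et al. (2009): the
    alternative places a Cauchy(0, r) prior on standardized effect size."""
    nu = n - 1
    null_like = (1 + t**2 / nu) ** (-(nu + 1) / 2)
    def integrand(g):
        # g carries an inverse-gamma(1/2, 1/2) mixing density
        c = 1 + n * g * r**2
        return (c ** -0.5
                * (1 + t**2 / (c * nu)) ** (-(nu + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))
    alt_like, _ = integrate.quad(integrand, 0, np.inf)
    return alt_like / null_like

# A 'significant' original with n = 24 vs. a null-ish replication with n = 98
print(jzs_bf10(t=2.5, n=24))   # modest evidence for the alternative
print(jzs_bf10(t=0.2, n=98))   # BF10 < 1, i.e., evidence favoring the null
```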
An Immunization Strategy for Hidden Populations.
Chen, Saran; Lu, Xin
2017-06-12
Hidden populations, such as injecting drug users (IDUs), sex workers (SWs) and men who have sex with men (MSM), are considered at high risk of contracting and transmitting infectious diseases such as AIDS, gonorrhea, syphilis, etc. However, public health interventions in such groups are hindered by strong privacy concerns and by the lack of global network information that traditional strategies, such as targeted immunization and acquaintance immunization, require. In this study, we introduce an innovative intervention strategy to be used in combination with a sampling approach that is widely used for hidden populations, Respondent-driven Sampling (RDS). The RDS strategy is implemented in two steps. First, RDS is used to estimate the average degree (personal network size) and degree distribution of the target population from sample data. Second, a cut-off threshold is calculated and used to screen the respondents to be immunized. Simulations on model networks and real-world networks reveal that the efficiency of the RDS strategy is close to that of the targeted strategy. As the new strategy can be implemented within the RDS sampling process, it provides a cost-efficient and feasible approach for disease intervention and control in hidden populations.
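The two steps can be sketched as follows, assuming the usual degree-weighted (Volz-Heckathorn) estimator for step one. The cut-off rule shown is one plausible reading of the strategy, with hypothetical parameter choices, not the authors' exact specification.

```python
import numpy as np

def rds_mean_degree(sample_degrees):
    """RDS-style estimate of the population mean degree: because RDS recruits
    proportionally to degree, weight each respondent by 1/degree
    (the Volz-Heckathorn harmonic-mean estimator)."""
    d = np.asarray(sample_degrees, dtype=float)
    return len(d) / np.sum(1.0 / d)

def immunization_cutoff(sample_degrees, top_fraction=0.10):
    """Degree threshold above which recruited respondents are immunized,
    targeting roughly the top `top_fraction` of the population's degree
    distribution as estimated from the degree-weighted sample."""
    d = np.asarray(sample_degrees, dtype=float)
    weights = 1.0 / d                      # undo the degree-biased inclusion
    order = np.argsort(d)
    cum = np.cumsum(weights[order]) / weights.sum()
    return d[order][np.searchsorted(cum, 1 - top_fraction)]

# Hypothetical heavy-tailed degrees reported by an RDS sample
degrees = np.random.default_rng(7).pareto(2.0, 500).astype(int) + 1
print(rds_mean_degree(degrees), immunization_cutoff(degrees))
```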
Random vs. systematic sampling from administrative databases involving human subjects.
Hagino, C; Lo, R J
1998-09-01
Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes (n = 50, 100, 150, 200, 250, 300, 500, 800). From the profile summaries of the four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method chi-square tests and unpaired t tests were performed to determine whether any of the differences [descriptively greater than 7% or 7 yr] were also statistically significant. The strength of agreement between the provincial distributions was quantified by calculating the percent agreement for each (provincial pairwise comparison method). Any percent agreement less than 70% was judged to be unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yield acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient; it can be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
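For concreteness, the sketch below draws SRS and SS samples of several sizes from a simulated membership list and compares sample means with the population value; the frame and variable are fabricated for illustration.

```python
import numpy as np

def simple_random_sample(frame, n, rng):
    """SRS: n members drawn without replacement."""
    return frame[rng.choice(len(frame), size=n, replace=False)]

def systematic_sample(frame, n, rng):
    """SS: every k-th record starting from a random offset in the ordered frame."""
    k = len(frame) // n
    start = rng.integers(k)
    return frame[start::k][:n]

rng = np.random.default_rng(42)
ages = rng.normal(45, 10, size=6000)       # stand-in for a membership database
for n in (50, 200, 800):
    srs = simple_random_sample(ages, n, rng)
    ss = systematic_sample(ages, n, rng)
    print(n, round(srs.mean(), 1), round(ss.mean(), 1), round(ages.mean(), 1))
```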
Vinson, M.R.; Budy, P.
2011-01-01
We compared sources of variability and cost in paired stomach content and stable isotope samples from three salmonid species collected in September 2001–2005 and describe the relative information provided by each method in terms of measuring diet overlap and food web study design. Based on diet analyses, diet overlap among brown trout, rainbow trout, and mountain whitefish was high, and we observed little variation in diets among years. In contrast, for sample sizes n ≥ 25, the 95% confidence intervals (CI) around mean δ15N and δ13C for the three target species did not overlap, and species, year, and fish size effects were significantly different, implying that these species likely consumed similar prey but in different proportions. Stable isotope processing costs were US$12 per sample, while stomach content analysis costs averaged US$25.49 ± $2.91 (95% CI) and ranged from US$1.50 for an empty stomach to US$291.50 for a sample with 2330 items. Precision in both δ15N and δ13C and mean diet overlap values based on stomach contents increased considerably up to a sample size of n = 10 and plateaued around n = 25, with little further increase in precision.
Self-objectification and disordered eating: A meta-analysis.
Schaefer, Lauren M; Thompson, J Kevin
2018-06-01
Objectification theory posits that self-objectification increases risk for disordered eating. The current study sought to examine the relationship between self-objectification and disordered eating using meta-analytic techniques. Data from 53 cross-sectional studies (73 effect sizes) revealed a significant moderate positive overall effect (r = .39), which was moderated by gender, ethnicity, sexual orientation, and measurement of self-objectification. Specifically, larger effect sizes were associated with female samples and the Objectified Body Consciousness Scale. Effect sizes were smaller among heterosexual men and African American samples. Age, body mass index, country of origin, measurement of disordered eating, sample type and publication type were not significant moderators. Overall, results from the first meta-analysis to examine the relationship between self-objectification and disordered eating provide support for one of the major tenets of objectification theory and suggest that self-objectification may be a meaningful target in eating disorder interventions, though further work is needed to establish temporal and causal relationships. Findings highlight current gaps in the literature (e.g., limited representation of males, and ethnic and sexual minorities) with implications for guiding future research. © 2018 Wiley Periodicals, Inc.
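A standard way to pool correlations like these is DerSimonian-Laird random-effects meta-analysis on the Fisher-z scale. The sketch below uses made-up study correlations and sample sizes, not the 73 effect sizes analyzed here.

```python
import numpy as np

def random_effects_meta_r(r_values, n_values):
    """DerSimonian-Laird random-effects pooling of correlations on the
    Fisher-z scale, back-transformed to r; also returns tau^2."""
    r, n = np.asarray(r_values, float), np.asarray(n_values, float)
    z = np.arctanh(r)                      # Fisher transform
    v = 1.0 / (n - 3)                      # within-study variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fixed) ** 2)     # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(z) - 1)) / c)  # between-study variance
    w_star = 1.0 / (v + tau2)
    z_re = np.sum(w_star * z) / np.sum(w_star)
    return np.tanh(z_re), tau2

# Toy set of study correlations and sample sizes
print(random_effects_meta_r([0.45, 0.30, 0.50, 0.25], [120, 80, 60, 200]))
```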
Kondrashova, Olga; Love, Clare J.; Lunke, Sebastian; Hsu, Arthur L.; Waring, Paul M.; Taylor, Graham R.
2015-01-01
Whilst next generation sequencing can report point mutations in fixed-tissue tumour samples reliably, the accurate determination of copy number is more challenging. The conventional Multiplex Ligation-dependent Probe Amplification (MLPA) assay is an effective tool for measurement of gene dosage, but is restricted to around 50 targets due to the size resolution of the MLPA probes. By switching from a size-resolved format to a sequence-resolved format, we developed a scalable, high-throughput, quantitative assay. MLPA-seq is capable of detecting deletions, duplications, and amplifications in as little as 5 ng of genomic DNA, including from formalin-fixed paraffin-embedded (FFPE) tumour samples. We show that this method can detect BRCA1, BRCA2, ERBB2 and CCNE1 copy number changes in DNA extracted from snap-frozen and FFPE tumour tissue, with 100% sensitivity and >99.5% specificity. PMID:26569395
van Lieshout, Jan; Grol, Richard; Campbell, Stephen; Falcoff, Hector; Capell, Eva Frigola; Glehr, Mathias; Goldfracht, Margalit; Kumpusalo, Esko; Künzi, Beat; Ludt, Sabine; Petek, Davorina; Vanderstighelen, Veerle; Wensing, Michel
2012-10-05
Primary care has an important role in cardiovascular risk management (CVRM), and a minimum scale of primary care practice may be needed for efficient delivery of CVRM. We examined CVRM in patients with coronary heart disease (CHD) in primary care and explored the impact of practice size. In an observational study in 8 countries, we sampled CHD patients in primary care practices and collected data from electronic patient records. Practice samples were stratified according to practice size and urbanisation; patients were selected using coded diagnoses when available. CVRM was measured on the basis of internationally validated quality indicators. In the analyses, practice size was defined in terms of the number of patients registered with or visiting the practice. We performed multilevel regression analyses controlling for patient age and sex. We included 181 practices (63% of the number targeted). Two countries included a convenience sample of practices. Data from 2960 CHD patients were available. Some countries used methods supplemental to coded diagnoses or other inclusion methods, introducing potential inclusion bias. We found substantial variation on all CVRM indicators across practices and countries. We computed aggregated practice scores as the percentage of patients with a positive outcome. Rates of risk factor recording varied from 55% for physical activity as the mean practice score across all practices (sd 32%) to 94% (sd 10%) for blood pressure. Rates of reaching treatment targets for systolic blood pressure, diastolic blood pressure and LDL cholesterol were 46% (sd 21%), 86% (sd 12%) and 48% (sd 22%), respectively. Rates of providing recommended cholesterol-lowering and antiplatelet drugs were around 80%, and 70% of patients received influenza vaccination. Practice size was not associated with indicator scores, with one exception: in Slovenia, larger practices performed better. Variation was related more to differences between practices than between countries. CVRM measured by quality indicators showed wide variation within and between countries, leaving room for improvement in all countries involved. Few associations of performance scores with practice size were found.
NASA Astrophysics Data System (ADS)
Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu
2016-05-01
The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapping blocks, and each block is weighted by the local image complexity and target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and provides the corresponding target existence probabilities for detection. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, because no special assumption is made about the size and shape of small targets. Because the decomposition is exact, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter and detect and track size-varying targets in infrared images.
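The decomposition step is in the spirit of robust PCA. A minimal inexact-ALM sketch on synthetic data is shown below; it omits the paper's block weighting by local complexity and target-existence probabilities, and the parameter defaults are the common generic choices, not the paper's.

```python
import numpy as np

def shrink(x, tau):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca_ialm(D, lam=None, mu=None, rho=1.5, n_iter=100):
    """Inexact augmented Lagrange multipliers for robust PCA:
    decompose D into a low-rank background L plus sparse targets S."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    if mu is None:
        mu = 1.25 / np.linalg.norm(D, 2)
    Y = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # Singular-value thresholding step for the low-rank part
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1.0 / mu)) @ Vt
        # Soft-thresholding step for the sparse part
        S = shrink(D - L + Y / mu, lam / mu)
        Y += mu * (D - L - S)
        mu *= rho
        if np.linalg.norm(D - L - S) <= 1e-7 * np.linalg.norm(D):
            break
    return L, S

# Synthetic frame: rank-1 background plus a few bright "targets"
rng = np.random.default_rng(0)
background = np.outer(rng.random(40), rng.random(40))
targets = np.zeros((40, 40))
targets[rng.integers(0, 40, 5), rng.integers(0, 40, 5)] = 2.0
L, S = rpca_ialm(background + targets)
print(np.count_nonzero(np.abs(S) > 0.5))   # ≈ number of injected targets
```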
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aznar, Alexandra; Day, Megan; Doris, Elizabeth
The report analyzes and presents information learned from a sample of 20 cities across the United States, from New York City to Park City, Utah, spanning a diverse range of population sizes, utility types, regions, annual greenhouse gas reduction targets, vehicle use, and median household incomes. The report compares climate, sustainability, and energy plans to better understand where cities are taking energy-related actions and how they are measuring impacts. Some common energy-related goals focus on reducing city-wide carbon emissions, improving energy efficiency across sectors, increasing renewable energy, and increasing biking and walking.
Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay
2011-01-01
Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for reporting analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes.
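To make the sample size logic concrete: on a log scale, a p% relative treatment difference becomes an additive shift of |log(1 − p)|, so the usual two-sample formula applies directly. The sketch below uses an illustrative SD of log-transformed AUC, not the TrialNet estimate.

```python
import numpy as np
from scipy import stats

def n_per_group(pct_reduction, sd_log, power=0.85, alpha=0.05):
    """Two-sample n per arm to detect a relative (percentage) treatment
    difference on the natural-log scale: a p% reduction corresponds to a
    mean shift of |log(1 - p)| in log-transformed C-peptide AUC."""
    delta = abs(np.log(1 - pct_reduction))
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    return int(np.ceil(2 * (za + zb) ** 2 * sd_log**2 / delta**2))

# Illustrative: detect a 50% difference when SD(log AUC) is assumed to be 0.8
print(n_per_group(0.50, sd_log=0.8))
```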
Daaboul, George G; Lopez, Carlos A; Chinnala, Jyothsna; Goldberg, Bennett B; Connor, John H; Ünlü, M Selim
2014-06-24
Rapid, sensitive, and direct label-free capture and characterization of nanoparticles from complex media such as blood or serum will broadly impact medicine and the life sciences. We demonstrate identification of virus particles in complex samples for replication-competent wild-type vesicular stomatitis virus (VSV), defective VSV, and Ebola- and Marburg-pseudotyped VSV with high sensitivity and specificity. Size discrimination of the imaged nanoparticles (virions) allows differentiation between modified viruses having different genome lengths and facilitates a reduction in the counting of nonspecifically bound particles to achieve a limit of detection (LOD) of 5 × 10^3 pfu/mL for the Ebola and Marburg VSV pseudotypes. We demonstrate the simultaneous detection of multiple viruses in a single sample (composed of serum or whole blood) for screening applications and uncompromised detection capabilities in samples contaminated with high levels of bacteria. By employing affinity-based capture, size discrimination, and a "digital" detection scheme to count single virus particles, we show that a robust and sensitive virus/nanoparticle sensing assay can be established for targets in complex samples. The nanoparticle microscopy system is termed the Single Particle Interferometric Reflectance Imaging Sensor (SP-IRIS) and is capable of high-throughput and rapid sizing of large numbers of biological nanoparticles on an antibody microarray for research and diagnostic applications.
Wareham, K J; Hyde, R M; Grindlay, D; Brennan, M L; Dean, R S
2017-10-04
Randomised controlled trials (RCTs) are a key component of the veterinary evidence base. Sample sizes and defined outcome measures are crucial components of RCTs. The objective was to describe the sample size and number of outcome measures of veterinary RCTs, either funded by the pharmaceutical industry or not, published in 2011. A structured search of PubMed identified RCTs examining the efficacy of pharmaceutical interventions. The number of outcome measures, the number of animals enrolled per trial, whether a primary outcome was identified, and the presence of a sample size calculation were extracted from the RCTs. The source of funding was identified for each trial and groups were compared on the above parameters. Literature searches returned 972 papers; 86 papers comprising 126 individual trials were analysed. The median number of outcomes per trial was 5.0; there were no significant differences across funding groups (p = 0.133). The median number of animals enrolled per trial was 30.0; this was similar across funding groups (p = 0.302). A primary outcome was identified in 40.5% of trials and was significantly more likely to be stated in trials funded by a pharmaceutical company. A very low percentage of trials reported a sample size calculation (14.3%). Failure to report primary outcomes, failure to justify sample sizes, and the reporting of multiple outcome measures were common features in all of the clinical trials examined in this study. It is possible some of these factors may be affected by the source of funding of the studies, but the influence of funding needs to be explored with a larger number of trials. Some veterinary RCTs provide a weak evidence base and targeted strategies are required to improve the quality of veterinary RCTs to ensure there is reliable evidence on which to base clinical decisions.
Hanson, E; Ingold, S; Haas, C; Ballantyne, J
2018-05-01
The recovery of a DNA profile from the perpetrator or victim in criminal investigations can provide valuable 'source level' information for investigators. However, a DNA profile does not reveal the circumstances by which biological material was transferred. Some contextual information can be obtained by determining the tissue or fluid source of origin of the biological material, as it is potentially indicative of some behavioral activity on behalf of the individual that resulted in its transfer from the body. Here, we sought to improve upon established RNA-based methods for body fluid identification by developing a targeted multiplexed next generation mRNA sequencing assay comprising a panel of approximately equal-sized gene amplicons. The multiplexed biomarker panel includes several highly specific gene targets with the necessary specificity to definitively identify most forensically relevant biological fluids and tissues (blood, semen, saliva, vaginal secretions, menstrual blood and skin). In developing the biomarker panel we evaluated 66 gene targets, with a progressive iteration of testing target combinations that exhibited optimal sensitivity and specificity using a training set of forensically relevant body fluid samples. The current assay comprises 33 targets: 6 blood, 6 semen, 6 saliva, 4 vaginal secretions, 5 menstrual blood and 6 skin markers. We demonstrate the sensitivity and specificity of the assay and the ability to identify body fluids in single source and admixed stains. A 16-sample blind test was carried out by one lab with samples provided by the other participating lab. The blinded lab correctly identified the body fluids present in 15 of the samples, with the major component identified in the 16th. Various classification methods are being investigated to permit inference of the body fluid/tissue in dried physiological stains. These include the percentage of reads in a sample that are due to each of the 6 tissues/body fluids tested and inter-sample differential gene expression revealed by agglomerative hierarchical clustering. Copyright © 2018 Elsevier B.V. All rights reserved.
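One of the classification ideas mentioned, scoring the percentage of reads attributable to each fluid's marker set, can be sketched directly; the counts below are invented and the per-fluid aggregation is assumed:

```python
def read_percentages(counts_by_fluid):
    """Percentage of a sample's sequencing reads mapping to the marker
    set of each body fluid/tissue."""
    total = sum(counts_by_fluid.values())
    return {fluid: 100.0 * c / total for fluid, c in counts_by_fluid.items()}

# Toy single-source stain; real data would aggregate reads over the
# 33-target panel (6 blood, 6 semen, 6 saliva, 4 vaginal secretions,
# 5 menstrual blood, 6 skin markers).
counts = {"blood": 120, "semen": 40, "saliva": 9300,
          "vaginal secretions": 250, "menstrual blood": 80, "skin": 210}
pct = read_percentages(counts)
print(max(pct, key=pct.get), pct)
```

For admixed stains, one would report all fluids above a chosen read-percentage threshold rather than a single highest-scoring fluid.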
Emerging Answers: Research Findings on Programs To Reduce Teen Pregnancy.
ERIC Educational Resources Information Center
Kirby, Douglas
This report summarizes three bodies of research on teenage pregnancy and programs to reduce the risk of teenage pregnancy. Studies included in this report were completed in 1980 or later, conducted in the United States or Canada, targeted adolescents, employed an experimental or quasi-experimental design, had a sample size of at least 100 in the…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-07
... Clearance for Survey Research Studies. Revision to burden hours may be needed due to changes in the size of the target population, sampling design, and/or questionnaire length. DATES: Comments on this notice... Survey Research Studies. OMB Control Number: 0535-0248. Type of Request: To revise and extend a currently...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-19
... Clearance for Survey Research Studies. Revision to burden hours will be needed due to changes in the size of the target population, sampling design, and/or questionnaire length. DATES: Comments on this notice... Survey Research Studies. OMB Control Number: 0535-0248. Type of Request: To revise and extend a currently...
High-volume manufacturing device overlay process control
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Han, Sangjun; Woo, Jaeson; Lee, DongYoung; Song, ChangRock; Heo, Hoyoung; Brinster, Irina; Choi, DongSub; Robinson, John C.
2017-03-01
Overlay control based on DI metrology of optical targets has been the primary basis for run-to-run process control for many years. In previous work we described a scenario where optical overlay metrology is performed on metrology targets on a high-frequency basis, including every lot (or most lots) at DI. SEM-based FI metrology is performed on-device, in-die, as-etched, on an infrequent basis. Hybrid control schemes of this type have been in use for many process nodes. What is new is the relative size of the NZO as compared to the overlay spec, and the need to find more comprehensive solutions to characterize and control the size and variability of NZO at the 1x nm node: sampling, modeling, temporal frequency and control aspects, as well as trade-offs between SEM throughput and accuracy.
Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs
NASA Astrophysics Data System (ADS)
Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.
2016-07-01
Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time- and labour-intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced, etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
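The image-versus-point trade-off can be explored with a toy simulation; the Gaussian model for between-image variation in local cover is an assumption made for illustration, not the simulation design used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def cover_estimates(true_cover, n_images, n_points, image_sd=0.05, n_sims=2000):
    """Distribution of percent-cover estimates when n_images are scored
    with n_points random points each; local cover varies between images."""
    est = np.empty(n_sims)
    for s in range(n_sims):
        local = np.clip(rng.normal(true_cover, image_sd, n_images), 0, 1)
        hits = rng.binomial(n_points, local)        # points hitting biota
        est[s] = hits.sum() / (n_images * n_points)
    return est

for n_img, n_pts in [(10, 50), (50, 10)]:           # equal total effort
    e = cover_estimates(0.2, n_img, n_pts)
    print(n_img, "images x", n_pts, "points:",
          round(e.mean(), 3), "+/-", round(e.std(), 4))
```

When between-image variance dominates, the 50-image design gives a tighter estimate for the same total number of points, matching the finding that effort is better spent on more images.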
Study of vesicle size distribution dependence on pH value based on nanopore resistive pulse method
NASA Astrophysics Data System (ADS)
Lin, Yuqing; Rudzevich, Yauheni; Wearne, Adam; Lumpkin, Daniel; Morales, Joselyn; Nemec, Kathleen; Tatulian, Suren; Lupan, Oleg; Chow, Lee
2013-03-01
Vesicles are low-micron to sub-micron spheres formed by a lipid bilayer shell and serve as potential vehicles for drug delivery. Vesicle size is proposed to be one of the key variables affecting delivery efficiency, since size correlates with factors such as circulation and residence time in blood, the rate of cell endocytosis, and efficiency in cell targeting. In this work, we demonstrate accessible and reliable detection and size distribution measurement employing a glass nanopore device based on the resistive pulse method. This novel method enables us to investigate the dependence of the vesicle size distribution on the pH difference across the membrane, with very small sample volume and rapid speed. This provides useful information for optimizing the efficiency of drug delivery in a pH-sensitive environment.
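For pulse-to-size conversion, resistive pulse data are commonly inverted with the small-particle Maxwell approximation; this sketch assumes a cylindrical pore and invented pulse amplitudes, and is not the calibration used by the authors:

```python
import numpy as np

def particle_diameter(delta_R, rho, D):
    """Invert dR ~ 4*rho*d^3 / (pi*D^4) (small sphere in a cylindrical
    pore) to estimate particle diameter d from each pulse.

    delta_R : pulse resistance changes (ohm)
    rho     : electrolyte resistivity (ohm*m)
    D       : pore diameter (m)
    """
    return (np.asarray(delta_R) * np.pi * D**4 / (4.0 * rho)) ** (1.0 / 3.0)

# Invented values: 1 ohm*m electrolyte, 2 um pore, two observed pulses.
print(particle_diameter([5e4, 8e4], rho=1.0, D=2e-6) * 1e9, "nm")
```

A histogram of the resulting diameters over many pulses gives the vesicle size distribution to be compared across pH conditions.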
Joint inversion of NMR and SIP data to estimate pore size distribution of geomaterials
NASA Astrophysics Data System (ADS)
Niu, Qifei; Zhang, Chi
2018-03-01
There is growing interest in using geophysical tools to characterize the microstructure of geomaterials because of their non-invasive nature and applicability in the field. In these applications, multiple types of geophysical data sets are usually processed separately, which may be inadequate to constrain the key features of target variables. Therefore, simultaneous processing of multiple data sets could potentially improve the resolution. In this study, we propose a method to estimate pore size distribution by joint inversion of nuclear magnetic resonance (NMR) T2 relaxation and spectral induced polarization (SIP) spectra. The petrophysical relation between NMR T2 relaxation time and SIP relaxation time is incorporated in a nonlinear least squares problem formulation, which is solved using the Gauss-Newton method. The joint inversion scheme is applied to a synthetic sample and a Berea sandstone sample. The jointly estimated pore size distributions are very close to the true model and to results from other experimental methods. Even when knowledge of the petrophysical models of the sample is incomplete, the joint inversion can still capture the main features of the pore size distribution, including the general shape and relative peak positions of the distribution curves. It is also found from the numerical example that the surface relaxivity of the sample can be extracted with the joint inversion of NMR and SIP data if the diffusion coefficient of the ions in the electrical double layer is known. Compared to individual inversions, the joint inversion improves the resolution of the estimated pore size distribution because of the addition of extra data sets. The proposed approach might constitute a first step towards a comprehensive joint inversion that can extract the full pore geometry information of a geomaterial from NMR and SIP data.
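The inversion machinery can be sketched generically. The forward models linking the pore-size vector to NMR T2 and SIP spectra are left abstract here, since the actual petrophysical relations are the substance of the paper; the toy usage at the end is purely illustrative:

```python
import numpy as np

def gauss_newton_joint(m0, forwards, data, weights, n_iter=20, eps=1e-6):
    """Gauss-Newton for jointly fitting several data sets that share one
    model vector m (e.g., a discretized pore-size distribution).

    forwards : list of functions m -> predicted data vector
    data     : list of observed vectors (e.g., NMR T2 and SIP spectra)
    weights  : relative weight of each data set
    """
    m = np.asarray(m0, float)
    for _ in range(n_iter):
        res, jac = [], []
        for f, d, w in zip(forwards, data, weights):
            r = w * (f(m) - d)
            # finite-difference Jacobian of this data set w.r.t. m
            J = np.array([(f(m + eps * e) - f(m)) / eps
                          for e in np.eye(len(m))]).T * w
            res.append(r); jac.append(J)
        step = np.linalg.lstsq(np.vstack(jac), np.concatenate(res),
                               rcond=None)[0]
        m = m - step
    return m

# Toy usage: recover m from two noise-free linear 'spectra'.
A1, A2 = np.array([[1.0, 2.0], [0.5, 1.0]]), np.array([[2.0, 0.5]])
truth = np.array([1.0, 3.0])
print(gauss_newton_joint(np.zeros(2),
                         [lambda m: A1 @ m, lambda m: A2 @ m],
                         [A1 @ truth, A2 @ truth], [1.0, 1.0]))
```

Stacking both residual vectors into one least-squares step is what lets the extra data set tighten the estimate relative to inverting either spectrum alone.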
An Archer's Perceived Form Scales the "Hitableness" of Archery Targets
ERIC Educational Resources Information Center
Lee, Yang; Lee, Sih; Carello, Claudia; Turvey, M. T.
2012-01-01
For skills that involve hitting a target, subsequent judgments of target size correlate with prior success in hitting that target. We used an archery context to examine the judgment-success relationship with varied target sizes in the absence of explicit knowledge of results. Competitive archers shot at targets 50 m away that varied in size among…
Warner, David M.; Claramunt, Randall M.; Schaeffer, Jeffrey S.; Yule, Daniel L.; Hrabik, Tom R.; Peintka, Bernie; Rudstam, Lars G.; Holuszko, Jeffrey D.; O'Brien, Timothy P.
2012-01-01
Because it is not possible to identify species with echosounders alone, trawling is widely used as a method for collecting species and size composition data for allocating acoustic fish density estimates to species or size groups. In the Laurentian Great Lakes, data from midwater trawls are commonly used for such allocations. However, there are no rules for how much midwater trawling effort is required to adequately describe species and size composition of the pelagic fish communities in these lakes, so the balance between acoustic sampling effort and trawling effort has been unguided. We used midwater trawl data collected between 1986 and 2008 in lakes Michigan and Huron and a variety of analytical techniques to develop guidance for appropriate levels of trawl effort. We used multivariate regression trees and re-sampling techniques to (i) identify factors that influence species and size composition of the pelagic fish communities in these lakes, (ii) identify stratification schemes for the two lakes, (iii) determine if there was a relationship between uncertainty in catch composition and the number of tows made, and (iv) predict the number of tows required to reach desired uncertainty targets. We found that depth occupied by fish below the surface was the most influential explanatory variable. Catch composition varied between lakes at depths <38.5 m below the surface, but not at depths ≥38.5 m below the surface. Year, latitude, and bottom depth influenced catch composition in the near-surface waters of Lake Michigan, while only year was important for Lake Huron surface waters. There was an inverse relationship between RSE [relative standard error = 100 × (SE/mean)] and the number of tows made for the proportions of the different size and species groups. We found that for the fifth (Lake Huron) and sixth (Lake Michigan) largest lakes in the world, 15-35 tows were adequate to achieve target RSEs (15% and 30%) for ubiquitous species, but rarer species required much higher, and at times impractical, effort levels to reach these targets.
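The uncertainty-versus-effort relationship lends itself to a simple bootstrap; the per-tow proportions below are simulated stand-ins, not Great Lakes data:

```python
import numpy as np

rng = np.random.default_rng(1)

def rse_vs_effort(tow_props, n_tows_list, n_boot=5000):
    """Bootstrap the RSE [100*(SE/mean)] of a species' mean catch
    proportion as a function of the number of tows resampled."""
    out = {}
    for n in n_tows_list:
        means = rng.choice(tow_props, size=(n_boot, n)).mean(axis=1)
        out[n] = round(100.0 * means.std() / means.mean(), 1)
    return out

props = rng.beta(4, 6, size=60)    # invented proportions, common species
print(rse_vs_effort(props, [5, 15, 35]))
```

Rare species have per-tow proportions near zero with occasional spikes, which keeps the bootstrap RSE high even at impractically large tow counts, as the study observes.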
[The tactic of targeting the parietal pleura for controlling malignant pleural effusion].
Ohta, Yasuhiko
2008-01-01
Based on the hypothesis that the most effective target area for controlling malignant pleural effusion is the parietal pleura, the author has selectively carried out multimodality treatment consisting of limited operations combined with parietal pleurectomy (PL), followed by paclitaxel administered by 24-hour intrathoracic infusion and systemic chemotherapy. Seven patients with carcinomatous pleuritis were enrolled in the study. During a median follow-up period of 22 months, malignant effusion was controlled successfully in all patients. Although the imbalance in assessment and the small sample size render the results inconclusive, the interim results presented here suggest that the tactic of targeting the parietal pleura with PL warrants further study in a less-invasive manner.
Rotary target method to prepare thin films of CdS/SiO2 by pulsed laser deposition
NASA Astrophysics Data System (ADS)
Wang, H.; Zhu, Y.; Ong, P. P.
2000-12-01
Thin films of CdS-doped SiO2 glass were prepared by using the conventional pulsed laser deposition (PLD) technique. The laser target consisted of a specially constructed rotary wheel which provided easy control of the exposure-area ratio to expose alternately the two materials to the laser beam. The physical target assembly avoided the potential complications inherent in chemically mixed targets such as in the sol-gel method. Time-of-flight (TOF) spectra confirmed the existence of the SiO2 and CdS components in the thin-film samples so produced. X-ray diffraction (XRD) and atomic force microscopy (AFM) results showed the different sizes and structures of the as-deposited and annealed films. The wurtzite phase of CdS was found in the 600 °C-annealed sample, while the as-deposited film showed a cubic-hexagonal mixed structure. In the corresponding PL (photoluminescence) spectra, a red shift of the CdS band edge emission was found, which may be a result of the interaction between the CdS nanocrystallites and SiO2 at their interface.
Validation of PCR methods for quantitation of genetically modified plants in food.
Hübner, P; Waiblinger, H U; Pietsch, K; Brodmann, P
2001-01-01
For enforcement of the recently introduced labeling threshold for genetically modified organisms (GMOs) in food ingredients, quantitative detection methods such as quantitative competitive PCR (QC-PCR) and real-time PCR are applied by official food control laboratories. The experiences of 3 European food control laboratories in validating such methods were compared to describe realistic performance characteristics of quantitative PCR detection methods. The limit of quantitation (LOQ) of GMO-specific, real-time PCR was experimentally determined to reach 30-50 target molecules, which is close to theoretical prediction. Starting PCR with 200 ng genomic plant DNA, the LOQ depends primarily on the genome size of the target plant and ranges from 0.02% for rice to 0.7% for wheat. The precision of quantitative PCR detection methods, expressed as relative standard deviation (RSD), varied from 10 to 30%. Using test samples containing Bt176 corn and applying Bt176-specific QC-PCR, mean values deviated from true values by −7 to 18%, with an average of 2 ± 10%. Ruggedness of real-time PCR detection methods was assessed in an interlaboratory study analyzing commercial, homogeneous food samples. Roundup Ready soybean DNA contents were determined in the range of 0.3 to 36%, relative to soybean DNA, with RSDs of about 25%. Taking the precision of quantitative PCR detection methods into account, suitable sample plans and sample sizes for GMO analysis are suggested. Because quantitative GMO detection methods measure GMO contents of samples in relation to reference material (calibrants), high priority must be given to international agreements and standardization on certified reference materials.
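The genome-size dependence of the LOQ follows from simple copy-number arithmetic; the sketch below uses approximate 1C genome sizes and a nominal 40-copy PCR floor, so the outputs are indicative rather than the validated figures:

```python
AVG_BP_MASS_G = 650.0 / 6.022e23        # ~1.08e-21 g per base pair

def loq_percent(genome_size_bp, dna_mass_ng=200.0, loq_copies=40):
    """LOQ in % GMO: the ~30-50-copy detection floor divided by the
    number of genome copies present in the input DNA."""
    copies = (dna_mass_ng * 1e-9) / (genome_size_bp * AVG_BP_MASS_G)
    return 100.0 * loq_copies / copies

# Approximate haploid genome sizes (assumed values, in bp):
for plant, size_bp in [("rice", 4.3e8), ("maize", 2.4e9), ("wheat", 1.6e10)]:
    print(plant, round(loq_percent(size_bp), 3), "%")
```

Large-genome crops such as wheat put fewer genome copies into 200 ng of DNA, so the same absolute copy-number floor translates into a higher percentage LOQ, consistent with the 0.02% (rice) to 0.7% (wheat) range reported.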
A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes
Bundy, Brian; Krischer, Jeffrey P.
2016-01-01
The C-peptide area under the curve (AUC) following a 2-hour mixed meal tolerance test, from 481 individuals enrolled in 5 prior TrialNet studies of recent onset type 1 diabetes and observed from baseline to 12 months after enrollment, was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in Observed vs. Expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
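The variance-reduction logic behind the roughly 50% saving can be illustrated with the standard ANCOVA adjustment n_adj = n_unadj × (1 − R²); the numbers below are placeholders, not the fitted TrialNet parameters:

```python
import math
from scipy.stats import norm

def n_per_arm(sd, delta, alpha=0.05, power=0.90, r2=0.0):
    """Two-sample n per arm; covariates explaining a fraction r2 of the
    outcome variance shrink the residual variance by (1 - r2)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z * sd / delta) ** 2 * (1.0 - r2))

print(n_per_arm(sd=0.4, delta=0.2))           # unadjusted
print(n_per_arm(sd=0.4, delta=0.2, r2=0.5))   # age + baseline C-peptide
```

If age at diagnosis and baseline C-peptide jointly explained about half the outcome variance, the target sample size would roughly halve, in line with the reduction reported.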
Visual detection following retinal damage: predictions of an inhomogeneous retino-cortical model
NASA Astrophysics Data System (ADS)
Arnow, Thomas L.; Geisler, Wilson S.
1996-04-01
A model of human visual detection performance has been developed, based on available anatomical and physiological data for the primate visual system. The inhomogeneous retino-cortical (IRC) model computes detection thresholds by comparing simulated neural responses to target patterns with responses to a uniform background of the same luminance. The model incorporates human ganglion cell sampling distributions; macaque monkey ganglion cell receptive field properties; macaque cortical cell contrast nonlinearities; and an optimal decision rule based on ideal observer theory. Spatial receptive field properties of cortical neurons were not included. Two parameters were allowed to vary while minimizing the squared error between predicted and observed thresholds. One parameter was decision efficiency; the other was the relative strength of the ganglion-cell center and surround. The latter was only allowed to vary within a small range consistent with known physiology. Contrast sensitivity was measured for sinewave gratings as a function of spatial frequency, target size and eccentricity. Contrast sensitivity was also measured for an airplane target as a function of target size, with and without artificial scotomas. The results of these experiments, as well as contrast sensitivity data from the literature, were compared to predictions of the IRC model. Predictions were reasonably good for grating and airplane targets.
Yang, Liu; Wang, Zhihua; Deng, Yuliang; Li, Yan; Wei, Wei; Shi, Qihui
2016-11-15
Circulating tumor cells (CTCs) shed from tumor sites and represent the molecular characteristics of the tumor. Besides genetic and transcriptional characterization, it is important to profile a panel of proteins with single-cell precision for resolving CTCs' phenotype, organ-of-origin, and drug targets. We describe a new technology that enables profiling multiple protein markers of extraordinarily rare tumor cells at the single-cell level. This technology integrates a microchip consisting of 15,000 60-pL microwells and a novel beads-on-barcode antibody microarray (BOBarray). The BOBarray allows for multiplexed protein detection by assigning two independent identifiers (bead size and fluorescent color) of the beads to each protein. Four bead sizes (1.75, 3, 4.5, and 6 μm) and three colors (blue, green, and yellow) are utilized to encode up to 12 different proteins. The miniaturized BOBarray can fit an array of 60-pL microwells that isolate single cells for cell lysis and the subsequent detection of protein markers. An enclosed 60-pL microchamber yields a high concentration of proteins released from lysed single cells, leading to single-cell resolution of protein detection. The protein markers assayed in this study include organ-specific markers and drug targets that help to characterize the organ-of-origin and drug targets of isolated rare tumor cells from blood samples. This new approach enables handling a very small number of cells and achieves single-cell, multiplexed protein detection without loss of rare but clinically important tumor cells.
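The two-identifier barcode is combinatorial at heart; a minimal sketch of encoding and decoding, with hypothetical protein assignments:

```python
from itertools import product

BEAD_SIZES_UM = [1.75, 3.0, 4.5, 6.0]
COLORS = ["blue", "green", "yellow"]

# 4 sizes x 3 colors = 12 distinct (size, color) barcodes, matching the
# up-to-12-protein capacity of the BOBarray.
barcodes = list(product(BEAD_SIZES_UM, COLORS))
assert len(barcodes) == 12

def decode(size_um, color, panel):
    """Map an observed bead back to its protein; `panel` is a
    hypothetical assignment of proteins to barcodes."""
    return panel.get((size_um, color), "unknown")

panel = {(1.75, "blue"): "marker_A", (3.0, "green"): "marker_B"}  # invented
print(decode(3.0, "green", panel))
```

In practice each microwell image would be segmented into beads, each bead sized and color-classified, and the fluorescence on that bead read out as the abundance of its assigned protein.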
Piot, Bram; Navin, Deepa; Krishnan, Nattu; Bhardwaj, Ashish; Sharma, Vivek; Marjara, Pritpal
2010-01-01
Objectives: This study reports on the results of a large-scale targeted condom social marketing campaign in and around areas where female sex workers are present. The paper also describes the method that was used for the routine monitoring of condom availability in these sites. Methods: The lot quality assurance sampling (LQAS) method was used for the assessment of the geographical coverage and quality of coverage of condoms in target areas in four states and along selected national highways in India, as part of Avahan, the India AIDS initiative. Results: A significant general increase in condom availability was observed in the intervention area between 2005 and 2008. High coverage rates were gradually achieved through an extensive network of pharmacies and particularly of non-traditional outlets, whereas traditional outlets were instrumental in providing large volumes of condoms. Conclusion: LQAS is seen as a valuable tool for the routine monitoring of the geographical coverage and of the quality of delivery systems of condoms and of health products and services in general. With a relatively small sample size, easy data collection procedures and simple analytical methods, it was possible to inform decision-makers regularly on progress towards coverage targets. PMID:20167732
Piot, Bram; Mukherjee, Amajit; Navin, Deepa; Krishnan, Nattu; Bhardwaj, Ashish; Sharma, Vivek; Marjara, Pritpal
2010-02-01
This study reports on the results of a large-scale targeted condom social marketing campaign in and around areas where female sex workers are present. The paper also describes the method that was used for the routine monitoring of condom availability in these sites. The lot quality assurance sampling (LQAS) method was used for the assessment of the geographical coverage and quality of coverage of condoms in target areas in four states and along selected national highways in India, as part of Avahan, the India AIDS initiative. A significant general increase in condom availability was observed in the intervention area between 2005 and 2008. High coverage rates were gradually achieved through an extensive network of pharmacies and particularly of non-traditional outlets, whereas traditional outlets were instrumental in providing large volumes of condoms. LQAS is seen as a valuable tool for the routine monitoring of the geographical coverage and of the quality of delivery systems of condoms and of health products and services in general. With a relatively small sample size, easy data collection procedures and simple analytical methods, it was possible to inform decision-makers regularly on progress towards coverage targets.
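LQAS decisions rest on a binomial rule; this sketch uses the common 19/13 design with illustrative coverage thresholds, which may differ from those used in the Avahan monitoring:

```python
from scipy.stats import binom

def lqas_errors(n, d, p_hi=0.8, p_lo=0.5):
    """Error rates of the rule 'lot passes if at least d of n sampled
    outlets stock condoms'.

    p_hi : coverage that should be classified as adequate
    p_lo : coverage that should be classified as inadequate
    """
    alpha = binom.cdf(d - 1, n, p_hi)        # P(fail | true coverage p_hi)
    beta = 1.0 - binom.cdf(d - 1, n, p_lo)   # P(pass | true coverage p_lo)
    return alpha, beta

print(lqas_errors(19, 13))   # both error rates come out near 10%
```

The appeal noted above follows directly: a lot verdict needs only n = 19 observations, and aggregating verdicts across lots tracks progress toward coverage targets.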
Two months of disdrometer data in the Paris area
NASA Astrophysics Data System (ADS)
Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel
2018-05-01
The Hydrology, Meteorology, and Complexity laboratory of École des Ponts ParisTech (hmco.enpc.fr) has made available a data set of optical disdrometer measurements that comes from a campaign involving three collocated devices from two different manufacturers, relying on different underlying technologies (one Campbell Scientific PWS100 and two OTT Parsivel2 instruments). The campaign took place in January-February 2016 in the Paris area (France). Disdrometers provide access to information on the size and velocity of drops falling through the sampling area of the devices, which is roughly a few tens of cm². This enables the drop size distribution to be estimated and, further, rainfall microphysics, kinetic energy, or radar quantities, for example, to be studied. Raw data, i.e. a matrix of drop counts by size and velocity class, along with more aggregated products, such as the rain rate or filtered drop size distribution, are available. Link to the data set: https://zenodo.org/record/1240168 (DOI: https://doi.org/10.5281/zenodo.1240168).
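As an example of what the raw size-velocity matrix supports, rain rate can be integrated from the drop counts; the class centers, counts, and 54 cm² sampling area below are indicative values (the actual area and class edges come from the instrument documentation):

```python
import numpy as np

def rain_rate(counts, diam_mm, area_m2=54e-4, dt_s=60.0):
    """Rain rate (mm/h) from a disdrometer count spectrum: sum drop
    volumes, divide by sampling area, convert to depth per hour."""
    vol_mm3 = (np.pi / 6.0) * np.asarray(diam_mm) ** 3 * np.asarray(counts)
    depth_mm = vol_mm3.sum() / (area_m2 * 1e6)   # area m^2 -> mm^2
    return depth_mm * 3600.0 / dt_s

d = np.array([0.5, 1.0, 2.0, 3.0])               # toy class centers (mm)
n = np.array([400, 150, 20, 2])                  # toy counts per minute
print(round(rain_rate(n, d), 2), "mm/h")
```

The same matrix, normalized by fall speed and class width, yields the drop size distribution N(D) used for radar and kinetic-energy quantities.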
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popple, R; Wu, X; Kraus, J
2016-06-15
Purpose: Patient specific quality assurance of stereotactic radiosurgery (SRS) plans is challenging because of small target sizes and high dose gradients. We compared three detectors for dosimetry of VMAT SRS plans. Methods: The dose at the center of seventeen targets was measured using a synthetic diamond detector (2.2 mm diameter, 1 µm thickness), a 0.007 cm³ ionization chamber, and radiochromic film. Measurements were made in a PMMA phantom in the clinical geometry: all gantry and table angles were delivered as planned. The diamond and chamber positions were offset by 1 cm from the film plane, so the isocenter was shifted accordingly to place the center of the target at the detector of interest. To ensure accurate detector placement, the phantom was positioned using kV images. To account for the shift-induced difference in geometry and differing prescription doses between plans, the measurements were normalized to the expected dose calculated by the treatment planning system. Results: The target sizes ranged from 2.8 mm to 34.8 mm (median 14.8 mm). The mean measurement-to-plan ratios were 1.054, 1.076, and 1.023 for RCF, diamond, and chamber, respectively. The mean difference between the chamber and film was −3.2% and between diamond and film was 2.2%. For targets larger than 15 mm, the mean difference relative to film was −0.8% and 0.1% for chamber and diamond, respectively, whereas for targets smaller than 15 mm, the difference was −5.3% and 4.2% for chamber and diamond, respectively. The difference was significant (p=0.005) using the two-sample Kolmogorov-Smirnov test. Conclusion: The detectors agree for target sizes larger than 15 mm. Relative to film, for smaller targets the diamond detector over-responds, whereas the ionization chamber under-responds. Further work is needed to characterize detector response in modulated SRS fields.
Process R&D for Particle Size Control of Molybdenum Oxide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Sujat; Dzwiniel, Trevor; Pupek, Krzysztof
The primary goal of this study was to produce MoO3 powder with a particle size range of 50 to 200 μm for use in targets for production of the medical isotope 99Mo. Molybdenum metal powder is commercially produced by thermal reduction of oxides in a hydrogen atmosphere. The most common source material is MoO3, which is derived by the thermal decomposition of ammonium heptamolybdate (AHM). However, the particle size of the currently produced MoO3 is too small, resulting in Mo powder that is too fine to properly sinter and press into the desired target. In this study, effects of heating rate, heating temperature, gas type, gas flow rate, and isothermal heating were investigated for the decomposition of AHM. The main conclusions were as follows: lower heating rate (2-10 °C/min) minimizes breakdown of aggregates, recrystallized samples with millimeter-sized aggregates are resistant to various heat treatments, extended isothermal heating at >600 °C leads to significant sintering, and inert gas and high gas flow rate (up to 2000 ml/min) did not significantly affect particle size distribution or composition. In addition, attempts to recover AHM from an aqueous solution by several methods (spray drying, precipitation, and low temperature crystallization) failed to achieve the desired particle size range of 50 to 200 μm. Further studies are planned.
He, Man; Huang, Lijin; Zhao, Bingshan; Chen, Beibei; Hu, Bin
2017-06-22
For the determination of trace elements and their species in various real samples by inductively coupled plasma mass spectrometry (ICP-MS), solid phase extraction (SPE) is a commonly used sample pretreatment technique to remove complex matrix, pre-concentrate target analytes and make the samples suitable for subsequent sample introduction and measurements. The sensitivity, selectivity/anti-interference ability, sample throughput and application potential of the SPE-ICP-MS methodology are greatly dependent on SPE adsorbents. This article presents a general overview of the use of advanced functional materials (AFMs) in SPE for ICP-MS determination of trace elements and their species over the past decade. Herein, AFMs refer to materials featuring high adsorption capacity, good selectivity, fast adsorption/desorption dynamics and the ability to satisfy special requirements in real sample analysis, including nanometer-sized materials, porous materials, ion imprinting polymers, restricted access materials and magnetic materials. Carbon/silica/metal/metal oxide nanometer-sized adsorbents with high surface area and plenty of adsorption sites exhibit high adsorption capacity, and porous adsorbents provide more adsorption sites and faster adsorption dynamics. The selectivity of the materials for target elements/species can be improved by using physical/chemical modification, ion imprinting and restricted access techniques. Magnetic adsorbents in conventional batch operation offer unique magnetic response and high surface area-to-volume ratio, which provide very easy phase separation and greater extraction capacity and efficiency over conventional adsorbents, and chip-based magnetic SPE provides a versatile platform for special requirements (e.g. cell analysis). The performance of these adsorbents for the determination of trace elements and their species in different matrices by ICP-MS is discussed in detail, along with perspectives and possible challenges in future development. Copyright © 2017 Elsevier B.V. All rights reserved.
Selecting the optimum plot size for a California design-based stream and wetland mapping program.
Lackey, Leila G; Stein, Eric D
2014-04-01
Accurate estimates of the extent and distribution of wetlands and streams are the foundation of wetland monitoring, management, restoration, and regulatory programs. Traditionally, these estimates have relied on comprehensive mapping. However, this approach is prohibitively resource-intensive over large areas, making it both impractical and statistically unreliable. Probabilistic (design-based) approaches to evaluating status and trends provide a more cost-effective alternative because, compared with comprehensive mapping, overall extent is inferred from mapping a statistically representative, randomly selected subset of the target area. In this type of design, the size of sample plots has a significant impact on program costs and on statistical precision and accuracy; however, no consensus exists on the appropriate plot size for remote monitoring of stream and wetland extent. This study utilized simulated sampling to assess the performance of four plot sizes (1, 4, 9, and 16 km²) for three geographic regions of California. Simulation results showed smaller plot sizes (1 and 4 km²) were most efficient for achieving desired levels of statistical accuracy and precision. However, larger plot sizes were more likely to contain rare and spatially limited wetland subtypes. Balancing these considerations led to selection of 4 km² for the California status and trends program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu Huijun; Gordon, J. James; Siebers, Jeffrey V.
2011-02-15
Purpose: A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric D_v exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Methods: Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals ω (e.g., ω = 1°, 2°, 5°, 10°, 20°). Isotropic samples were uniformly distributed on the unit sphere, resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs and accordingly characterized by the effective angular increment ω_eff. In each direction, the DM was calculated by moving the structure in radial steps of size δ (= 0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy ΔQ was quantified as a function of the sampling parameters ω or ω_eff and δ. Results: The accuracy of coverage estimates depends on the angular and radial DMD sampling parameters ω or ω_eff and δ, as well as the employed sampling technique. Target |ΔQ| < 1% and OAR |ΔQ| < 3% can be achieved with sampling parameters ω or ω_eff = 20°, δ = 1 mm. Better accuracy (target |ΔQ| < 0.5% and OAR |ΔQ| < ~1%) can be achieved with ω or ω_eff = 10°, δ = 0.5 mm. As the number of sampling points decreases, the isotropic sampling method maintains better accuracy than fixed angular sampling. Conclusions: Coverage estimates for post-planning evaluation are essential since coverage values of targets and OARs often differ from the values implied by the static margin-based plans. Finer sampling of the DMD enables more accurate assessment of the effect of geometric uncertainties on coverage estimates prior to treatment. DMD sampling with ω or ω_eff = 10° and δ = 0.5 mm should be adequate for planning purposes.
Xu, Huijun; Gordon, J James; Siebers, Jeffrey V
2011-02-01
A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric D_v exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals ω (e.g., ω = 1°, 2°, 5°, 10°, 20°). Isotropic samples were uniformly distributed on the unit sphere, resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs and accordingly characterized by the effective angular increment ω_eff. In each direction, the DM was calculated by moving the structure in radial steps of size δ (= 0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy ΔQ was quantified as a function of the sampling parameters ω or ω_eff and δ. The accuracy of coverage estimates depends on the angular and radial DMD sampling parameters ω or ω_eff and δ, as well as the employed sampling technique. Target |ΔQ| < 1% and OAR |ΔQ| < 3% can be achieved with sampling parameters ω or ω_eff = 20°, δ = 1 mm. Better accuracy (target |ΔQ| < 0.5% and OAR |ΔQ| < ~1%) can be achieved with ω or ω_eff = 10°, δ = 0.5 mm. As the number of sampling points decreases, the isotropic sampling method maintains better accuracy than fixed angular sampling. Coverage estimates for post-planning evaluation are essential since coverage values of targets and OARs often differ from the values implied by the static margin-based plans. Finer sampling of the DMD enables more accurate assessment of the effect of geometric uncertainties on coverage estimates prior to treatment. DMD sampling with ω or ω_eff = 10° and δ = 0.5 mm should be adequate for planning purposes.
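The two sampling ingredients, isotropic directions and radial stepping, are easy to sketch; the spherical 'isodose' predicate below stands in for a planning-system dose lookup and is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def isotropic_directions(n):
    """Directions uniform on the unit sphere via normalized Gaussians."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def dosimetric_margin(inside_isodose, start, direction, step=0.5, max_mm=50.0):
    """Step radially in increments of `step` (mm) until the specified
    isodose surface is crossed."""
    r = 0.0
    while r < max_mm and inside_isodose(start + r * direction):
        r += step
    return r

# Toy case: a spherical isodose of radius 20 mm centered at the origin.
dms = [dosimetric_margin(lambda p: p @ p < 20.0**2, np.zeros(3), u)
       for u in isotropic_directions(200)]
print(round(float(np.mean(dms)), 2), "mm")
```

Halving the angular increment roughly quadruples the number of directions on the sphere, which is why the study quantifies how much accuracy each extra increment of sampling actually buys.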
Effects of Pre-Existing Target Structure on the Formation of Large Craters
NASA Technical Reports Server (NTRS)
Barnouin-Jha, O. S.; Cintala, M. J.; Crawford, D. A.
2003-01-01
The shapes of large-scale craters and the mechanics responsible for melt generation are influenced by broad and small-scale structures present in a target prior to impact. For example, well-developed systems of fractures often create craters that appear square in outline, good examples being Meteor Crater, AZ and the square craters of 433 Eros. Pre-broken target material also affects melt generation. Kieffer has shown how the shock wave generated in Coconino sandstone at Meteor crater created reverberations which, in combination with the natural target heterogeneity present, created peaks and troughs in pressure and compressed density as individual grains collided to produce a range of shock mineralogies and melts within neighboring samples. In this study, we further explore how pre-existing target structure influences various aspects of the cratering process. We combine experimental and numerical techniques to explore the connection between the scales of the impact generated shock wave and the pre-existing target structure. We focus on the propagation of shock waves in coarse, granular media, emphasizing its consequences on excavation, crater growth, ejecta production, cratering efficiency, melt generation, and crater shape. As a baseline, we present a first series of results for idealized targets where the particles are all identical in size and possess the same shock impedance. We will also present a few results, whereby we increase the complexities of the target properties by varying the grain size, strength, impedance and frictional properties. In addition, we investigate the origin and implications of reverberations that are created by the presence of physical and chemical heterogeneity in a target.
Methane Leaks from Natural Gas Systems Follow Extreme Distributions.
Brandt, Adam R; Heath, Garvin A; Cooley, Daniel
2016-11-15
Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH4) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ~15,000 measurements from 18 prior studies, we show that all available natural gas leakage data sets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used log-normal model distributions, we show that log-normal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH4 emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of data sets to increase sample size is not recommended due to apparent deviation between sampled populations. Understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.
Methane Leaks from Natural Gas Systems Follow Extreme Distributions
Brandt, Adam R.; Heath, Garvin A.; Cooley, Daniel
2016-10-14
Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH4) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ~15,000 measurements from 18 prior studies, we show that all available natural gas leakage datasets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used lognormal model distributions, we show that lognormal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH4 emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of datasets to increase sample size is not recommended due to apparent deviation between sampled populations. Finally, understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.
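The headline tail statistic is straightforward to compute and contrast against a lognormal; the distribution parameters below are arbitrary and only demonstrate the qualitative gap, not the fits from the 18 studies:

```python
import numpy as np

def top_share(leaks, frac=0.05):
    """Fraction of total leakage volume contributed by the largest
    `frac` of leaks."""
    x = np.sort(np.asarray(leaks))[::-1]
    k = max(1, int(round(frac * len(x))))
    return x[:k].sum() / x.sum()

rng = np.random.default_rng(3)
lognormal = rng.lognormal(mean=0.0, sigma=1.5, size=15000)
heavy_tail = rng.pareto(a=1.3, size=15000) + 1.0   # heavier-than-lognormal
print(round(top_share(lognormal), 2), round(top_share(heavy_tail), 2))
```

Because a handful of extreme leaks dominates the total, sample-mean emission estimates converge slowly, which is the paper's argument for larger sample sizes and wider stated uncertainty.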
THE OCCURRENCE RATE OF SMALL PLANETS AROUND SMALL STARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dressing, Courtney D.; Charbonneau, David, E-mail: cdressing@cfa.harvard.edu
We use the optical and near-infrared photometry from the Kepler Input Catalog to provide improved estimates of the stellar characteristics of the smallest stars in the Kepler target list. We find 3897 dwarfs with temperatures below 4000 K, including 64 planet candidate host stars orbited by 95 transiting planet candidates. We refit the transit events in the Kepler light curves for these planet candidates and combine the revised planet/star radius ratios with our improved stellar radii to revise the radii of the planet candidates orbiting the cool target stars. We then compare the number of observed planet candidates to the number of stars around which such planets could have been detected in order to estimate the planet occurrence rate around cool stars. We find that the occurrence rate of 0.5-4 R⊕ planets with orbital periods shorter than 50 days is 0.90^{+0.04}_{-0.03} planets per star. The occurrence rate of Earth-size (0.5-1.4 R⊕) planets is constant across the temperature range of our sample at 0.51^{+0.06}_{-0.05} Earth-size planets per star, but the occurrence of 1.4-4 R⊕ planets decreases significantly at cooler temperatures. Our sample includes two Earth-size planet candidates in the habitable zone, allowing us to estimate that the mean number of Earth-size planets in the habitable zone is 0.15^{+0.13}_{-0.06} planets per cool star. Our 95% confidence lower limit on the occurrence rate of Earth-size planets in the habitable zones of cool stars is 0.04 planets per star. With 95% confidence, the nearest transiting Earth-size planet in the habitable zone of a cool star is within 21 pc. Moreover, the nearest non-transiting planet in the habitable zone is within 5 pc with 95% confidence.
40 CFR 92.114 - Exhaust gas and particulate sampling and analytical system.
Code of Federal Regulations, 2011 CFR
2011-07-01
... downstream of the analyzer. The gauge tap must be within 2 inches of the analyzer exit port. Gauge... must be used. The gauge tap must be within 2 inches of the analyzer entrance port. (vi) Calibration or.... Equivalent loadings (0.5 mg/1075 mm2 stain area) shall be used as target loadings when other filter sizes are...
40 CFR 92.114 - Exhaust gas and particulate sampling and analytical system.
Code of Federal Regulations, 2010 CFR
2010-07-01
... downstream of the analyzer. The gauge tap must be within 2 inches of the analyzer exit port. Gauge... must be used. The gauge tap must be within 2 inches of the analyzer entrance port. (vi) Calibration or.... Equivalent loadings (0.5 mg/1075 mm2 stain area) shall be used as target loadings when other filter sizes are...
The Missing Link: Workplace Education in Small Business.
ERIC Educational Resources Information Center
BCEL Newsletter for the Business & Literacy Communities, 1992
1992-01-01
A study sought to determine how and why small businesses invest or do not invest in basic skills instruction for their workers. Data were gathered through a national mail and telephone survey of a random sampling of 11,000 small (50 or fewer employees) and medium-sized (51-400 employees) firms, a targeted mail survey of 4,317 manufacturers, a…
Schouten, Jan P.; McElgunn, Cathal J.; Waaijer, Raymond; Zwijnenburg, Danny; Diepvens, Filip; Pals, Gerard
2002-01-01
We describe a new method for relative quantification of 40 different DNA sequences in an easy to perform reaction requiring only 20 ng of human DNA. Applications shown of this multiplex ligation-dependent probe amplification (MLPA) technique include the detection of exon deletions and duplications in the human BRCA1, MSH2 and MLH1 genes, detection of trisomies such as Down’s syndrome, characterisation of chromosomal aberrations in cell lines and tumour samples and SNP/mutation detection. Relative quantification of mRNAs by MLPA will be described elsewhere. In MLPA, not sample nucleic acids but probes added to the samples are amplified and quantified. Amplification of probes by PCR depends on the presence of probe target sequences in the sample. Each probe consists of two oligonucleotides, one synthetic and one M13 derived, that hybridise to adjacent sites of the target sequence. Such hybridised probe oligonucleotides are ligated, permitting subsequent amplification. All ligated probes have identical end sequences, permitting simultaneous PCR amplification using only one primer pair. Each probe gives rise to an amplification product of unique size between 130 and 480 bp. Probe target sequences are small (50–70 nt). The prerequisite of a ligation reaction provides the opportunity to discriminate single nucleotide differences. PMID:12060695
Schouten, Jan P; McElgunn, Cathal J; Waaijer, Raymond; Zwijnenburg, Danny; Diepvens, Filip; Pals, Gerard
2002-06-15
We describe a new method for relative quantification of 40 different DNA sequences in an easy to perform reaction requiring only 20 ng of human DNA. Applications shown of this multiplex ligation-dependent probe amplification (MLPA) technique include the detection of exon deletions and duplications in the human BRCA1, MSH2 and MLH1 genes, detection of trisomies such as Down's syndrome, characterisation of chromosomal aberrations in cell lines and tumour samples and SNP/mutation detection. Relative quantification of mRNAs by MLPA will be described elsewhere. In MLPA, not sample nucleic acids but probes added to the samples are amplified and quantified. Amplification of probes by PCR depends on the presence of probe target sequences in the sample. Each probe consists of two oligonucleotides, one synthetic and one M13 derived, that hybridise to adjacent sites of the target sequence. Such hybridised probe oligonucleotides are ligated, permitting subsequent amplification. All ligated probes have identical end sequences, permitting simultaneous PCR amplification using only one primer pair. Each probe gives rise to an amplification product of unique size between 130 and 480 bp. Probe target sequences are small (50-70 nt). The prerequisite of a ligation reaction provides the opportunity to discriminate single nucleotide differences.
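Relative quantification in MLPA reduces to comparing normalized peak areas between a test sample and a normal reference; this sketch uses invented areas and a deliberately simple normalization (real analyses typically normalize against control probes):

```python
import numpy as np

def relative_copy_number(sample_peaks, reference_peaks):
    """Per-probe ratio of within-run-normalized peak areas; a value near
    0.5 suggests a heterozygous deletion of that probe's target, and a
    value near 1.5 suggests a duplication."""
    s = np.asarray(sample_peaks, float)
    r = np.asarray(reference_peaks, float)
    return (s / s.sum()) / (r / r.sum())

# Invented areas for 6 probes; probe 3 looks deleted in the sample.
sample = [1000, 980, 510, 1020, 990, 1005]
reference = [1000, 1000, 1000, 1000, 1000, 1000]
print(np.round(relative_copy_number(sample, reference), 2))
```

Because every amplicon has a unique length between 130 and 480 bp, all 40 ratios can be read from a single capillary electrophoresis trace.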
Transport Loss Estimation of Fine Particulate Matter in Sampling Tube Based on Numerical Computation
NASA Astrophysics Data System (ADS)
Luo, L.; Cheng, Z.
2016-12-01
In-situ measurement of the physical and chemical properties of PM2.5 is an important approach for investigating the mechanisms of PM2.5 pollution. Minimizing PM2.5 transport loss in the sampling tube is essential for ensuring the accuracy of the measurement result. In order to estimate the integrated PM2.5 transport efficiency in sampling tubes and optimize tube designs, the effects of different tube factors (length, bore size and bend number) on PM2.5 transport were analyzed by numerical computation. The results show that the PM2.5 mass concentration transport efficiency of a vertical tube with flowrate of 20.0 L·min⁻¹, bore size of 4 mm and length of 1.0 m was 89.6%. However, the transport efficiency increases to 98.3% when the bore size is increased to 14 mm. The PM2.5 mass concentration transport efficiency of a horizontal tube with flowrate of 1.0 L·min⁻¹, bore size of 4 mm and length of 10.0 m is 86.7%, increasing to 99.2% at a length of 0.5 m. A low transport efficiency of 85.2% for PM2.5 mass concentration is estimated in a bend with flowrate of 20.0 L·min⁻¹, bore size of 4 mm and curvature angle of 90°. Keeping the ratio of flowrate (L·min⁻¹) to bore size (mm) below 1.4 maintains laminar air flow in the tube, which is beneficial for decreasing PM2.5 transport loss. For a target PM2.5 transport efficiency higher than 97%, it is advised to use vertical sampling tubes with length less than 6.0 m for flowrates of 2.5, 5.0 and 10.0 L·min⁻¹, and bore sizes larger than 12 mm for flowrates of 16.7 or 20.0 L·min⁻¹. For horizontal sampling tubes, tube length is decided by the ratio of flowrate to bore size. Meanwhile, it is suggested to decrease the number of bends in tubes with turbulent flow.
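The 1.4 rule of thumb is the laminar-turbulent transition in disguise, since the pipe Reynolds number depends on the flowrate-to-bore ratio; a quick check, assuming room-temperature air properties:

```python
import math

def reynolds(flow_L_min, bore_mm, nu=1.5e-5):
    """Pipe-flow Reynolds number Re = 4Q/(pi*D*nu) for air,
    kinematic viscosity nu ~ 1.5e-5 m^2/s."""
    Q = flow_L_min / 1000.0 / 60.0     # L/min -> m^3/s
    D = bore_mm / 1000.0               # mm -> m
    return 4.0 * Q / (math.pi * D * nu)

print(round(reynolds(1.4, 1.0)))    # ~2000: right at the transition
print(round(reynolds(20.0, 4.0)))   # ~7100: turbulent, the lossy 4 mm case
print(round(reynolds(20.0, 14.0)))  # ~2000: the recommended 14 mm bore
```

A ratio of 1.4 L·min⁻¹ per mm of bore lands almost exactly at Re ≈ 2000, the classical onset of transition, which explains the efficiency gains reported for wider tubes at high flowrates.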
Zhang, Zhifei; Song, Yang; Cui, Haochen; Wu, Jayne; Schwartz, Fernando; Qi, Hairong
2017-09-01
In contrast to the trend toward big data, small sample sizes are common in microdevice engineering, especially when the device is still at the proof-of-concept stage. The small sample size, small interclass variation, and large intraclass variation have brought new challenges to biosignal analysis. Novel representation and classification approaches need to be developed to effectively recognize targets of interest in the absence of a large training set. Moving away from traditional signal analysis in the spatiotemporal domain, we exploit biosignal representation in the topological domain, which reveals the intrinsic structure of point clouds generated from the biosignal. Additionally, we propose a Gaussian-based decision tree (GDT), which can efficiently classify biosignals even when the sample size is extremely small. This study is motivated by the application of mastitis detection using low-voltage alternating current electrokinetics (ACEK), where five categories of biosignals need to be recognized with only two samples in each class. Experimental results demonstrate the robustness of the topological features as well as the advantage of GDT over some conventional classifiers in handling small datasets. Our method reduces the voltage of ACEK to a safe level and still yields high-fidelity results with a short assay time. This paper makes two distinctive contributions to the field of biosignal analysis: performing signal processing in the topological domain and handling extremely small datasets.
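A Gaussian-based decision rule can be illustrated with a minimal per-class Gaussian likelihood classifier; this is a stand-in consistent with the two-samples-per-class regime described above, not the authors' GDT implementation:

```python
import numpy as np

class GaussianPerClass:
    """Fit one diagonal Gaussian per class and label a signal by the
    highest class log-likelihood; the variance floor keeps two-sample
    classes from degenerating."""

    def fit(self, X, y, var_floor=1e-6):
        self.stats = {c: (X[y == c].mean(0), X[y == c].var(0) + var_floor)
                      for c in np.unique(y)}
        return self

    def predict(self, X):
        def loglik(x, mu, var):
            return -0.5 * np.sum((x - mu) ** 2 / var + np.log(var))
        return [max(self.stats, key=lambda c: loglik(x, *self.stats[c]))
                for x in X]

# Toy features (e.g., topological summaries), two samples per class:
X = np.array([[0.1, 1.0], [0.2, 1.1], [0.9, 0.1], [1.0, 0.2]])
y = np.array(["healthy", "healthy", "mastitis", "mastitis"])
print(GaussianPerClass().fit(X, y).predict(np.array([[0.95, 0.15]])))
```

With so few samples per class, any covariance structure beyond a floored diagonal would be unidentifiable, which is the core difficulty the paper addresses.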
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aushev, A A; Barinov, S P; Vasin, M G
2015-06-30
We present the results of employing the alpha-spectrometry method to determine the characteristics of porous materials used in targets for laser plasma experiments. It is shown that the energy spectrum of alpha-particles, after their passage through porous samples, allows one to determine the distribution of their path length in the foam skeleton. We describe the procedure of deriving such a distribution, excluding both the distribution broadening due to the statistical nature of the alpha-particle interaction with an atomic structure (straggling) and hardware effects. The fractal analysis of micro-images is applied to the same porous surface samples that have been studied by alpha-spectrometry. The fractal dimension and size distribution of the number of the foam skeleton grains are obtained. Using the data obtained, a distribution of the total foam skeleton thickness along a chosen direction is constructed. It roughly coincides with the path length distribution of alpha-particles within a range of larger path lengths. It is concluded that the combined use of the alpha-spectrometry method and fractal analysis of images will make it possible to determine the size distribution of foam skeleton grains (or pores). The results can be used as initial data in theoretical studies on propagation of the laser and X-ray radiation in specific porous samples. (laser plasma)
NASA Astrophysics Data System (ADS)
Flynn, George J.; Durda, Daniel D.
2004-10-01
We performed impact disruption experiments on pieces from eight different anhydrous chondritic meteorites - four weathered ordinary chondrite finds from North Africa (NWA791, NWA620, NWA869 and MOR001), three almost unweathered ordinary chondrite falls (Mbale, Gao, and Saratov), and an almost unweathered carbonaceous chondrite fall (Allende). In each case the impactor was a small (1/8 or 1/4 in) aluminum sphere fired at the meteorite target at ~5 km/s, comparable to the mean collision speed in the main belt. Some of the ~5 to ~150 μm debris from each disruption was collected in aerogel capture cells, and the captured particles were analyzed by in situ synchrotron-based X-ray fluorescence. For each meteorite, many of the smallest particles (<10 μm up to 35 μm in size, depending on the meteorite) exhibit very high Ni/Fe ratios compared to the Ni/Fe ratios measured in the larger particles (>45 μm), a composition consistent with the smallest debris being dominated by matrix material while the larger debris is dominated by fragments from olivine chondrules. These results may explain why the ~10 μm interplanetary dust particles (IDPs) collected from the Earth's stratosphere are C-rich and volatile-rich compared to the presumed solar nebula composition. The ~10 μm IDPs may simply sample the matrix of an inhomogeneous parent body, structurally and mineralogically similar to the chondritic meteorites, which are inhomogeneous assemblages of compact, strong, C- and volatile-poor chondrules that are distributed in a more porous, C- and volatile-rich matrix. In addition, these results may explain why the micrometeorites, which are ~50 μm to millimeters in size, recovered from the polar ices are Ni- and S-poor compared to chondritic meteorites, since these polar micrometeorites may preferentially sample fragments from the Ni- and S-poor olivine chondrules. These results indicate that the average composition of the IDPs may be biased towards the composition of the matrix of the parent body while the average composition of the polar micrometeorites may be more heavily weighted towards the composition of the chondrules and clasts. Thus, neither the IDPs nor the polar micrometeorites may sample the bulk composition of their respective parent bodies. We determined the threshold collisional specific energy (QD*) for these chondritic meteorites to be 1419 J/kg, about twice the value for terrestrial basalt. Comparison of the mass of the largest fragment produced in the disruption of an ~100 g sample of the porous ordinary chondrite Saratov with the largest fragment produced in the disruption of an ~100 g sample of the compact ordinary chondrite MOR001, when each was struck by an impactor having approximately the same kinetic energy, confirms that it requires significantly more energy to disrupt a porous target than a non-porous target. These results may also have important implications for the design of spacecraft missions intended to sample the composition and mineralogy of the chondritic asteroids and other inhomogeneous bodies. A Stardust-like spacecraft intended to sample asteroids by collecting only the small debris from a man-made impact onto the asteroid may collect particles that over-sample the matrix of the target and do not provide a representative sample of the bulk composition. The impact collection technique to be employed by the Japanese HAYABUSA (formerly MUSES-C) spacecraft to sample the asteroid Itokawa may result in similar mineral segregation.
Capture of shrinking targets with realistic shrink patterns.
Hoffmann, Errol R; Chan, Alan H S; Dizmen, Coskun
2013-01-01
Previous research [Hoffmann, E. R. 2011. "Capture of Shrinking Targets." Ergonomics 54 (6): 519-530] reported experiments on the capture of shrinking targets in which the target decreased in size at a uniform rate. This work extended that research to targets whose shrink-size versus time pattern is that of an aircraft receding from an observer. In Experiment 1, the time to capture the target was well correlated with Fitts' index of difficulty measured at the time of target capture, a result that agrees with the 'balanced' model of Johnson and Hart [Johnson, W. W., and Hart, S. G. 1987. "Step Tracking Shrinking Targets." Proceedings of the human factors society 31st annual meeting, New York City, October 1987, 248-252]. Experiment 2 measured the probability of target capture for varying initial target sizes and target shrink time constants, the time constant being defined as the time for the target to shrink to half its initial size. The shrink time constant for a 50% probability of capture was related to initial target size; initial size did not greatly affect target capture, because the rate of target shrinking decreased rapidly with time.
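A worked sketch of the quantities above, assuming the classic form ID = log2(2A/W) and an exponential shrink pattern consistent with the half-size time-constant definition; the amplitude, width and time values are hypothetical:

```python
import math

def fitts_id(amplitude: float, width: float) -> float:
    """Fitts' index of difficulty, ID = log2(2A/W)."""
    return math.log2(2.0 * amplitude / width)

def shrinking_width(w0: float, t: float, half_life: float) -> float:
    """Width halves every `half_life` seconds - the 'shrink time constant'
    definition used in Experiment 2 (exponential form is our assumption)."""
    return w0 * 2.0 ** (-t / half_life)

# ID evaluated at the moment of capture, as in Experiment 1
A, w0, tau = 200.0, 40.0, 2.0            # mm, mm, s (hypothetical numbers)
for t_capture in (0.5, 1.0, 2.0, 4.0):
    w = shrinking_width(w0, t_capture, tau)
    print(f"t={t_capture:.1f} s  W={w:5.1f} mm  ID={fitts_id(A, w):.2f} bits")
```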
Sonnenburg, Jana; Schulz, Katja; Blome, Sandra; Staubach, Christoph
2016-10-01
Classical swine fever (CSF) is one of the most important viral diseases of domestic pigs (Sus scrofa domesticus) and wild boar (Sus scrofa). For at least 4 decades, several European Union member states were confronted with outbreaks among wild boar and, as it had been shown that infected wild boar populations can be a major cause of primary outbreaks in domestic pigs, strict control measures for both species were implemented. To guarantee early detection and to demonstrate freedom from disease, intensive surveillance is carried out based on a hunting bag sample. In this context, virologic investigations play a major role in the early detection of new introductions and in regions immunized with a conventional vaccine. The required financial resources and personnel for reliable testing are often large, and sufficient sample sizes to detect low virus prevalences are difficult to obtain. We conducted a simulation to model the possible impact of changes in sample size and sampling intervals on the probability of CSF virus detection, based on a study area of 65 German hunting grounds. A 5-yr period with 4,652 virologic investigations was considered. Results suggest that low prevalences could not be detected with a justifiable effort. The simulation of increased sample sizes per sampling interval showed only a slightly better performance but would be unrealistic in practice, especially outside the main hunting season. Further studies on other approaches such as targeted or risk-based sampling for virus detection in connection with (marker) antibody surveillance are needed.
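For intuition on why low prevalences demand prohibitively large hunting-bag samples, here is the standard detection-probability calculation (our illustration, not the authors' simulation model; the prevalence and confidence values are examples only):

```python
import math

def detection_probability(n: int, prevalence: float) -> float:
    """Probability that at least one positive appears in a simple random
    sample of size n from a large population (binomial approximation)."""
    return 1.0 - (1.0 - prevalence) ** n

def required_sample_size(prevalence: float, confidence: float = 0.95) -> int:
    """Smallest n giving the desired probability of detecting >= 1 positive."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - prevalence))

# e.g. detecting a 0.5% virus prevalence with 95% confidence
print(required_sample_size(0.005))          # -> 598 animals per interval
print(detection_probability(200, 0.005))    # -> ~0.63 with only 200 samples
```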
Active colloids as mobile microelectrodes for unified label-free selective cargo transport.
Boymelgreen, Alicia M; Balli, Tov; Miloh, Touvia; Yossifon, Gilad
2018-02-22
Utilization of active colloids to transport both biological and inorganic cargo has been widely examined in the context of applications ranging from targeted drug delivery to sample analysis. In general, carriers are customized to load one specific target via a mechanism distinct from that driving the transport. Here we unify these tasks and extend loading capabilities to include on-demand selection of multiple nano/micro-sized targets without the need for pre-labelling or surface functionalization. An externally applied electric field is singularly used to drive the active cargo carrier and transform it into a mobile floating electrode that can attract (trap) or repel specific targets from its surface by dielectrophoresis, enabling dynamic control of target selection, loading and rate of transport via the electric field parameters. In the future, dynamic selectivity could be combined with directed motion to develop building blocks for bottom-up fabrication in applications such as additive manufacturing and soft robotics.
Clinical Trials Targeting Aging and Age-Related Multimorbidity
Crimmins, Eileen M; Grossardt, Brandon R; Crandall, Jill P; Gelfond, Jonathan A L; Harris, Tamara B; Kritchevsky, Stephen B; Manson, JoAnn E; Robinson, Jennifer G; Rocca, Walter A; Temprosa, Marinella; Thomas, Fridtjof; Wallace, Robert; Barzilai, Nir
2017-01-01
Background: There is growing interest in identifying interventions that may increase health span by targeting biological processes underlying aging. The design of efficient and rigorous clinical trials to assess these interventions requires careful consideration of eligibility criteria, outcomes, sample size, and monitoring plans. Methods: Experienced geriatrics researchers and clinical trialists collaborated to provide advice on clinical trial design. Results: Outcomes based on the accumulation and incidence of age-related chronic diseases are attractive for clinical trials targeting aging. Accumulation and incidence rates of multimorbidity outcomes were developed by selecting at-risk subsets of individuals from three large cohort studies of older individuals. These provide representative benchmark data for decisions on eligibility, duration, and assessment protocols. Monitoring rules should be sensitive to targeting aging-related, rather than disease-specific, outcomes. Conclusions: Clinical trials targeting aging are feasible, but require careful design consideration and monitoring rules. PMID:28364543
Thermophysical Characteristics of OSIRIS-REx Target Asteroid (101955) Bennu
NASA Astrophysics Data System (ADS)
Yu, Liangliang; Ji, Jianghui
2016-01-01
In this work, we investigate the thermophysical properties, including thermal inertia, roughness fraction and surface grain size, of OSIRIS-REx target asteroid (101955) Bennu by using a thermophysical model with the recently updated 3D radar-derived shape model (Nolan et al., 2013) and mid-infrared observations (Müller et al., 2012; Emery et al., 2014). We find that the asteroid has an effective diameter of 510 (+6/−40) m, a geometric albedo of 0.047 (+0.0083/−0.0011), a roughness fraction of 0.04 (+0.26/−0.04), and a thermal inertia of 240 (+440/−60) J m−2 s−0.5 K−1 for our best-fit solution. The best-estimate thermal inertia suggests that fine-grained regolith may cover a large portion of Bennu's surface, where the grain size may vary from 1.3 to 31 mm. Our outcome suggests that Bennu is suitable for the OSIRIS-REx mission to return samples to Earth.
Experimental Effects on IR Reflectance Spectra: Particle Size and Morphology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiswenger, Toya N.; Myers, Tanya L.; Brauer, Carolyn S.
For geologic and extraterrestrial samples it is known that both particle size and morphology can have strong effects on a species' infrared reflectance spectra. Due to such effects, the reflectance spectra cannot be predicted from the absorption coefficients alone. This is because reflectance is both a surface and a bulk phenomenon, incorporating dispersion as well as absorption effects. The same spectral feature can even be observed as either a maximum or a minimum. The complex effects depend on particle size and preparation, as well as the relative amplitudes of the optical constants n and k, i.e. the real and imaginary components of the complex refractive index. While somewhat oversimplified, upward-going amplitudes in the reflectance spectrum usually result from surface scattering, i.e. rays that have been reflected from the surface without penetration, whereas downward-going peaks are due to either absorption or volume scattering, i.e. rays that have penetrated or refracted into the sample interior and are not reflected. While the effects are well known, we report seminal measurements of reflectance along with quantified particle size of the samples, the sizing obtained from optical microscopy measurements. The size measurements are correlated with the reflectance spectra in the 1.3-16 micron range for various bulk materials that have a combination of strong and weak absorption bands, in order to understand the effects on the spectral features as a function of the mean grain size of the sample. We report results for both sodium sulfate, Na2SO4, and ammonium sulfate, (NH4)2SO4; the optical constants have been measured for (NH4)2SO4. To take a step from the laboratory to the field, we explore our understanding of particle size effects on reflectance spectra in the field using standoff detection. This has helped identify weaknesses and strengths in detection at standoff distances of up to 160 meters from the target. The studies have shown that particle size has an enormous influence on the measured reflectance spectra of such materials; successful identification requires sufficient, representative reflectance data covering the particle sizes of interest.
Laser-driven hydrothermal process studied with excimer laser pulses
NASA Astrophysics Data System (ADS)
Mariella, Raymond; Rubenchik, Alexander; Fong, Erika; Norton, Mary; Hollingsworth, William; Clarkson, James; Johnsen, Howard; Osborn, David L.
2017-08-01
Previously, we discovered [Mariella et al., J. Appl. Phys. 114, 014904 (2013)] that modest-fluence/modest-intensity 351-nm laser pulses, with insufficient fluence/intensity to ablate rock, mineral, or concrete samples via surface vaporization, still removed the surface material from water-submerged target samples with confinement of the removed material, and then dispersed at least some of the removed material into the water as a long-lived suspension of nanoparticles. We called this new process, which appears to include the generation of larger colorless particles, "laser-driven hydrothermal processing" (LDHP) [Mariella et al., J. Appl. Phys. 114, 014904 (2013)]. We now report that we have studied this process using 248-nm and 193-nm laser light on submerged concrete, quartzite, and obsidian, and, even though light at these wavelengths is more strongly absorbed than at 351 nm, we found that the overall efficiency of LDHP, in terms of the mass of the target removed per Joule of laser-pulse energy, is lower with 248-nm and 193-nm laser pulses than with 351-nm laser pulses. Given that stronger absorption creates higher peak surface temperatures for comparable laser fluence and intensity, it was surprising to observe reduced efficiencies for material removal. We also measured the nascent particle-size distributions that LDHP creates in the submerging water and found that they do not display the long tail towards larger particle sizes that we had observed when there had been a multi-week delay between experiments and the date of measuring the size distributions. This is consistent with transient dissolution of the solid surface, followed by diffusion-limited kinetics of nucleation and growth of particles from the resulting thin layer of supersaturated solution at the sample surface.
Measuring Submicron-Sized Fractionated Particulate Matter on Aluminum Impactor Disks
Buchholz, Bruce A.; Zermeño, Paula; Hwang, Hyun-Min; Young, Thomas M.; Guilderson, Thomas P.
2011-01-01
Sub-micron sized airborne particulate matter (PM) is not collected well on regular quartz or glass fiber filter papers. We used a micro-orifice uniform deposit impactor (MOUDI) to fractionate PM into six size fractions and deposit it on specially designed high purity thin aluminum disks. The MOUDI separated PM into fractions 56–100 nm, 100–180 nm, 180–320 nm, 320–560 nm, 560–1000 nm, and 1000–1800 nm. Since the MOUDI has a low flow rate (30 L/min), it takes several days to collect sufficient carbon on 47 mm foil disks. The small carbon mass (20–200 microgram C) and large aluminum substrate (~25 mg Al) present several challenges to production of graphite targets for accelerator mass spectrometry (AMS) analysis. The Al foil consumes large amounts of oxygen as it is heated and tends to melt into quartz combustion tubes, causing gas leaks. We describe sample processing techniques to reliably produce graphitic targets for 14C-AMS analysis of PM deposited on Al impact foils. PMID:22228915
Evolution of egg target size: an analysis of selection on correlated characters.
Podolsky, R D
2001-12-01
In broadcast-spawning marine organisms, chronic sperm limitation should select for traits that improve chances of sperm-egg contact. One mechanism may involve increasing the size of the physical or chemical target for sperm. However, models of fertilization kinetics predict that increasing egg size can reduce net zygote production due to an associated decline in fecundity. An alternate method for increasing physical target size is through addition of energetically inexpensive external structures, such as the jelly coats typical of eggs in species from several phyla. In selection experiments on eggs of the echinoid Dendraster excentricus, in which sperm was used as the agent of selection, eggs with larger overall targets were favored in fertilization. Actual shifts in target size following selection matched quantitative predictions of a model that assumed fertilization was proportional to target size. Jelly volume and ovum volume, two characters that contribute to target size, were correlated both within and among females. A cross-sectional analysis of selection partitioned the independent effects of these characters on fertilization success and showed that they experience similar direct selection pressures. Coupled with data on relative organic costs of the two materials, these results suggest that, under conditions where fertilization is limited by egg target size, selection should favor investment in low-cost accessory structures and may have a relatively weak effect on the evolution of ovum size.
Suspended sediments from upstream tributaries as the source of downstream river sites
NASA Astrophysics Data System (ADS)
Haddadchi, Arman; Olley, Jon
2014-05-01
Understanding the efficiency with which sediment eroded from different sources is transported to the catchment outlet is a key knowledge gap that is critical to our ability to accurately target and prioritise management actions to reduce sediment delivery. Sediment fingerprinting has proven to be an efficient approach to determining the sources of sediment. This study examines the suspended sediment sources in Emu Creek catchment, south-eastern Queensland, Australia. In addition to collecting suspended sediments from stream sites downstream of tributary confluences and at the catchment outlet, time-integrated suspended-sediment samples from the upper tributaries were used as the sediment sources, instead of hillslope and channel-bank samples. In total, 35 time-integrated samplers were used to compute the contribution of suspended sediments from different upstream waterways to the downstream sediment sites. Three size fractions of material, fine sand (63-210 μm), silt (10-63 μm), and fine silt and clay (<10 μm), were used to assess the effect of particle size on the contribution of upstream sediments after river confluences. Samples were then analysed by ICP-MS and ICP-OES to determine 41 sediment fingerprint properties. According to the results of a Student's t-distribution mixing model, small creeks in the middle and lower parts of the catchment were the major sources across size fractions, especially in the silt (10-63 μm) samples. Gowrie Creek, which drains the southern upstream part of the catchment, was a major contributor at the catchment outlet in the finest size fraction (<10 μm). The large differences between the contributions of suspended sediments from upper tributaries in the different size fractions necessitate the selection of an appropriate size fraction for sediment tracing in the catchment, and also indicate a major effect of particle size on the movement and deposition of sediments.
A model-based approach to sample size estimation in recent onset type 1 diabetes.
Bundy, Brian N; Krischer, Jeffrey P
2016-11-01
The area under the curve of C-peptide following a 2-h mixed-meal tolerance test, from 498 individuals enrolled in five prior TrialNet studies of recent-onset type 1 diabetes, was modelled from baseline to 12 months after enrolment to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in observed-versus-expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.
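To see how a lower residual variance translates into the quoted ~50% sample-size reduction, here is the standard two-arm calculation; the effect size, SD and design values are hypothetical, not taken from the paper:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(sigma: float, delta: float,
              alpha: float = 0.05, power: float = 0.9) -> int:
    """Standard two-arm sample size for a continuous endpoint:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return ceil(2 * (za + zb) ** 2 * sigma ** 2 / delta ** 2)

# Illustrative numbers: adjusting for age at diagnosis and baseline
# C-peptide in an ANCOVA lowers the residual variance; halving the
# variance halves the required sample size.
print(n_per_arm(sigma=1.0, delta=0.3))                 # unadjusted -> 234
print(n_per_arm(sigma=1.0 * 0.5 ** 0.5, delta=0.3))    # variance halved -> 117
```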
Chu, Brian K.; Deming, Michael; Biritwum, Nana-Kwadwo; Bougma, Windtaré R.; Dorkenoo, Améyo M.; El-Setouhy, Maged; Fischer, Peter U.; Gass, Katherine; Gonzalez de Peña, Manuel; Mercado-Hernandez, Leda; Kyelem, Dominique; Lammie, Patrick J.; Flueckiger, Rebecca M.; Mwingira, Upendo J.; Noordin, Rahmah; Offei Owusu, Irene; Ottesen, Eric A.; Pavluck, Alexandre; Pilotte, Nils; Rao, Ramakrishna U.; Samarasekera, Dilhani; Schmaedick, Mark A.; Settinayake, Sunil; Simonsen, Paul E.; Supali, Taniawati; Taleo, Fasihah; Torres, Melissa; Weil, Gary J.; Won, Kimberly Y.
2013-01-01
Background: Lymphatic filariasis (LF) is targeted for global elimination through treatment of entire at-risk populations with repeated annual mass drug administration (MDA). Essential for program success is defining and confirming the appropriate endpoint for MDA when transmission is presumed to have reached a level low enough that it cannot be sustained even in the absence of drug intervention. Guidelines advanced by WHO call for a transmission assessment survey (TAS) to determine if MDA can be stopped within an LF evaluation unit (EU) after at least five effective rounds of annual treatment. To test the value and practicality of these guidelines, a multicenter operational research trial was undertaken in 11 countries covering various geographic and epidemiological settings. Methodology: The TAS was conducted twice in each EU, with TAS-1 and TAS-2 approximately 24 months apart. Lot quality assurance sampling (LQAS) formed the basis of the TAS survey design, but specific EU characteristics defined the survey site (school or community), eligible population (6-7 year olds or 1st-2nd graders), survey type (systematic or cluster-sampling), target sample size, and critical cutoff (a statistically powered threshold below which transmission is expected to be no longer sustainable). The primary diagnostic tools were the immunochromatographic (ICT) test for W. bancrofti EUs and the BmR1 test (Brugia Rapid or PanLF) for Brugia spp. EUs. Principal Findings/Conclusions: In 10 of 11 EUs, the number of TAS-1 positive cases was below the critical cutoff, indicating that MDA could be stopped. The same results were found in the follow-up TAS-2, therefore confirming the previous decision outcome. Sample sizes were highly sex- and age-representative and closely matched the target value after factoring in estimates of non-participation. The TAS was determined to be a practical and effective evaluation tool for stopping MDA, although its validity for longer-term post-MDA surveillance requires further investigation. PMID:24340120
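The critical cutoff mentioned above is, in essence, a binomial decision threshold; a minimal sketch of the idea follows. The survey size, threshold prevalence and pass-risk values are illustrative only and are not the WHO TAS design tables:

```python
from math import comb

def binom_cdf(d: int, n: int, p: float) -> float:
    """P(X <= d) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(d + 1))

def critical_cutoff(n: int, p_threshold: float,
                    max_pass_risk: float = 0.25) -> int:
    """Largest cutoff d such that an EU whose true prevalence sits at the
    elimination threshold still passes (positives <= d) with probability
    at most `max_pass_risk`."""
    d = -1
    while binom_cdf(d + 1, n, p_threshold) <= max_pass_risk:
        d += 1
    return d

# e.g. a survey of 1,000 children against a 2% antigenaemia threshold
print(critical_cutoff(1000, 0.02))   # largest d with P(X <= d) <= 0.25
```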
NASA Astrophysics Data System (ADS)
Salerno, Antonio; de la Fuente, Isabel; Hsu, Zack; Tai, Alan; Chang, Hammer; McNamara, Elliott; Cramer, Hugo; Li, Daoping
2018-03-01
In next-generation Logic devices, overlay control requirements shrink to the sub-2.5 nm level for on-product overlay. Historically, on-product overlay has been defined by the overlay capability of after-develop in-scribe targets. However, due to their design and dimensions, the after-develop metrology targets are not completely representative of the final overlay of the device. In addition, they are confined to the scribe-lane area, which limits the sampling possibilities. To address these two issues, metrology on structures matching the device structure, and which can be sampled with high density across the device, is required. Conventional after-etch CDSEM techniques on logic devices have difficulty discerning the layers of interest, can suffer potentially destructive charging effects and are limited by long measurement times [1][2][3]. Together, these limit the achievable sampling densities, making CDSEM less attractive for control applications. Optical metrology can overcome most of these limitations. Such measurement, however, does require repetitive structures. This requirement is not fulfilled by logic devices, as the features vary in pitch and CD over the exposure field. The solution is to use small targets, with a maximum pad size of 5x5 um2, which can easily be placed in the logic cell area. These targets share the process and architecture of the device features of interest, but with a modified design that replicates the device layout as closely as possible, allowing for in-device metrology for both CD and overlay. This solution enables measuring closer to the actual product feature location and, not being limited to scribe-lanes, opens the possibility of higher-density sampling schemes across the field. In summary, these targets become the facilitator of in-device metrology (IDM), enabling measurement of both in-device overlay and the CD parameters of interest, and can deliver accurate, high-throughput, dense, after-etch measurements for Logic. Overlay improvements derived from a densely sampled overlay map measured with 5x5 um2 IDM targets were investigated on a customer Logic application. In this work we present both the main design aspects of the 5x5 um2 IDM targets and the results on the improved overlay performance.
NASA Astrophysics Data System (ADS)
Denmark, Daniel J.
Conventional therapeutic techniques treat the patient by delivering a biotherapeutic to the entire body rather than the target tissue. In the case of chemotherapy, the biotherapeutic is a drug that kills healthy and diseased cells indiscriminately, which can lead to undesirable side effects. With targeted delivery, biotherapeutics can be delivered directly to the diseased tissue, significantly reducing exposure to otherwise healthy tissue. Typical composite delivery devices are minimally composed of a stimuli-responsive polymer, such as poly(N-isopropylacrylamide), allowing for triggered release when heated beyond approximately 32 °C, and magnetic nanoparticles, which enable targeting as well as provide a mechanism for stimulus upon alternating magnetic field heating. Although more traditional methods, such as emulsion polymerization, have been used to realize these composite devices, the synthesis is problematic. Poisonous surfactants that are necessary to prevent agglomeration must be removed from the finished polymer, increasing the time and cost of the process. This study seeks to further explore non-toxic, biocompatible, non-residual, photochemical methods of creating stimuli-responsive nanogels to advance the targeted biotherapeutic delivery field. Ultraviolet photopolymerization promises to be more efficient, while ensuring safety by using only biocompatible substances. The reactants selected for nanogel fabrication were N-isopropylacrylamide as monomer, methylene bisacrylamide as cross-linker, and Irgacure 2959 as ultraviolet photo-initiator. The superparamagnetic nanoparticles for encapsulation were approximately 10 nm in diameter and composed of magnetite to enable remote delivery and enhanced triggered-release properties. Early investigations into the interactions of the polymer and nanoparticles employ a pioneering experimental setup, which allows for coincident turbidimetry and alternating magnetic field heating of an aqueous solution containing both materials. Herein, a low-cost, scalable, and rapid custom ultraviolet photo-reactor with an in-situ spectroscopic monitoring system is used to observe the synthesis as the sample undergoes photopolymerization. This method also allows in-situ encapsulation of the magnetic nanoparticles, simplifying the process. Size characterization of the resulting nanogels was performed by Transmission Electron Microscopy, revealing size-tunable nanogel spheres between 50 and 800 nm obtained by varying the ratio and concentration of the reactants. Nano-Tracking Analysis indicates that the nanogels exhibit minimal agglomeration and provides a temperature-dependent particle size distribution. Optical characterization utilized Fourier Transform Infrared and Ultraviolet Spectroscopy to confirm successful polymerization. When samples of the nanogels encapsulating magnetic nanoparticles were subjected to an alternating magnetic field, a temperature increase was observed, indicating that triggered release is possible. Furthermore, a model based on linear response theory that innovatively utilizes size distribution data is presented to explain the alternating magnetic field heating results. The results presented here will advance targeted biotherapeutic delivery and have a wide range of applications in medical sciences like oncology, gene delivery, cardiology and endocrinology.
Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space
Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred
2016-01-01
Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low-quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method. For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
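A greedy maximin selection is one simple way to realize the uniform-coverage idea; the sketch below is our illustration, not the authors' implementation, and assumes the genetic space is summarized by PCA scores of the marker matrix:

```python
import numpy as np

def maximin_training_set(coords: np.ndarray, n_train: int) -> list:
    """Greedy maximin selection: repeatedly add the candidate genotype that
    is farthest from the current training set, so the set spreads uniformly
    over the genetic space (e.g. the first PCA axes of the marker matrix)."""
    chosen = [int(np.argmin(np.linalg.norm(coords - coords.mean(0), axis=1)))]
    dist = np.linalg.norm(coords - coords[chosen[0]], axis=1)
    while len(chosen) < n_train:
        nxt = int(np.argmax(dist))            # farthest from current set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(coords - coords[nxt], axis=1))
    return chosen

# usage: coords = first 10 principal components of the genotype matrix
rng = np.random.default_rng(0)
pcs = rng.normal(size=(500, 10))              # placeholder for real PCA scores
print(maximin_training_set(pcs, 50)[:10])
```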
NASA Astrophysics Data System (ADS)
Huntington, B. E.; Lirman, D.
2012-12-01
Landscape-scale attributes of patch size, spatial isolation, and topographic complexity are known to influence diversity and abundance in terrestrial and marine systems, but remain collectively untested for reef-building corals. To investigate the relationship between the coral assemblage and seascape variation in reef habitats, we took advantage of the distinct boundaries, spatial configurations, and topographic complexities among artificial reef patches to overcome the difficulties of manipulating natural reefs. Reef size (m2) was found to be the foremost predictor of coral richness in accordance with species-area relationship predictions. Larger reefs were also found to support significantly higher colony densities, enabling us to reject the null hypothesis of random placement (a sampling artifact) in favor of target area predictions that suggest greater rates of immigration on larger reefs. Unlike the pattern previously documented for reef fishes, topographic complexity was not a significant predictor of any coral assemblage response variable, despite the range of complexity values sampled. Lastly, coral colony density was best explained by both increasing reef size and decreasing reef spatial isolation, a pattern found exclusively among brooding species with shorter larval dispersal distances. We conclude that seascape attributes of reef size and spatial configuration within the seascape can influence the species richness and abundance of the coral community at relatively small spatial scales (<1 km). Specifically, we demonstrate how patterns in the coral communities that have naturally established on these manipulated reefs agree with the target area and island biogeography mechanisms to drive species-area relationships in reef-building corals. Based on the patterns documented in artificial reefs, habitat degradation that results in smaller, more isolated natural reefs may compromise coral diversity.
NASA Astrophysics Data System (ADS)
Ploykrachang, K.; Hasegawa, J.; Kondo, K.; Fukuda, H.; Oguri, Y.
2014-07-01
We have developed a micro-XRF system based on a proton-induced quasimonochromatic X-ray (QMXR) microbeam for in vivo measurement of biological samples. A 2.5-MeV proton beam impinged normally on a Cu foil target that was slightly thicker than the proton range. The emitted QMXR behind the Cu target was focused with a polycapillary X-ray half lens. For application to analysis of wet or aquatic samples, we prepared a QMXR beam with an incident angle of 45° with respect to the horizontal plane by using a dipole magnet in order to bend the primary proton beam downward by 45°. The focal spot size of the QMXR microbeam on a horizontal sample surface was evaluated to be 250 × 350 μm by a wire scanning method. A microscope camera with a long working distance was installed perpendicular to the sample surface to identify the analyzed position on the sample. The fluorescent radiation from the sample was collected by a Si-PIN photodiode X-ray detector. Using the setup above, we were able to successfully measure the accumulation and distribution of Co in the leaves of a free-floating aquatic plant on a dilute Co solution surface.
Maisano Delser, Pierpaolo; Corrigan, Shannon; Hale, Matthew; Li, Chenhong; Veuille, Michel; Planes, Serge; Naylor, Gavin; Mona, Stefano
2016-01-01
Population genetics studies on non-model organisms typically involve sampling few markers from multiple individuals. Next-generation sequencing approaches open up the possibility of sampling many more markers from fewer individuals to address the same questions. Here, we applied a target gene capture method to deep sequence ~1000 independent autosomal regions of a non-model organism, the blacktip reef shark (Carcharhinus melanopterus). We devised a sampling scheme based on the predictions of theoretical studies of metapopulations to show that sampling few individuals, but many loci, can be extremely informative to reconstruct the evolutionary history of species. We collected data from a single deme (SID) from Northern Australia and from a scattered sampling representing various locations throughout the Indian Ocean (SCD). We explored the genealogical signature of population dynamics detected from both sampling schemes using an ABC algorithm. We then contrasted these results with those obtained by fitting the data to a non-equilibrium finite island model. Both approaches supported an Nm value ~40, consistent with philopatry in this species. Finally, we demonstrate through simulation that metapopulations exhibit greater resilience to recent changes in effective size compared to unstructured populations. We propose an empirical approach to detect recent bottlenecks based on our sampling scheme. PMID:27651217
Maisano Delser, Pierpaolo; Corrigan, Shannon; Hale, Matthew; Li, Chenhong; Veuille, Michel; Planes, Serge; Naylor, Gavin; Mona, Stefano
2016-09-21
Population genetics studies on non-model organisms typically involve sampling few markers from multiple individuals. Next-generation sequencing approaches open up the possibility of sampling many more markers from fewer individuals to address the same questions. Here, we applied a target gene capture method to deep sequence ~1000 independent autosomal regions of a non-model organism, the blacktip reef shark (Carcharhinus melanopterus). We devised a sampling scheme based on the predictions of theoretical studies of metapopulations to show that sampling few individuals, but many loci, can be extremely informative to reconstruct the evolutionary history of species. We collected data from a single deme (SID) from Northern Australia and from a scattered sampling representing various locations throughout the Indian Ocean (SCD). We explored the genealogical signature of population dynamics detected from both sampling schemes using an ABC algorithm. We then contrasted these results with those obtained by fitting the data to a non-equilibrium finite island model. Both approaches supported an Nm value ~40, consistent with philopatry in this species. Finally, we demonstrate through simulation that metapopulations exhibit greater resilience to recent changes in effective size compared to unstructured populations. We propose an empirical approach to detect recent bottlenecks based on our sampling scheme.
Cosmogenic nuclides in football-sized rocks.
NASA Technical Reports Server (NTRS)
Wahlen, M.; Honda, M.; Imamura, M.; Fruchter, J. S.; Finkel, R. C.; Kohl, C. P.; Arnold, J. R.; Reedy, R. C.
1972-01-01
The activity of long- and short-lived isotopes in a series of samples from a vertical column through the center of rock 14321 was measured. Rock 14321 is a 9 kg fragmental rock whose orientation was photographically documented on the lunar surface. Also investigated was a sample from the lower portion of rock 14310, where, in order to study target effects, two different density fractions (mineral separates) were analyzed. A few nuclides in a sample from the comprehensive fines 14259 were measured. This material has been collected largely from the top centimeter of the lunar soil. The study of the deep samples of 14321 and 14310 provided values for the activity of isotopes at points where only effects produced by galactic cosmic rays are significant.
Preparation of highly multiplexed small RNA sequencing libraries.
Persson, Helena; Søkilde, Rolf; Pirona, Anna Chiara; Rovira, Carlos
2017-08-01
MicroRNAs (miRNAs) are ~22-nucleotide-long small non-coding RNAs that regulate the expression of protein-coding genes by base pairing to partially complementary target sites, preferentially located in the 3′ untranslated region (UTR) of target mRNAs. The expression and function of miRNAs have been extensively studied in human disease, as well as the possibility of using these molecules as biomarkers for prognostication and treatment guidance. To identify and validate miRNAs as biomarkers, their expression must be screened in large collections of patient samples. Here, we develop a scalable protocol for the rapid and economical preparation of a large number of small RNA sequencing libraries using dual indexing for multiplexing. Combined with the use of off-the-shelf reagents, more samples can be sequenced simultaneously on large-scale sequencing platforms at a considerably lower cost per sample. Sample preparation is simplified by pooling libraries prior to gel purification, which allows for the selection of a narrow size range while minimizing sample variation.
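Dual indexing means a read is assigned to a sample only when both of its index reads match an expected pair, which suppresses misassignment from index hopping. A minimal demultiplexing sketch (the index sequences, sample names and mismatch tolerance are made up for illustration):

```python
from typing import Optional

# Expected (i7, i5) index pairs per sample; sequences are hypothetical.
SAMPLE_SHEET = {
    ("ATCACG", "TAGATC"): "sample_01",
    ("CGATGT", "CTCTCT"): "sample_02",
}

def assign_sample(i7: str, i5: str, max_mismatch: int = 1) -> Optional[str]:
    """Assign a read to a sample only if BOTH index reads match a pair."""
    def dist(a: str, b: str) -> int:
        return sum(x != y for x, y in zip(a, b))
    for (e7, e5), name in SAMPLE_SHEET.items():
        if dist(i7, e7) <= max_mismatch and dist(i5, e5) <= max_mismatch:
            return name
    return None  # undetermined read

print(assign_sample("ATCACG", "TAGATC"))  # -> sample_01
print(assign_sample("ATCACG", "CTCTCT"))  # -> None (hopped index pair)
```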
Flow field-flow fractionation for the analysis of nanoparticles used in drug delivery.
Zattoni, Andrea; Roda, Barbara; Borghi, Francesco; Marassi, Valentina; Reschiglian, Pierluigi
2014-01-01
Structured nanoparticles (NPs) with controlled size distribution and novel physicochemical features present fundamental advantages as drug delivery systems with respect to bulk drugs. NPs can transport and release drugs to target sites with high efficiency and limited side effects. Regulatory institutions such as the US Food and Drug Administration (FDA) and the European Commission have pointed out that major limitations to the real application of current nanotechnology lie in the lack of homogeneous, pure and well-characterized NPs, in part because of the lack of well-assessed, robust routine methods for their quality control and characterization. Many properties of NPs are size-dependent, so the particle size distribution (PSD) plays a fundamental role in determining NP properties. At present, scanning and transmission electron microscopy (SEM, TEM) are among the techniques most used to size-characterize NPs. Size-exclusion chromatography (SEC) is also applied to the size separation of complex NP samples. SEC selectivity is, however, quite limited for very high molar mass analytes such as NPs, and interactions with the stationary phase can alter NP morphology. Flow field-flow fractionation (F4) is increasingly used as a mature separation method to size-sort and characterize NPs under native conditions. Moreover, hyphenation with light scattering (LS) methods can enhance the accuracy of size analysis of complex samples. In this paper, applications of F4-LS to the size analysis of NPs used as drug delivery systems, and to the study of their stability and drug-release behaviour, are reviewed. Copyright © 2013 Elsevier B.V. All rights reserved.
Ensminger, Michael P; Vasquez, Martice; Tsai, Hsing-Ju; Mohammed, Sarah; Van Scoy, A; Goodell, Korena; Cho, Gail; Goh, Kean S
2017-10-01
Monitoring of surface waters for organic contaminants is costly. Grab water sampling often results in non-detects for organic contaminants due to missing a pulse event or analytical instrumentation limitations with a small sample size. Continuous Low-Level Aquatic Monitoring (CLAM) samplers (C.I.Agent® Solutions) continually extract and concentrate organic contaminants in surface water onto a solid phase extraction disk. Utilizing CLAM samplers, we developed a broad-spectrum analytical screen for monitoring organic contaminants in urban runoff. An intermediate-polarity solid phase, hydrophobic/lipophilic balance (HLB), was chosen as the sorbent for the CLAM to target a broad range of compounds. Eighteen urban-use pesticides and pesticide degradates were targeted for analysis by LC/MS/MS, with recoveries between 59 and 135% in laboratory studies. In field studies, CLAM samplers were deployed at discrete time points from February 2015 to March 2016. Half of the targeted chemicals were detected, with reporting limits up to 90 times lower than routine 1-L grab samples and good precision between field replicates. In a final deployment, CLAM samplers were compared to 1-L water samples. In this side-by-side comparison, imidacloprid, fipronil, and three fipronil degradates were detected by the CLAM sampler but only imidacloprid and fipronil sulfone were detected in the water samples. However, concentrations of fipronil sulfone and imidacloprid were significantly lower with the CLAM, and a transient spike of diuron was not detected. Although the CLAM sampler has limitations, it can be a powerful tool for development of more focused and informed monitoring efforts based on pre-identified targets in the field. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Moran, J.; Kelly, J.; Sams, R.; Newburn, M.; Kreuzer, H.; Alexander, M.
2011-12-01
The quick incorporation of IR spectroscopy based isotope measurements into cutting-edge research on biogeochemical cycling attests to the advantages of spectroscopy over mass spectrometry for making some 13C measurements. The simple principles of optical spectroscopy allow field portability and provide a more robust general platform for isotope measurements. We present results with a new capillary absorption spectrometer (CAS) with the capability of reducing the sample size required for high-precision isotopic measurements to the picomole level, and potentially the sub-picomole level. This work was motivated by the minute sample-size requirements for laser-ablation isotopic studies of carbon cycling in microbial communities, but has potential to be a valuable tool in other areas of biological and geological research. The CAS instrument utilizes a capillary waveguide as a sample chamber for interrogating CO2 via near-IR laser absorption spectroscopy. The capillary's small volume (~0.5 mL), combined with propagation and interaction of the laser mode with the entire sample, reduces sample-size requirements to a fraction of those accessible with commercially available IR absorption systems, including multi-pass and cavity ring-down systems. Using a continuous quantum cascade laser system to probe nearly adjacent rovibrational transitions of different isotopologues of CO2 near 2307 cm-1 permits sample measurement at low analyte pressures (as low as 2 Torr) for further sensitivity improvement. A novel method to reduce cw-fringing noise in the hollow waveguide is presented, which allows weak absorbance features to be studied at the few-ppm level after averaging 1,000 scans in 10 seconds. Detection limits down to 20 picomoles have been observed, a concentration of approximately 400 ppm at 2 Torr in the waveguide, with precision and accuracy at or better than 1%. Improvements in detection and signal-averaging electronics and in laser power and mode quality are anticipated to reduce the required sample size to 100-200 femtomoles of carbon. We report the application of the CAS system to a Laser Ablation-Catalytic-Combustion (LA-CC) micro-sampler system for selectively harvesting detailed sections of a solid surface for 13C analysis. This technique results in a three-order-of-magnitude sensitivity improvement for our isotope measurement system compared to typical IRMS, providing new opportunities for making detailed investigations into wide ranges of microbial, physical, and chemical systems. The CAS is interfaced directly to the LA-CC system, currently operating at a 50 μm spatial resolution. We demonstrate that particulates produced by a Nd:YAG laser (λ = 266 nm) are isotopically homogeneous with the parent material as measured by both IRMS and the CAS system. An improved laser-ablation system operating at 193 nm with a spatial resolution of 2 microns or better is under development, which will demonstrate the utility of the CAS system for sample sizes too low for IRMS. The improved sensitivities and optimized spatial targeting of such a system could interrogate targets as detailed as small cell clusters or intergrain organic deposits and could enhance the ability to track biogeochemical carbon cycling.
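A quick ideal-gas check of the quoted detection limit, assuming room temperature and the ~0.5 mL capillary volume given above:

```python
# Back-of-envelope check: 20 pmol of CO2 in a ~0.5 mL capillary at
# 2 Torr total pressure corresponds to roughly 400 ppm, as quoted.
R, T = 8.314, 296.0            # J/(mol K); assumed room temperature
P = 2.0 * 133.322              # 2 Torr in Pa
V = 0.5e-6                     # 0.5 mL in m^3
n_total = P * V / (R * T)      # total moles of gas in the capillary
mole_fraction = 20e-12 / n_total
print(f"total gas: {n_total*1e9:.1f} nmol, "
      f"CO2 fraction: {mole_fraction*1e6:.0f} ppm")   # ~54 nmol, ~370 ppm
```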
Uddin, Rokon; Burger, Robert; Donolato, Marco; Fock, Jeppe; Creagh, Michael; Hansen, Mikkel Fougt; Boisen, Anja
2016-11-15
We present a biosensing platform for the detection of proteins based on agglutination of aptamer-coated magnetic nano- or microbeads. The assay, from sample to answer, is integrated on an automated, low-cost microfluidic disc platform. This ensures fast and reliable results due to a minimum of manual steps involved. The detection of the target protein was achieved in two ways: (1) optomagnetic readout using magnetic nanobeads (MNBs); (2) optical imaging using magnetic microbeads (MMBs). The optomagnetic readout of agglutination is based on optical measurement of the dynamics of MNB aggregates, whereas the imaging method is based on direct visualization and quantification of the average size of MMB aggregates. By enhancing magnetic particle agglutination via application of strong magnetic field pulses, we obtained identical limits of detection of 25 pM with the same sample-to-answer time (15 min 30 s) using the two differently sized beads for the two detection methods. In both cases a sample volume of only 10 µl is required. The demonstrated automation, low sample-to-answer time and portability of both detection instruments, as well as integration of the assay on a low-cost disc, are important steps for the implementation of these as portable tools in an out-of-lab setting. Copyright © 2016 Elsevier B.V. All rights reserved.
[Airborne Fungal Aerosol Concentration and Distribution Characteristics in Air- Conditioned Wards].
Zhang, Hua-ling; Feng, He-hua; Fang, Zi-liang; Wang, Ben-dong; Li, Dan
2015-04-01
The effects of airborne fungi on human health in the hospital environment are related not only to their genera and concentrations, but also to their particle sizes and distribution characteristics. Moreover, the mechanisms by which aerosols of different particle sizes affect human health differ. Fungal samples were obtained in medicine wards in Chongqing using a six-stage sampler. The airborne fungal concentrations, genera and size distributions in all the sampled wards were investigated and identified in detail. Results showed that airborne fungal concentrations were not correlated with the diseases treated or with personnel density, but were related to season, temperature, and relative humidity. The size distribution followed roughly the same pattern in the tested wards in winter and summer. The size distributions were not related to diseases or seasons: the percentage of airborne fungal concentration increased gradually from stage I to stage III and then decreased dramatically from stage V to stage VI; in general, the sizes of airborne fungi followed a normal distribution. There was no marked difference in the median diameter of airborne fungi, which was less than 3.19 μm in these wards. Similar dominant genera were found in all wards: Aspergillus spp., Penicillium spp. and Alternaria spp. Therefore, attention should be paid to improving the filtration efficiency for particle sizes of 1.1-4.7 μm in the air-conditioning systems of wards. Appropriate antibacterial methods and equipment should also be targeted for daily hygiene and air-conditioning system operation management.
panelcn.MOPS: Copy-number detection in targeted NGS panel data for clinical diagnostics.
Povysil, Gundula; Tzika, Antigoni; Vogt, Julia; Haunschmid, Verena; Messiaen, Ludwine; Zschocke, Johannes; Klambauer, Günter; Hochreiter, Sepp; Wimmer, Katharina
2017-07-01
Targeted next-generation-sequencing (NGS) panels have largely replaced Sanger sequencing in clinical diagnostics. They allow for the detection of copy-number variations (CNVs) in addition to single-nucleotide variants and small insertions/deletions. However, existing computational CNV detection methods have shortcomings regarding accuracy, quality control (QC), incidental findings, and user-friendliness. We developed panelcn.MOPS, a novel pipeline for detecting CNVs in targeted NGS panel data. Using data from 180 samples, we compared panelcn.MOPS with five state-of-the-art methods. With panelcn.MOPS leading the field, most methods achieved comparably high accuracy. panelcn.MOPS reliably detected CNVs ranging in size from part of a region of interest (ROI), to whole genes, which may comprise all ROIs investigated in a given sample. The latter is enabled by analyzing reads from all ROIs of the panel, but presenting results exclusively for user-selected genes, thus avoiding incidental findings. Additionally, panelcn.MOPS offers QC criteria not only for samples, but also for individual ROIs within a sample, which increases the confidence in called CNVs. panelcn.MOPS is freely available both as R package and standalone software with graphical user interface that is easy to use for clinical geneticists without any programming experience. panelcn.MOPS combines high sensitivity and specificity with user-friendliness rendering it highly suitable for routine clinical diagnostics. © 2017 The Authors. Human Mutation published by Wiley Periodicals, Inc.
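As background for readers new to panel CNV calling, the core signal such tools exploit is the per-ROI read depth relative to control samples. The toy sketch below illustrates only that principle; it is not the panelcn.MOPS algorithm, whose actual statistical model (a mixture of Poissons, following cn.MOPS) is more involved, and the count values and thresholds are made up:

```python
import numpy as np

def cnv_calls(sample_counts, control_counts, del_cut=0.6, dup_cut=1.4):
    """Toy read-depth CNV caller: normalize per-ROI read counts by library
    size, compare each ROI against the median of matched controls, and flag
    ratios outside heterozygous-loss/gain bounds."""
    s = np.asarray(sample_counts, float)
    c = np.asarray(control_counts, float)   # shape: (n_controls, n_rois)
    s /= s.sum()                            # library-size normalization
    c /= c.sum(axis=1, keepdims=True)
    ratio = s / np.median(c, axis=0)        # sample vs. control read depth
    return ["deletion" if r < del_cut else "duplication" if r > dup_cut
            else "normal" for r in ratio]

sample = [120, 480, 250, 260]               # ROI 1 looks deleted
controls = [[250, 500, 260, 250],
            [240, 510, 250, 255],
            [255, 490, 245, 260]]
print(cnv_calls(sample, controls))          # -> ['deletion', 'normal', ...]
```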
NASA Astrophysics Data System (ADS)
Dunlop, Katherine M.; Jarvis, Toby; Benoit-Bird, Kelly J.; Waluk, Chad M.; Caress, David W.; Thomas, Hans; Smith, Kenneth L.
2018-04-01
Benthopelagic animals are an important component of the deep-sea ecosystem, yet are notoriously difficult to study. Multibeam echosounders (MBES) deployed on autonomous underwater vehicles (AUVs) represent a promising technology for monitoring this elusive fauna at relatively high spatial and temporal resolution. However, application of this remote-sensing technology to the study of small (relative to the sampling resolution), dispersed and mobile animals at depth does not come without significant challenges with respect to data collection, data processing and vessel avoidance. As a proof of concept, we used data from a downward-looking RESON SeaBat 7125 MBES mounted on a Dorado-class AUV to detect and characterise the location and movement of backscattering targets (which were likely to have been individual fish or squid) within 50 m of the seafloor at 800 m depth in Monterey Bay, California. The targets were detected and tracked, enabling their numerical density and movement to be characterised. The results revealed a consistent movement of targets downwards away from the AUV that we interpreted as an avoidance response. The large volume and complexity of the data presented a computational challenge, while reverberation and noise, spatial confounding and a marginal sampling resolution relative to the size of the targets caused difficulties for reliable and comprehensive target detection and tracking. Nevertheless, the results demonstrate that an AUV-mounted MBES has the potential to provide unique and detailed information on the in situ abundance, distribution, size and behaviour of both individual and aggregated deep-sea benthopelagic animals. We provide detailed data-processing information for those interested in working with MBES water-column data, and a critical appraisal of the data in the context of aquatic ecosystem research. We consider future directions for deep-sea water-column echosounding, and reinforce the importance of measures to mitigate vessel avoidance in studies of aquatic ecosystems.
A coronagraphic search for brown dwarfs around nearby stars
NASA Technical Reports Server (NTRS)
Nakajima, T.; Durrance, S. T.; Golimowski, D. A.; Kulkarni, S. R.
1994-01-01
Brown dwarf companions have been searched for around stars within 10 pc of the Sun using the Johns Hopkins University Adaptive Optics Coronagraph (AOC), a stellar coronagraph with an image stabilizer. The AOC covers the field around the target star with a minimum search radius of 1.5 arcsec and a field of view of 1 arcmin sq. We have reached an unprecedented dynamic range of Delta m = 13 in our search for faint companions at I band. Comparison of our survey with other brown dwarf searches shows that the AOC technique is unique in its dynamic range while at the same time just as sensitive to brown dwarfs as the recent brown dwarf surveys. The present survey covered 24 target stars selected from the Gliese catalog. A total of 94 stars were detected in 16 fields. The low-latitude fields are completely dominated by background star contamination. Kolmogorov-Smirnov tests were carried out for a sample restricted to high latitudes and a sample with small angular separations. The high-latitude sample (b greater than or equal to 44 deg) appears to show spatial concentration toward target stars. The small separation sample (Delta Theta less than 20 arcsec) shows weaker dependence on Galactic coordinates than field stars. These statistical tests suggest that both the high-latitude sample and the small separation sample can include a substantial fraction of true companions. However, the nature of these putative companions is mysterious. They are too faint to be white dwarfs and too blue for brown dwarfs. Ignoring the significance of the statistical tests, we can reconcile most of the detections with distant main-sequence stars or white dwarfs except for a candidate next to GL 475. Given the small size of our sample, we conclude that considerably more targets need to be surveyed before a firm conclusion on the possibility of a new class of companions can be made.
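For readers who want to replay the statistical step, the two-sample Kolmogorov-Smirnov test used above is available in scipy; the arrays here are hypothetical stand-ins for the candidates' angular separations and a simulated field-star background, not the survey data.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    candidates = rng.uniform(1.5, 20.0, size=30)     # assumed separations (arcsec)
    field_model = rng.uniform(1.5, 30.0, size=1000)  # assumed background draw

    stat, p = ks_2samp(candidates, field_model)
    print(f"KS statistic = {stat:.3f}, p = {p:.4f}")
    # a small p argues the candidates are not drawn from the field population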
"V-junction": a novel structure for high-speed generation of bespoke droplet flows.
Ding, Yun; Casadevall i Solvas, Xavier; deMello, Andrew
2015-01-21
We present the use of microfluidic "V-junctions" as a droplet generation strategy that incorporates enhanced performance characteristics when compared to more traditional "T-junction" formats. This includes the ability to generate target-sized droplets from the very first one, efficient switching between multiple input samples, the production of a wide range of droplet sizes (and size gradients) and the facile generation of droplets with residence time gradients. Additionally, the use of V-junction droplet generators enables the suspension and subsequent resumption of droplet flows at times defined by the user. The high degree of operational flexibility allows a wide range of droplet sizes, payloads, spacings and generation frequencies to be obtained, which in turn provides for an enhanced design space for droplet-based experimentation. We show that the V-junction retains the simplicity of operation associated with T-junction formats, whilst offering functionalities normally associated with droplet-on-demand technologies.
Ultrasonic Porosity Estimation of Low-Porosity Ceramic Samples
NASA Astrophysics Data System (ADS)
Eskelinen, J.; Hoffrén, H.; Kohout, T.; Hæggström, E.; Pesonen, L. J.
2007-03-01
We report on efforts to extend the applicability of an airborne ultrasonic pulse-reflection (UPR) method towards lower porosities. UPR is a method that has been used successfully to estimate porosity and tortuosity of high porosity foams. UPR measures acoustical reflectivity of a target surface at two or more incidence angles. We used ceramic samples to evaluate the feasibility of extending the UPR range into low porosities (<35%). The validity of UPR estimates depends on pore size distribution and probing frequency as predicted by the theoretical boundary conditions of the used equivalent fluid model under the high-frequency approximation.
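As a rough illustration of how two-angle reflectivity can constrain porosity and tortuosity, the sketch below uses the reflection coefficient of a rigid-frame porous half-space in the lossless high-frequency limit of an equivalent fluid model, r(θ) = (α∞ cosθ − φ√(α∞ − sin²θ)) / (α∞ cosθ + φ√(α∞ − sin²θ)), and inverts two measurements numerically. This is a simplified forward model under stated assumptions, not necessarily the exact one used in the study (which must also handle viscous/thermal losses and pore-size effects).

    import numpy as np
    from scipy.optimize import fsolve

    def refl(theta, phi, alpha):
        """Reflection coefficient of a rigid-frame porous half-space in the
        lossless high-frequency equivalent-fluid limit (porosity phi,
        tortuosity alpha, incidence angle theta in radians)."""
        s = np.sqrt(alpha - np.sin(theta) ** 2)
        return (alpha * np.cos(theta) - phi * s) / (alpha * np.cos(theta) + phi * s)

    def invert_two_angles(r1, r2, th1, th2, guess=(0.3, 1.5)):
        eqs = lambda x: (refl(th1, *x) - r1, refl(th2, *x) - r2)
        return fsolve(eqs, guess)   # -> (porosity, tortuosity)

    # synthetic round trip: make reflectivities for phi=0.30, alpha=1.4, invert
    th1, th2 = np.deg2rad(0.0), np.deg2rad(30.0)
    print(invert_two_angles(refl(th1, 0.30, 1.4), refl(th2, 0.30, 1.4), th1, th2))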
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzke, Brett D.; Wilson, John E.; Hathaway, J.
2008-02-12
Statistically defensible methods are presented for developing geophysical detector sampling plans and analyzing data for munitions response sites where unexploded ordnance (UXO) may exist. Detection methods for identifying areas of elevated anomaly density from background density are shown. Additionally, methods are described which aid in the choice of transect pattern and spacing to assure, with a specified degree of confidence, that a target area (TA) of specific size, shape, and anomaly density will be identified using the detection methods. Methods for evaluating the sensitivity of designs to variation in certain parameters are also discussed. Methods presented have been incorporated into the Visual Sample Plan (VSP) software (free at http://dqo.pnl.gov/vsp) and demonstrated at multiple sites in the United States. Application examples from actual transect designs and surveys from the previous two years are demonstrated.
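To illustrate the spacing logic in miniature (VSP's actual computation also models anomaly density, target shape, and detector performance), the probability that a set of parallel transects traverses a target area can be approximated geometrically:

    def traversal_probability(target_width_m, transect_spacing_m):
        """Crude approximation: a randomly placed target whose footprint
        spans target_width_m perpendicular to parallel transects spaced
        transect_spacing_m apart is crossed with probability ~ w / s."""
        return min(1.0, target_width_m / transect_spacing_m)

    # spacing of ~52.6 m gives a 50 m wide target area ~95% traversal probability
    print(traversal_probability(50.0, 52.6))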
Kinematic measurement from panned cinematography.
Gervais, P; Bedingfield, E W; Wronko, C; Kollias, I; Marchiori, G; Kuntz, J; Way, N; Kuiper, D
1989-06-01
Traditional 2-D cinematography has used a stationary camera with its optical axis perpendicular to the plane of motion. This method has constrained the size of the object plane or has introduced potential errors from a small subject image size with large object field widths. The purpose of this study was to assess a panning technique that could overcome the inherent limitations of small object field widths, small object image sizes and limited movement samples. The proposed technique used a series of reference targets in the object field that provided the necessary scales and origin translations. A 102 m object field was panned. Comparisons between criterion distances and film measured distances for field widths of 46 m and 22 m resulted in absolute mean differences that were comparable to that of the traditional method.
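A minimal sketch of the reference-target idea: two markers with known real-world separation in a frame give that frame's scale, and one marker provides the local origin for translating measurements into object-field coordinates. All names and numbers are illustrative, not the study's calibration procedure.

    def frame_scale(marker_a_px, marker_b_px, known_separation_m):
        """Metres per pixel for one frame, from two reference targets."""
        dx = marker_b_px[0] - marker_a_px[0]
        dy = marker_b_px[1] - marker_a_px[1]
        return known_separation_m / ((dx ** 2 + dy ** 2) ** 0.5)

    def to_object_coords(point_px, origin_px, m_per_px):
        """Translate to the reference origin and convert to metres."""
        return ((point_px[0] - origin_px[0]) * m_per_px,
                (point_px[1] - origin_px[1]) * m_per_px)

    s = frame_scale((120.0, 340.0), (880.0, 352.0), 10.0)  # markers 10 m apart
    print(to_object_coords((500.0, 300.0), (120.0, 340.0), s))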
Skeletal and Clinical Effects of Exoskeletal-Assisted Gait
2016-10-01
assisted gait and derive estimates of loads applied to the bones. The research team continues to recruit study volunteers. As we submit this annual report, we continue to actively schedule study volunteers for screening in order to reach the target sample size for... volunteers and apply the biomechanical models developed so far to the datasets that will be collected from study volunteers. We anticipate continuing to...
Marotta, Phillip L; Voisin, Dexter R
2017-04-01
Mounting literature suggests that parental monitoring, risky peer norms, and future orientation correlate with illicit drug use and delinquency. However, few studies have investigated these constructs simultaneously in a single statistical model with low-income African American youth. This study examined parental monitoring, peer norms, and future orientation as primary pathways to drug use and delinquent behaviors in a large sample of African American urban adolescents. A path model tested direct paths from peer norms, parental monitoring, and future orientation to drug use and delinquency outcomes after adjusting for potential confounders such as age, socioeconomic status, and sexual orientation in a sample of 541 African American youth. Greater scores on measures of risky peer norms were associated with heightened risk of delinquency, with an effect size twice the magnitude of the protective effect of future orientation. Regarding substance use, greater perceived risky peer norms correlated with an increased likelihood of substance use, with a standardized effect size 3.33 times the magnitude of the protective effect of parental monitoring. Findings from this study suggest that interventions targeting risky peer norms among African American adolescents may have a greater impact on reducing substance use and delinquency than interventions exclusively targeting parental monitoring or future orientation.
Comet nucleus and asteroid sample return missions
NASA Technical Reports Server (NTRS)
Melton, Robert G.; Thompson, Roger C.; Starchville, Thomas F., Jr.; Adams, C.; Aldo, A.; Dobson, K.; Flotta, C.; Gagliardino, J.; Lear, M.; Mcmillan, C.
1992-01-01
During the 1991-92 academic year, the Pennsylvania State University has developed three sample return missions: one to the nucleus of comet Wild 2, one to the asteroid Eros, and one to three asteroids located in the Main Belt. The primary objective of the comet nucleus sample return mission is to rendezvous with a short period comet and acquire a 10 kg sample for return to Earth. Upon rendezvous with the comet, a tethered coring and sampler drill will contact the surface and extract a two-meter core sample from the target site. Before the spacecraft returns to Earth, a monitoring penetrator containing scientific instruments will be deployed for gathering long-term data about the comet. A single asteroid sample return mission to the asteroid 433 Eros (chosen for proximity and launch opportunities) will extract a sample from the asteroid surface for return to Earth. To limit overall mission cost, most of the mission design uses current technologies, except the sampler drill design. The multiple asteroid sample return mission could best be characterized through its use of future technology including an optical communications system, a nuclear power reactor, and a low-thrust propulsion system. A low-thrust trajectory optimization code (QuickTop 2) obtained from the NASA LeRC helped in planning the size of major subsystem components, as well as the trajectory between targets.
Beno, Sarah M; Stasiewicz, Matthew J; Andrus, Alexis D; Ralyea, Robert D; Kent, David J; Martin, Nicole H; Wiedmann, Martin; Boor, Kathryn J
2016-12-01
Pathogen environmental monitoring programs (EMPs) are essential for food processing facilities of all sizes that produce ready-to-eat food products exposed to the processing environment. We developed, implemented, and evaluated EMPs targeting Listeria spp. and Salmonella in nine small cheese processing facilities, including seven farmstead facilities. Individual EMPs with monthly sample collection protocols were designed specifically for each facility. Salmonella was detected in only one facility, with likely introduction from the adjacent farm indicated by pulsed-field gel electrophoresis data. Listeria spp. were isolated from all nine facilities during routine sampling. The overall Listeria spp. (other than Listeria monocytogenes) and L. monocytogenes prevalences in the 4,430 environmental samples collected were 6.03 and 1.35%, respectively. Molecular characterization and subtyping data suggested persistence of a given Listeria spp. strain in seven facilities and persistence of L. monocytogenes in four facilities. To assess routine sampling plans, validation sampling for Listeria spp. was performed in seven facilities after at least 6 months of routine sampling. This validation sampling was performed by independent individuals and included collection of 50 to 150 samples per facility, based on statistical sample size calculations. Two of the facilities had a significantly higher frequency of detection of Listeria spp. during the validation sampling than during routine sampling, whereas two other facilities had significantly lower frequencies of detection. This study provides a model for a science- and statistics-based approach to developing and validating pathogen EMPs.
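The study does not spell out its sample size formula, but a common rule for presence/absence environmental sampling is the number of samples needed to detect at least one positive with confidence C at true prevalence p, n = ln(1 − C) / ln(1 − p). A sketch under that assumption reproduces the 50-150 sample range for prevalences of a few percent:

    import math

    def detection_sample_size(prevalence, confidence=0.95):
        """Samples needed to observe >= 1 positive with the given confidence,
        assuming independent samples and a constant prevalence."""
        return math.ceil(math.log(1 - confidence) / math.log(1 - prevalence))

    print(detection_sample_size(0.06))  # 49 samples at 6% prevalence
    print(detection_sample_size(0.02))  # 149 samples at 2% prevalence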
Fowler, Dawnovise N; Faulkner, Monica
2011-12-01
In this article, meta-analytic techniques are used to examine existing intervention studies (n = 11) to determine their effects on substance abuse among female samples of intimate partner abuse (IPA) survivors. This research serves as a starting point for greater attention in research and practice to the implementation of evidence-based, integrated services to address co-occurring substance abuse and IPA victimization among women as major intersecting public health problems. The results show greater effects in three main areas. First, greater effect sizes exist in studies where larger numbers of women experienced current IPA. Second, studies with a lower mean age also showed greater effect sizes than studies with a higher mean age. Lastly, studies with smaller sample sizes have greater effects. This research helps to facilitate cohesion in the knowledge base on this topic, and the findings of this meta-analysis, in particular, contribute needed information to gaps in the literature on the level of promise of existing interventions to impact substance abuse in this underserved population. Published by Elsevier Inc.
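For context, the pooling step behind such effect-size comparisons is usually a random-effects combination of per-study estimates. The sketch below implements the standard DerSimonian-Laird estimator on made-up effect sizes and variances; the original studies' data are not reproduced here.

    import numpy as np

    def dersimonian_laird(effects, variances):
        """Random-effects pooled estimate from per-study effect sizes."""
        effects, variances = np.asarray(effects), np.asarray(variances)
        w = 1.0 / variances                      # fixed-effect weights
        fixed = np.sum(w * effects) / np.sum(w)
        q = np.sum(w * (effects - fixed) ** 2)   # Cochran's Q
        df = len(effects) - 1
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)            # between-study variance
        w_re = 1.0 / (variances + tau2)
        pooled = np.sum(w_re * effects) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        return pooled, se, tau2

    # hypothetical standardized mean differences from 5 studies
    print(dersimonian_laird([0.3, 0.5, 0.2, 0.6, 0.4],
                            [0.02, 0.05, 0.03, 0.08, 0.04]))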
NASA Astrophysics Data System (ADS)
Tian, Biao; Liu, Yang; Xu, Shiyou; Chen, Zengping
2014-01-01
Interferometric inverse synthetic aperture radar (InISAR) imaging provides complementary information to monostatic inverse synthetic aperture radar (ISAR) imaging. This paper proposes a new InISAR imaging system for space targets based on wideband direct sampling with two antennas. The system is easy to realize in engineering, since the motion trajectory of space targets can be known in advance, and it is simpler than a three-receiver configuration. In the preprocessing step, high-speed movement compensation is carried out by designing an adaptive matched filter containing the speed obtained from the narrowband information. Then, coherent processing and the keystone transform for ISAR imaging are adopted to preserve the phase history of each antenna. Through appropriate collocation of the system, image registration and phase unwrapping can be avoided. For cases where this condition is not satisfied, the influence of baseline variation is analyzed and a compensation method is adopted. The target's size can then be obtained by interferometric processing of the two complex ISAR images. Experimental results prove the validity of the analysis and of the three-dimensional imaging algorithm.
Time-reversal optical tomography: detecting and locating extended targets in a turbid medium
NASA Astrophysics Data System (ADS)
Wu, Binlin; Cai, W.; Xu, M.; Gayen, S. K.
2012-03-01
Time Reversal Optical Tomography (TROT) is developed to locate extended target(s) in a highly scattering turbid medium and to estimate their optical strength and size. The approach uses the diffusion approximation of the radiative transfer equation for light propagation, along with the time-reversal (TR) multiple signal classification (MUSIC) scheme for separating signal and noise subspaces to assess target location. A MUSIC pseudospectrum is calculated using the eigenvectors of the TR matrix T; its poles provide target locations. Based on the pseudospectrum contours, retrieval of target size is modeled as an optimization problem using a "local contour" method. The eigenvalues of T are related to the optical strengths of the targets. The efficacy of TROT in obtaining the location, size, and optical strength of one absorptive target, one scattering target, and two absorptive targets, each at different noise levels, was tested using simulated data. Target locations were always accurately determined. Error in optical strength estimates was small even at the 20% noise level. Target size and shape were more sensitive to noise. Results from simulated data demonstrate high potential for applying TROT in practical biomedical imaging.
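The subspace step can be sketched generically: decompose the multistatic response matrix, take the noise subspace, and scan a grid with a forward kernel; pseudospectrum peaks mark target locations. The kernel below is a free-space-style placeholder, not the diffusion-approximation Green's function used by TROT.

    import numpy as np

    def music_pseudospectrum(K, kernel, grid, n_targets):
        """K: detectors x detectors multistatic response matrix;
        kernel(r) returns the detector response vector for trial point r."""
        U, s, Vh = np.linalg.svd(K)
        noise = U[:, n_targets:]               # noise subspace
        out = []
        for r in grid:
            g = kernel(r)
            g = g / np.linalg.norm(g)
            # tiny projection onto the noise subspace -> large pseudospectrum
            out.append(1.0 / np.linalg.norm(noise.conj().T @ g) ** 2)
        return np.array(out)

    # demo: two point targets on a line, with a placeholder complex kernel
    det = np.linspace(0.0, 1.0, 8)
    g = lambda r: np.exp(1j * 40.0 * np.abs(det - r)) / (np.abs(det - r) + 0.05)
    K = np.outer(g(0.3), g(0.3)) + np.outer(g(0.7), g(0.7))
    grid = np.linspace(0.0, 1.0, 201)
    spec = music_pseudospectrum(K, g, grid, n_targets=2)
    print(grid[spec.argmax()])  # strongest peak; a second peak marks the other target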
Contrast, size, and orientation-invariant target detection in infrared imagery
NASA Astrophysics Data System (ADS)
Zhou, Yi-Tong; Crawshaw, Richard D.
1991-08-01
Automatic target detection in IR imagery is a very difficult task due to variations in target brightness, shape, size, and orientation. In this paper, the authors present a contrast-, size-, and orientation-invariant algorithm based on Gabor functions for detecting targets from a single IR image frame. The algorithm consists of three steps. First, it locates potential targets using low-resolution Gabor functions, which resist noise and background clutter effects; second, it removes false targets and eliminates redundant target points based on a similarity measure. These two steps mimic human vision processing but differ from Zeevi's Foveating Vision System. Finally, it uses both low- and high-resolution Gabor functions to verify target existence. This algorithm has been successfully tested on several IR images containing multiple examples of military vehicles of different sizes and brightnesses in various background scenes and orientations.
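A present-day sketch of the first stage (coarse Gabor filtering to flag candidate target pixels) using scipy; the kernel parameters, orientations, and threshold are illustrative rather than the authors' values.

    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_kernel(size=31, sigma=6.0, wavelength=12.0, theta=0.0):
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
        return envelope * np.cos(2 * np.pi * xr / wavelength)

    def candidate_targets(image, threshold=3.0):
        """Flag pixels whose multi-orientation Gabor energy is an outlier."""
        energy = sum(np.abs(fftconvolve(image, gabor_kernel(theta=t), mode="same"))
                     for t in np.linspace(0, np.pi, 4, endpoint=False))
        z = (energy - energy.mean()) / energy.std()
        return z > threshold   # boolean mask of potential target pixels

    # demo: a bright blob on a noisy background is flagged
    rng = np.random.default_rng(0)
    img = rng.normal(size=(128, 128))
    img[60:68, 60:68] += 4.0
    print(candidate_targets(img).sum(), "candidate pixels")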
Code of Federal Regulations, 2010 CFR
2010-10-01
48 CFR 52.219-21, Small Business Size Representation for Targeted Industry Categories Under the Small Business Competitiveness Demonstration Program (Federal Acquisition Regulations System).
Krupka, Kenneth M; Parkhurst, Mary Ann; Gold, Kenneth; Arey, Bruce W; Jenson, Evan D; Guilmette, Raymond A
2009-03-01
The impact of depleted uranium (DU) penetrators against an armored target causes erosion and fragmentation of the penetrators, the extent of which is dependent on the thickness and material composition of the target. Vigorous oxidation of the DU particles and fragments creates an aerosol of DU oxide particles and DU particle agglomerations combined with target materials. Aerosols from the Capstone DU aerosol study, in which vehicles were perforated by DU penetrators, were evaluated for their oxidation states using x-ray diffraction (XRD), and particle morphologies were examined using scanning electron microscopy/energy dispersive spectroscopy (SEM/EDS). The oxidation state of a DU aerosol is important as it offers a clue to its solubility in lung fluids. The XRD analysis showed that the aerosols evaluated were a combination primarily of U3O8 (insoluble) and UO3 (relatively more soluble) phases, though intermediate phases resembling U4O9 and other oxides were prominent in some samples. Analysis of particle residues in the micrometer-size range by SEM/EDS provided microstructural information such as phase composition and distribution, fracture morphology, size distribution, and material homogeneity. Observations from SEM analysis show a wide variability in the shapes of the DU particles. Some of the larger particles were spherical, occasionally with dendritic or lobed surface structures. Others appear to have fractures that perhaps resulted from abrasion and comminution, or shear bands that developed from plastic deformation of the DU material. Amorphous conglomerates containing metals other than uranium were also common, especially with the smallest particle sizes. A few samples seemed to contain small bits of nearly pure uranium metal, which were verified by EDS to have a higher uranium content exceeding that expected for uranium oxides. Results of the XRD and SEM/EDS analyses were used in other studies described in this issue of Health Physics to interpret the results of lung solubility studies and in selecting input parameters for dose assessments.
Racial bias in judgments of physical size and formidability: From size to threat.
Wilson, John Paul; Hugenberg, Kurt; Rule, Nicholas O
2017-07-01
Black men tend to be stereotyped as threatening and, as a result, may be disproportionately targeted by police even when unarmed. Here, we found evidence that biased perceptions of young Black men's physical size may play a role in this process. The results of 7 studies showed that people have a bias to perceive young Black men as bigger (taller, heavier, more muscular) and more physically threatening (stronger, more capable of harm) than young White men. Both bottom-up cues of racial prototypicality and top-down information about race supported these misperceptions. Furthermore, this racial bias persisted even among a target sample from whom upper-body strength was controlled (suggesting that racial differences in formidability judgments are a product of bias rather than accuracy). Biased formidability judgments in turn promoted participants' justifications of hypothetical use of force against Black suspects of crime. Thus, perceivers appear to integrate multiple pieces of information to ultimately conclude that young Black men are more physically threatening than young White men, believing that they must therefore be controlled using more aggressive measures. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Lanata, C F; Black, R E
1991-01-01
Traditional survey methods, which are generally costly and time-consuming, usually provide information at the regional or national level only. The utilization of lot quality assurance sampling (LQAS) methodology, developed in industry for quality control, makes it possible to use small sample sizes when conducting surveys in small geographical or population-based areas (lots). This article describes the practical use of LQAS for conducting health surveys to monitor health programmes in developing countries. Following a brief description of the method, the article explains how to build a sample frame and conduct the sampling to apply LQAS under field conditions. A detailed description of the procedure for selecting a sampling unit to monitor the health programme and a sample size is given. The sampling schemes utilizing LQAS applicable to health surveys, such as simple- and double-sampling schemes, are discussed. The interpretation of the survey results and the planning of subsequent rounds of LQAS surveys are also discussed. When describing the applicability of LQAS in health surveys in developing countries, the article considers current limitations for its use by health planners in charge of health programmes, and suggests ways to overcome these limitations through future research. It is hoped that with increasing attention being given to industrial sampling plans in general, and LQAS in particular, their utilization to monitor health programmes will provide health planners in developing countries with powerful techniques to help them achieve their health programme targets.
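A minimal numeric illustration of an LQAS decision rule (the 19/3 scheme shown is a common choice in health surveys, used here only as an example): sample n units, "accept" the lot if at most d failures are seen, and verify the two error probabilities the design trades off.

    from math import comb

    def binom_cdf(k, n, p):
        return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

    n, d = 19, 3                    # sample 19 units, accept if <= 3 failures
    p_good, p_bad = 0.10, 0.50      # acceptable vs unacceptable failure rates
    alpha = 1 - binom_cdf(d, n, p_good)  # P(reject a good lot)
    beta = binom_cdf(d, n, p_bad)        # P(accept a bad lot)
    print(f"alpha = {alpha:.3f}, beta = {beta:.4f}")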
A New On-the-Fly Sampling Method for Incoherent Inelastic Thermal Neutron Scattering Data in MCNP6
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlou, Andrew Theodore; Brown, Forrest B.; Ji, Wei
2014-09-02
At thermal energies, the scattering of neutrons in a system is complicated by the comparable velocities of the neutron and target, resulting in competing upscattering and downscattering events. The neutron wavelength is also similar in size to the target's interatomic spacing, making the scattering process a quantum mechanical problem. Because of the complicated nature of scattering at low energies, the thermal data files in ACE format used in continuous-energy Monte Carlo codes are quite large (on the order of megabytes for a single temperature and material). In this paper, a new storage and sampling method is introduced that is orders of magnitude smaller in size and is used to sample scattering parameters at any temperature on-the-fly. In addition to the reduction in storage, the need to pre-generate thermal scattering data tables at fine temperature intervals has been eliminated. This is advantageous for multiphysics simulations, which may involve temperatures not known in advance. A new module was written for MCNP6 that bypasses the current S(α,β) table lookup in favor of the new format. The new on-the-fly sampling method was tested for graphite in two benchmark problems at ten temperatures: 1) an eigenvalue test with a fuel compact of uranium oxycarbide fuel homogenized into a graphite matrix, and 2) a surface current test with a "broomstick" problem with a monoenergetic point source. The largest eigenvalue difference was 152 pcm for T = 1200 K. For the temperatures and incident energies chosen for the broomstick problem, the secondary neutron spectrum showed good agreement with the traditional S(α,β) sampling method. These preliminary results show that sampling thermal scattering data on-the-fly is a viable option to eliminate both the storage burden of keeping thermal data at discrete temperatures and the need to know temperatures before simulation runtime.
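A heavily simplified sketch of the on-the-fly idea, not the authors' actual parameterization: rather than storing full S(α,β) tables at many discrete temperatures, store a compact temperature-parameterized form of the sampling distribution and reconstruct it at whatever temperature the simulation requests. The Maxwellian-like inverse CDF below is purely illustrative.

    import numpy as np

    K_B = 8.617e-5  # Boltzmann constant, eV/K

    def sample_secondary_energy(temperature_K, rng):
        """Reconstruct an inverse CDF at the requested temperature from a
        compact analytic stand-in, then sample it by interpolation."""
        q = np.linspace(1e-4, 1.0 - 1e-4, 64)
        inv_cdf = -K_B * temperature_K * np.log(1.0 - q)  # illustrative only
        return float(np.interp(rng.random(), q, inv_cdf))

    rng = np.random.default_rng(7)
    print([round(sample_secondary_energy(1200.0, rng), 4) for _ in range(3)])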
Wang, Sue-Jane; O'Neill, Robert T; Hung, Hm James
2010-10-01
The current practice for seeking genomically favorable patients in randomized controlled clinical trials is to use genomic convenience samples. Our aims are to discuss the extent of imbalance, confounding, bias, design efficiency loss, type I error, and type II error that can occur in the evaluation of convenience samples, particularly when they are small; to articulate statistical considerations for a reasonable sample size to minimize the chance of imbalance; and to highlight the importance of replicating the subgroup finding in independent studies. Four case examples reflecting recent regulatory experiences are used to underscore the problems with convenience samples. The probability of imbalance for a pre-specified subgroup is provided to elucidate the sample size needed to minimize the chance of imbalance. We use an example drug development program to highlight the level of scientific rigor needed, with evidence replicated for a pre-specified subgroup claim. The convenience samples evaluated ranged from 18% to 38% of the intent-to-treat samples, with sample sizes ranging from 100 to 5000 patients per arm. Baseline imbalance can occur with probability higher than 25%. Mild to moderate multiple confounders yielding the same directional bias in favor of the treated group can make treatment groups incomparable at baseline and result in a false positive conclusion that there is a treatment difference. Conversely, if the same directional bias favors the placebo group or there is loss in design efficiency, the type II error can increase substantially. Pre-specification of a genomic subgroup hypothesis is useful only for some degree of type I error control. Complete ascertainment of genomic samples in a randomized controlled trial should be the first step to explore whether a favorable genomic patient subgroup suggests a treatment effect when there is no clear prior knowledge and understanding about how the mechanism of a drug target affects the clinical outcome of interest. When stratified randomization based on genomic biomarker status cannot be implemented in designing a pharmacogenomics confirmatory clinical trial, and there is one genomic biomarker prognostic for clinical response, then as a general rule of thumb a sample size of at least 100 patients may be needed for the lower-prevalence genomic subgroup to minimize the chance of an imbalance of 20% or more in the prevalence of the genomic marker. The sample size may need to be at least 150, 350, and 1350, respectively, if an imbalance of 15%, 10%, or 5% is of concern.
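The quoted imbalance probabilities are easy to reproduce by simulation: randomize patients carrying a biomarker of given prevalence into two equal arms and count how often the between-arm difference in observed prevalence reaches the threshold. The 30% prevalence below is an assumed value for illustration.

    import numpy as np

    def p_imbalance(n_per_arm, prevalence, threshold, sims=100_000, seed=1):
        rng = np.random.default_rng(seed)
        a = rng.binomial(n_per_arm, prevalence, size=sims) / n_per_arm
        b = rng.binomial(n_per_arm, prevalence, size=sims) / n_per_arm
        return np.mean(np.abs(a - b) >= threshold)

    # chance of a >= 20% absolute difference in marker prevalence between arms
    print(p_imbalance(n_per_arm=50, prevalence=0.3, threshold=0.20))
    print(p_imbalance(n_per_arm=100, prevalence=0.3, threshold=0.20))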
The (un)reliability of item-level semantic priming effects.
Heyman, Tom; Bruninx, Anke; Hutchison, Keith A; Storms, Gert
2018-04-05
Many researchers have tried to predict semantic priming effects using a myriad of variables (e.g., prime-target associative strength or co-occurrence frequency). The idea is that relatedness varies across prime-target pairs, which should be reflected in the size of the priming effect (e.g., cat should prime dog more than animal does). However, it is only insightful to predict item-level priming effects if they can be measured reliably. Thus, in the present study we examined the split-half and test-retest reliabilities of item-level priming effects under conditions that should discourage the use of strategies. The resulting priming effects proved extremely unreliable, and reanalyses of three published priming datasets revealed similar cases of low reliability. These results imply that previous attempts to predict semantic priming were unlikely to be successful. However, one study with an unusually large sample size yielded more favorable reliability estimates, suggesting that big data, in terms of items and participants, should be the future for semantic priming research.
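The reliability computation at issue is straightforward to sketch: split the subjects into halves, compute each item's priming effect (unrelated minus related RT) in both halves, correlate across items, and apply the Spearman-Brown correction. The random inputs in the demo deliberately contain no true item-level signal, so the estimate hovers near zero.

    import numpy as np

    def item_level_reliability(related_rt, unrelated_rt, seed=0):
        """rows = items, cols = subjects; priming = unrelated - related RT."""
        rng = np.random.default_rng(seed)
        n_subj = related_rt.shape[1]
        idx = rng.permutation(n_subj)
        h1, h2 = idx[:n_subj // 2], idx[n_subj // 2:]
        eff1 = (unrelated_rt[:, h1] - related_rt[:, h1]).mean(axis=1)
        eff2 = (unrelated_rt[:, h2] - related_rt[:, h2]).mean(axis=1)
        r = np.corrcoef(eff1, eff2)[0, 1]
        return 2 * r / (1 + r)   # Spearman-Brown correction

    rng = np.random.default_rng(1)
    rel = item_level_reliability(rng.normal(600, 50, (100, 40)),
                                 rng.normal(620, 50, (100, 40)))
    print(round(rel, 3))   # near 0: no true item-level signal in random data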
Stanton, Cynthia; Nand, Deepak Nitya; Koski, Alissa; Mirzabagi, Ellie; Brooke, Steve; Grady, Breanne; Mullany, Luke C
2014-11-13
Surveillance of drug quality for antibiotics, antiretrovirals, antimalarials and vaccines is better established than surveillance for maternal health drugs in low-income countries, particularly uterotonic drugs for the prevention and treatment of postpartum hemorrhage. The objectives of this study are to: assess private sector accessibility of four drugs used for uterotonic purposes (oxytocin, methylergometrine, misoprostol, valethamate bromide); and to assess potency of oxytocin and methylergometrine ampoules purchased by simulated clients. The study was conducted in Hassan and Bagalkot districts in Karnataka state and Agra and Gorakhpur districts in Uttar Pradesh state. A sample of 877 private pharmacies was selected (using a stratified, systematic sampling with random start), among which 847 were successfully visited. The target sample size for assessment of accessibility was 50 pharmacies per drug, per district. The target sample size for potency assessment was 100 purchases each of oxytocin and methylergometrine across all districts. Successful drug purchases varied by state. In Agra and Gorakhpur, 90%-100% of visits for each of the drugs resulted in a purchase. In Bagalkot and Hassan, only 29%-52% of visits for each drug resulted in a purchase. Regarding potency, the percent of active pharmaceutical ingredient was assessed using United States Pharmacopeia monograph #33 for both drugs; 193 and 188 ampoules of oxytocin and methylergometrine, respectively, were assessed. The percent of oxytocin ampoules outside manufacturer specification ranged from 33%-40% in Karnataka and from 22%-50% in Uttar Pradesh. In Bagalkot and Hassan, 96% and 100% of the methylergometrine ampoules were outside manufacturer specification, respectively. In Agra and Gorakhpur, 54% and 44% were outside manufacturer specification, respectively. Private sector accessibility of uterotonic drugs in study districts in Karnataka warrants attention. Most importantly, interventions to assure quality oxytocin and particularly methylergometrine are needed in study districts in both states.
Fatoyinbo, Henry O; McDonnell, Martin C; Hughes, Michael P
2014-07-01
Detection of pathogens from environmental samples is often hampered by sensors interacting with environmental particles such as soot, pollen, or environmental dust such as soil or clay. These particles may be of similar size to the target bacterium, preventing removal by filtration, but may non-specifically bind to sensor surfaces, fouling them and causing artefactual results. In this paper, we report the selective manipulation of soil particles using an AC electrokinetic microfluidic system. Four heterogeneous soil samples (smectic clay, kaolinitic clay, peaty loam, and sandy loam) were characterised using dielectrophoresis to identify the electrical difference from a target organism. A flow-cell device was then constructed to evaluate dielectrophoretic separation of bacteria and clay in a continuous flow-through mode. The average separation efficiency of the system across all soil types was found to be 68.7%, with a maximal separation efficiency for kaolinitic clay of 87.6%. This represents the first attempt to separate soil particles from bacteria using dielectrophoresis and indicates that the technique shows significant promise; with appropriate system optimisation, we believe that this preliminary study represents an opportunity to develop a simple yet highly effective sample processing system.
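The physics behind the separation is the dielectrophoretic force, whose sign follows the real part of the Clausius-Mossotti factor. The sketch below evaluates it for a homogeneous sphere (a strong simplification for both bacteria and clay), with placeholder property values; a sign change across frequency is what allows one particle type to be attracted while another is repelled.

    import numpy as np

    def re_clausius_mossotti(freq_hz, eps_p, sigma_p, eps_m, sigma_m):
        """Re[f_CM] for a homogeneous sphere in a medium (eps are relative)."""
        eps0 = 8.854e-12
        w = 2 * np.pi * freq_hz
        ep = eps_p * eps0 - 1j * sigma_p / w   # complex permittivity, particle
        em = eps_m * eps0 - 1j * sigma_m / w   # complex permittivity, medium
        return ((ep - em) / (ep + 2 * em)).real

    # placeholder properties: a bacterium-like particle in a dilute medium
    f = np.logspace(3, 8, 6)
    print(re_clausius_mossotti(f, eps_p=60, sigma_p=0.1, eps_m=78, sigma_m=0.01))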
Highest Resolution Image of Dust and Sand Yet Acquired on Mars
NASA Technical Reports Server (NTRS)
2008-01-01
This mosaic of four side-by-side microscope images (one a color composite) was acquired by the Optical Microscope, a part of the Microscopy, Electrochemistry, and Conductivity Analyzer (MECA) instrument suite on NASA's Phoenix Mars Lander. Taken on the ninth Martian day of the mission, or Sol 9 (June 3, 2008), the image shows a 3 millimeter (0.12 inch) diameter silicone target after it has been exposed to dust kicked up by the landing. It is the highest resolution image of dust and sand ever acquired on Mars. The silicone substrate provides a sticky surface for holding the particles to be examined by the microscope.
Martian Particles on Microscope's Silicone Substrate (Figure 1): The particles are on a silicone substrate target 3 millimeters (0.12 inch) in diameter, which provides a sticky surface for holding the particles while the microscope images them. Blow-ups of four of the larger particles are shown in the center. These particles range in size from about 30 microns to 150 microns (from about one one-thousandth of an inch to six one-thousandths of an inch).
Possible Nature of Particles Viewed by Mars Lander's Optical Microscope (Figure 2): The color composite on the right was acquired to examine dust that had fallen onto an exposed surface. The translucent particle highlighted at bottom center is of comparable size to white particles in a Martian soil sample (upper pictures) seen two sols earlier inside the scoop of Phoenix's Robotic Arm, as imaged by the lander's Robotic Arm Camera. The white particles may be examples of the abundant salts that have been found in the Martian soil by previous missions. Further investigations will be needed to determine the white material's composition and whether translucent particles like the one in this microscopic image are found in Martian soil samples.
Scale of Phoenix Optical Microscope Images (Figure 3): This set of pictures gives context for the size of individual images from the Optical Microscope on NASA's Mars Phoenix Lander. The picture in the upper left was taken on Mars by the Surface Stereo Imager on Phoenix. It shows a portion of the microscope's sample stage exposed to accept a sample. In this case, the sample was of dust kicked up by the spacecraft thrusters during landing. Later samples will include soil delivered by the Robotic Arm. The other pictures were taken on Earth. They show close-ups of circular substrates on which the microscopic samples rest when the microscope images them. Each circular substrate target is 3 millimeters (about one-tenth of an inch) in diameter. Each image taken by the microscope covers an area 2 millimeters by 1 millimeter (0.08 inch by 0.04 inch), the size of a large grain of sand. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Abdelaziz, Mohamed; Sherif, Lotfy; ElKhiary, Mostafa; Nair, Sanjeeta; Shalaby, Shahinaz; Mohamed, Sara; Eziba, Noura; El-Lakany, Mohamed; Curiel, David; Ismail, Nahed; Diamond, Michael P.; Al-Hendy, Ayman
2016-01-01
Background: Gene therapy is a potentially effective non-surgical approach for the treatment of uterine leiomyoma. We demonstrated that targeted adenovirus vector, Ad-SSTR-RGD-TK/GCV, was highly effective in selectively inducing apoptosis and inhibiting proliferation of human leiomyoma cells in vitro while sparing normal myometrial cells. Study design: An in-vivo study, to compare efficacy and safety of modified adenovirus vector Ad-SSTR-RGD-TK/GCV versus untargeted vector for treatment of leiomyoma. Materials and methods: Female nude mice were implanted with rat leiomyoma cells subcutaneously. Then mice were randomized into three groups. Group 1 received Ad-LacZ (marker gene), Group 2 received untargeted Ad-TK, and Group 3 received the targeted Ad-SSTR-RGD-TK. Tumors were measured weekly for 4 weeks. Then mice were sacrificed and tissue samples were collected. Evaluation of markers of apoptosis, proliferation, extracellular matrix, and angiogenesis was performed using Western Blot & Immunohistochemistry. Statistical analysis was done using ANOVA. Dissemination of adenovirus was assessed by PCR. Results: In comparison with the untargeted vector, the targeted adenoviral vector significantly shrank leiomyoma size (P < 0.05), reduced expression of proliferation marker (PCNA) (P < 0.05), induced expression of apoptotic protein, c-PARP-1, (P < 0.05) and inhibited expression of extracellular matrix-related genes (TGF beta 3) and angiogenesis-related genes (VEGF & IGF-1) (P < 0.01). There were no detectable adenovirus in tested tissues other than leiomyoma lesions with both targeted and untargeted adenovirus. Conclusion: Targeted adenovirus, effectively reduces tumor size in leiomyoma without dissemination to other organs. Further evaluation of this localized targeted strategy for gene therapy is needed in appropriate preclinical humanoid animal models in preparation for a future pilot human trial. PMID:26884457
Holcombe, Alex O; Chen, Wei-Ying
2013-01-09
Overall performance when tracking moving targets is known to be poorer for larger numbers of targets, but the specific effect on tracking's temporal resolution has never been investigated. We document a broad range of display parameters for which visual tracking is limited by temporal frequency (the interval between when a target is at each location and a distracter moves in and replaces it) rather than by object speed. We tested tracking of one, two, and three moving targets while the eyes remained fixed. Variation of the number of distracters and their speed revealed both speed limits and temporal frequency limits on tracking. The temporal frequency limit fell from 7 Hz with one target to 4 Hz with two targets and 2.6 Hz with three targets. The large size of this performance decrease implies that in the two-target condition participants would have done better by tracking only one of the two targets and ignoring the other. These effects are predicted by serial models involving a single tracking focus that must switch among the targets, sampling the position of only one target at a time. If parallel processing theories are to explain why dividing the tracking resource reduces temporal resolution so markedly, supplemental assumptions will be required.
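The serial account makes a simple quantitative prediction worth spelling out: a single focus sampling one target at a time yields a per-target rate of the one-target limit divided by the number of targets. Comparing that naive 1/n prediction with the reported limits:

    one_target_limit_hz = 7.0
    observed = {1: 7.0, 2: 4.0, 3: 2.6}  # reported temporal frequency limits
    for n, obs in observed.items():
        predicted = one_target_limit_hz / n  # naive serial-sampling prediction
        print(f"{n} target(s): predicted {predicted:.2f} Hz, observed {obs} Hz")
    # observed limits (4.0, 2.6) fall off slightly less steeply than strict 1/n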
Stehman, S.V.; Wickham, J.D.; Wade, T.G.; Smith, J.H.
2008-01-01
The database design and diverse application of NLCD 2001 pose significant challenges for accuracy assessment because numerous objectives are of interest, including accuracy of land-cover, percent urban imperviousness, percent tree canopy, land-cover composition, and net change. A multi-support approach is needed because these objectives require spatial units of different sizes for reference data collection and analysis. Determining a sampling design that meets the full suite of desirable objectives for the NLCD 2001 accuracy assessment requires reconciling potentially conflicting design features that arise from targeting the different objectives. Multi-stage cluster sampling provides the general structure to achieve a multi-support assessment, and the flexibility to target different objectives at different stages of the design. We describe the implementation of two-stage cluster sampling for the initial phase of the NLCD 2001 assessment, and identify gaps in existing knowledge where research is needed to allow full implementation of a multi-objective, multi-support assessment. ?? 2008 American Society for Photogrammetry and Remote Sensing.
Using spatial uncertainty to manipulate the size of the attention focus.
Huang, Dan; Xue, Linyan; Wang, Xin; Chen, Yao
2016-09-01
Preferentially processing behaviorally relevant information is vital for primate survival. In visuospatial attention studies, manipulating the spatial extent of attention focus is an important question. Although many studies have claimed to successfully adjust attention field size by either varying the uncertainty about the target location (spatial uncertainty) or adjusting the size of the cue orienting the attention focus, no systematic studies have assessed and compared the effectiveness of these methods. We used a multiple cue paradigm with 2.5° and 7.5° rings centered around a target position to measure the cue size effect, while the spatial uncertainty levels were manipulated by changing the number of cueing positions. We found that spatial uncertainty had a significant impact on reaction time during target detection, while the cue size effect was less robust. We also carefully varied the spatial scope of potential target locations within a small or large region and found that this amount of variation in spatial uncertainty can also significantly influence target detection speed. Our results indicate that adjusting spatial uncertainty is more effective than varying cue size when manipulating attention field size.
Investigations of internal noise levels for different target sizes, contrasts, and noise structures
NASA Astrophysics Data System (ADS)
Han, Minah; Choi, Shinkook; Baek, Jongduk
2014-03-01
To describe internal noise levels for different target sizes, contrasts, and noise structures, Gaussian targets with four different sizes (i.e., standard deviations of 2, 4, 6, and 8) and three different noise structures (i.e., white, low-pass, and high-pass) were generated. The generated noise images were scaled to have a standard deviation of 0.15. For each noise type, target contrasts were adjusted to have the same detectability based on NPW, and the detectability of CHO was calculated accordingly. For the human observer study, three trained observers performed 2AFC detection tasks, and the proportion of correct responses, Pc, was calculated for each task. By adding the proper internal noise level to the numerical observers (i.e., NPW and CHO), the detectability of the human observer was matched with that of the numerical observers. Even though target contrasts were adjusted to have the same detectability for the NPW observer, the detectability of the human observer decreases as the target size increases. The internal noise level varies for different target sizes, contrasts, and noise structures, demonstrating that different internal noise levels should be considered in numerical observers to predict the detection performance of human observers.
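In model-observer terms, "adding internal noise" means inflating the variance of the numerical observer's decision variable until its predicted percent correct matches the human's. A sketch using the standard 2AFC relation Pc = Φ(d′/√2), with the internal-to-external variance ratio as the free parameter (all values are illustrative):

    import numpy as np
    from scipy.stats import norm

    def pc_with_internal_noise(d_prime_external, internal_ratio):
        """2AFC percent correct after adding internal noise;
        internal_ratio = internal noise variance / external variance."""
        d_eff = d_prime_external / np.sqrt(1.0 + internal_ratio)
        return norm.cdf(d_eff / np.sqrt(2.0))

    def fit_internal_ratio(d_prime_external, pc_human):
        # 1-D grid search for the noise ratio that reproduces human Pc
        ratios = np.linspace(0.0, 10.0, 10_001)
        pcs = pc_with_internal_noise(d_prime_external, ratios)
        return ratios[np.argmin(np.abs(pcs - pc_human))]

    print(fit_internal_ratio(d_prime_external=2.0, pc_human=0.80))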
Blackwell, Brett R; Wooten, Kimberly J; Buser, Michael D; Johnson, Bradley J; Cobb, George P; Smith, Philip N
2015-07-21
Studies of steroid growth promoters from beef cattle feedyards have previously focused on effluent or surface runoff as the primary route of transport from animal feeding operations. There is potential for steroid transport via fugitive airborne particulate matter (PM) from cattle feedyards; therefore, the objective of this study was to characterize the occurrence and concentration of steroid growth promoters in PM from feedyards. Air sampling was conducted at commercial feedyards (n = 5) across the Southern Great Plains from 2010 to 2012. Total suspended particulates (TSP), PM10, and PM2.5 were collected for particle size analysis and steroid growth promoter analysis. Particle size distributions were generated from TSP samples only, while steroid analysis was conducted on extracts of PM samples using liquid chromatography mass spectrometry. Of seven targeted steroids, 17α-estradiol and estrone were the most commonly detected, identified in over 94% of samples at median concentrations of 20.6 and 10.8 ng/g, respectively. Melengestrol acetate and 17α-trenbolone were detected in 31% and 39% of all PM samples at median concentrations of 1.3 and 1.9 ng/g, respectively. Results demonstrate PM is a viable route of steroid transportation and may be a significant contributor to environmental steroid hormone loading from cattle feedyards.
Bose, Nikhil; Carlberg, Katie; Sensabaugh, George; Erlich, Henry; Calloway, Cassandra
2018-05-01
DNA from biological forensic samples can be highly fragmented and present in limited quantity. When DNA is highly fragmented, conventional PCR based Short Tandem Repeat (STR) analysis may fail as primer binding sites may not be present on a single template molecule. Single Nucleotide Polymorphisms (SNPs) can serve as an alternative type of genetic marker for analysis of degraded samples because the targeted variation is a single base. However, conventional PCR based SNP analysis methods still require intact primer binding sites for target amplification. Recently, probe capture methods for targeted enrichment have shown success in recovering degraded DNA as well as DNA from ancient bone samples using next-generation sequencing (NGS) technologies. The goal of this study was to design and test a probe capture assay targeting forensically relevant nuclear SNP markers for clonal and massively parallel sequencing (MPS) of degraded and limited DNA samples as well as mixtures. A set of 411 polymorphic markers totaling 451 nuclear SNPs (375 SNPs and 36 microhaplotype markers) was selected for the custom probe capture panel. The SNP markers were selected for a broad range of forensic applications including human individual identification, kinship, and lineage analysis as well as for mixture analysis. Performance of the custom SNP probe capture NGS assay was characterized by analyzing read depth and heterozygote allele balance across 15 samples at 25 ng input DNA. Performance thresholds were established based on read depth ≥500X and heterozygote allele balance within ±10% deviation from 50:50, which was observed for 426 out of 451 SNPs. These 426 SNPs were analyzed in size selected samples (at ≤75 bp, ≤100 bp, ≤150 bp, ≤200 bp, and ≤250 bp) as well as mock degraded samples fragmented to an average of 150 bp. Samples selected for ≤75 bp exhibited 99-100% reportable SNPs across varied DNA amounts and as low as 0.5 ng. Mock degraded samples at 1 ng and 10 ng exhibited >90% reportable SNPs. Finally, two-person male-male mixtures were tested at 10 ng in contributor varying ratios. Overall, 85-100% of alleles unique to the minor contributor were observed at all mixture ratios. Results from these studies using the SNP probe capture NGS system demonstrates proof of concept for application to forensically relevant degraded and mixed DNA samples. Copyright © 2018 Elsevier B.V. All rights reserved.
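The stated performance thresholds translate directly into a per-SNP filtering rule; the function below applies the paper's cutoffs (depth >= 500x and heterozygote allele balance within 10% of 50:50) to one call. The input fields are assumed names, not a published data format.

    def reportable(depth, ref_reads, alt_reads, is_het):
        """Apply the study's stated QC cutoffs to one SNP call."""
        if depth < 500:
            return False
        if is_het:
            balance = alt_reads / (ref_reads + alt_reads)
            if not (0.40 <= balance <= 0.60):   # 50:50 +/- 10%
                return False
        return True

    print(reportable(depth=812, ref_reads=430, alt_reads=382, is_het=True))  # True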
Morard, Raphaël; Garet-Delmas, Marie-José; Mahé, Frédéric; Romac, Sarah; Poulain, Julie; Kucera, Michal; de Vargas, Colomban
2018-02-07
Since the advent of DNA metabarcoding surveys, the planktonic realm is considered a treasure trove of diversity, inhabited by a small number of abundant taxa and a hugely diverse, taxonomically uncharacterized consortium of rare species. Here we assess whether the apparent underestimation of plankton diversity applies universally. We target planktonic foraminifera, a group of protists whose known morphological diversity is limited, taxonomically resolved, and linked to ribosomal DNA barcodes. We generated a pyrosequencing dataset of ~100,000 partial 18S rRNA foraminiferal sequences from 32 size-fractionated photic-zone plankton samples collected at 8 stations in the Indian and Atlantic Oceans during the Tara Oceans expedition (2009-2012). We identified 69 genetic types belonging to 41 morphotaxa in our metabarcoding dataset. The diversity saturated at local and regional scales, as well as in the three size fractions and the two depths sampled, indicating that the diversity of foraminifera is modest and finite. The large majority of the newly discovered lineages occur in the small size fraction, neglected by classical taxonomy. These unknown lineages dominate the bulk [>0.8 µm] size fraction, implying that a considerable part of the planktonic foraminifera community biomass has its origin in unknown lineages.
Drogue tracking using 3D flash lidar for autonomous aerial refueling
NASA Astrophysics Data System (ADS)
Chen, Chao-I.; Stettner, Roger
2011-06-01
Autonomous aerial refueling (AAR) is an important capability for an unmanned aerial vehicle (UAV) to increase its flying range and endurance without increasing its size. This paper presents a novel tracking method that utilizes both 2D intensity and 3D point-cloud data acquired with a 3D Flash LIDAR sensor to establish relative position and orientation between the receiver vehicle and drogue during an aerial refueling process. Unlike classic vision-based sensors, a 3D Flash LIDAR sensor can provide 3D point-cloud data in real time without motion blur, day or night, and is capable of imaging through fog and clouds. The proposed method segments out the drogue through 2D analysis and estimates the center of the drogue from 3D point-cloud data for flight trajectory determination. A level-set front propagation routine is first employed to identify the target of interest and establish its silhouette information. Sufficient domain knowledge, such as the size of the drogue and the expected operable distance, is integrated into our approach to quickly eliminate unlikely target candidates. A statistical analysis along with a random sample consensus (RANSAC) is performed on the target to reduce noise and estimate the center of the drogue after all 3D points on the drogue are identified. The estimated center and drogue silhouette serve as the seed points to efficiently locate the target in the next frame.
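The RANSAC stage can be sketched as follows for estimating a drogue-like center from noisy 3D points: repeatedly fit a sphere to minimal 4-point subsets, keep the largest consensus set, and refit on its inliers. This is a generic algebraic sphere fit under stated assumptions, not the authors' exact statistical analysis.

    import numpy as np

    def fit_sphere(points):
        """Algebraic least-squares sphere fit: returns (center, radius)."""
        A = np.c_[2 * points, np.ones(len(points))]
        b = (points ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        center, k = sol[:3], sol[3]
        return center, np.sqrt(k + center @ center)

    def ransac_sphere(points, tol=0.02, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(iters):
            subset = points[rng.choice(len(points), 4, replace=False)]
            c, r = fit_sphere(subset)
            resid = np.abs(np.linalg.norm(points - c, axis=1) - r)
            inliers = resid < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return fit_sphere(points[best_inliers])   # refit on the consensus set

    # demo: noisy points on a sphere of radius 0.35 centred at (1, 2, 3)
    rng = np.random.default_rng(3)
    d = rng.normal(size=(300, 3))
    d = 0.35 * d / np.linalg.norm(d, axis=1, keepdims=True) + [1, 2, 3]
    d += rng.normal(scale=0.005, size=d.shape)
    print(ransac_sphere(d))   # ~((1, 2, 3), 0.35)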
Impact of Target Distance, Target Size, and Visual Acuity on the Video Head Impulse Test.
Judge, Paul D; Rodriguez, Amanda I; Barin, Kamran; Janky, Kristen L
2018-05-01
The video head impulse test (vHIT) assesses the vestibulo-ocular reflex. Few have evaluated whether environmental factors or visual acuity influence the vHIT. The purpose of this study was to evaluate the influence of target distance, target size, and visual acuity on vHIT outcomes. Thirty-eight normal controls and 8 subjects with vestibular loss (VL) participated. vHIT was completed at 3 distances and with 3 target sizes. Normal controls were subdivided on the basis of visual acuity. Corrective saccade frequency, corrective saccade amplitude, and gain were tabulated. In the normal control group, there were no significant effects of target size or visual acuity for any vHIT outcome parameters; however, gain increased as target distance decreased. The VL group demonstrated higher corrective saccade frequency and amplitude and lower gain as compared with controls. In conclusion, decreasing target distance increases gain for normal controls but not subjects with VL. Preliminarily, visual acuity does not affect vHIT outcomes.
Cho, Min-Chul; Kim, So Young; Jeong, Tae-Dong; Lee, Woochang; Chun, Sail; Min, Won-Ki
2014-11-01
Verification of a new reagent lot's suitability is necessary to ensure that results for patients' samples are consistent before and after reagent lot changes. A typical procedure is to measure a set of patients' samples along with quality control (QC) materials. In this study, the results of patients' samples and QC materials across reagent lot changes were analysed, and a recommendation for adjusting the QC target range upon reagent lot changes is proposed. Patients' sample and QC material results from 360 reagent lot change events involving 61 analytes and eight instrument platforms were analysed. The between-lot differences for the patients' samples (ΔP) and the QC materials (ΔQC) were tested by Mann-Whitney U tests. The size of the between-lot differences in the QC data was calculated as multiples of the standard deviation (SD). The ΔP and ΔQC values differed significantly in only 7.8% of the reagent lot change events. This frequency was not affected by the assay principle or the QC material source. One SD was proposed as the cutoff for maintaining the pre-existing target range after a reagent lot change. While non-commutable QC material results were infrequent in the present study, our data confirmed that QC materials have limited usefulness when assessing new reagent lots. A 1 SD criterion for establishing a new QC target range after a reagent lot change event was also proposed. © The Author(s) 2014.
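A minimal sketch of the two computations described, assuming paired old-lot/new-lot measurements; all numbers are invented. It uses SciPy's Mann-Whitney U test and expresses the QC shift in SD multiples:

```python
# Sketch of the described lot-change check: compare distributions of patient
# results across lots, and express the QC shift as multiples of SD.
import numpy as np
from scipy.stats import mannwhitneyu

old_lot = np.array([5.1, 4.8, 5.3, 5.0, 4.9])   # patient results, old lot
new_lot = np.array([5.2, 4.9, 5.4, 5.1, 5.0])   # same samples, new lot

u, p = mannwhitneyu(old_lot, new_lot)
print(f"Mann-Whitney U = {u}, p = {p:.3f}")

# Between-lot QC shift in SD multiples, as in the proposed 1 SD criterion:
qc_old_mean, qc_old_sd = 5.0, 0.2
qc_new_mean = 5.15
shift_in_sd = abs(qc_new_mean - qc_old_mean) / qc_old_sd
print(f"QC shift = {shift_in_sd:.1f} SD")  # > 1 SD would trigger a new target range
```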
Kim, Eun Hye; Lee, Hwan Young; Yang, In Seok; Jung, Sang-Eun; Yang, Woo Ick; Shin, Kyoung-Jin
2016-05-01
The next-generation sequencing (NGS) method has been utilized to analyze short tandem repeat (STR) markers, which are routinely used for human identification purposes in the forensic field. Some researchers have demonstrated the successful application of NGS systems to STR typing, suggesting that NGS technology may be an alternative or additional method to overcome limitations of capillary electrophoresis (CE)-based STR profiling. However, no multiplex PCR system optimized for NGS analysis of forensic STR markers has been available. Thus, we constructed a multiplex PCR system for the NGS analysis of 18 markers (13 CODIS STRs, D2S1338, D19S433, Penta D, Penta E and amelogenin) by designing amplicons in the size range of 77-210 base pairs. PCR products were then generated from two single-source samples, mixed samples and artificially degraded DNA samples using the multiplex PCR system, and were prepared for sequencing on the MiSeq system through construction of a barcoded library. By performing NGS and analyzing the data, we confirmed that the resultant STR genotypes were consistent with those of CE-based typing. Moreover, sequence variations were detected in targeted STR regions. Through the use of small-sized amplicons, the developed multiplex PCR system enables researchers to obtain successful STR profiles even from artificially degraded DNA, including STR loci that are analyzed with large-sized amplicons in CE-based commercial kits. In addition, successful profiles can be obtained from mixtures up to a 1:19 ratio. Consequently, the developed multiplex PCR system, which produces small-sized amplicons, can be successfully applied to STR NGS analysis of forensic casework samples such as mixtures and degraded DNA samples. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
A review of reporting of participant recruitment and retention in RCTs in six major journals
Toerien, Merran; Brookes, Sara T; Metcalfe, Chris; de Salis, Isabel; Tomlin, Zelda; Peters, Tim J; Sterne, Jonathan; Donovan, Jenny L
2009-01-01
Background Poor recruitment and retention of participants in randomised controlled trials (RCTs) is problematic but common. Clear and detailed reporting of participant flow is essential to assess the generalisability and comparability of RCTs. Despite improved reporting since the implementation of the CONSORT statement, important problems remain. This paper aims: (i) to update and extend previous reviews evaluating reporting of participant recruitment and retention in RCTs; (ii) to quantify the level of participation throughout RCTs. Methods We reviewed all reports of RCTs of health care interventions and/or processes with individual randomisation, published July–December 2004 in six major journals. Short, secondary or interim reports, and Phase I/II trials were excluded. Data recorded were: general RCT details; inclusion of flow diagram; participant flow throughout trial; reasons for non-participation/withdrawal; target sample sizes. Results 133 reports were reviewed. Overall, 79% included a flow diagram, but over a third were incomplete. The majority reported the flow of participants at each stage of the trial after randomisation. However, 40% failed to report the numbers assessed for eligibility. Percentages of participants retained at each stage were high: for example, 90% of eligible individuals were randomised, and 93% of those randomised were outcome assessed. On average, trials met their sample size targets. However, there were some substantial shortfalls: for example 21% of trials reporting a sample size calculation failed to achieve adequate numbers at randomisation, and 48% at outcome assessment. Reporting of losses to follow up was variable and difficult to interpret. Conclusion The majority of RCTs reported the flow of participants well after randomisation, although only two-thirds included a complete flow chart and there was great variability over the definition of "lost to follow up". Reporting of participant eligibility was poor, making assessments of recruitment practice and external validity difficult. Reporting of participant flow throughout RCTs could be improved by small changes to the CONSORT chart. PMID:19591685
Li, Kai; Chen, Bei; Zhou, Yuxun; Huang, Rui; Liang, Yinming; Wang, Qinxi; Xiao, Zhenxian; Xiao, Junhua
2009-03-01
A new method, based on the ligase detection reaction (LDR), was developed for quantitative detection of multiplex PCR amplicons of 16S rRNA genes present in complex mixtures (specifically feces). LDR has been widely used in single nucleotide polymorphism (SNP) assays but had never been applied to quantification of multiplex PCR products. This method employs one pair of DNA probes complementary to the target sequence, one of which is labeled with a fluorophore for signal capture. For multiple target sequence analysis, probes were modified with different lengths of polyT at the 5' and 3' ends. Using a DNA sequencer, the ligated probes were separated and identified by size and dye color. The relative abundance of target DNA was then normalized and quantified based on the fluorescence intensities and external size standards. The 16S rRNA genes of three preponderant bacterial groups in human feces (Clostridium coccoides, Bacteroides and related genera, and the Clostridium leptum group) were amplified and cloned into plasmid DNA to construct standard curves. After PCR-LDR analysis, a strong linear relationship was found between the fluorescence intensity and the diluted plasmid DNA concentrations. Based on this method, 100 human fecal samples were then quantified for the relative abundance of the three bacterial groups. The relative abundance of C. coccoides was significantly higher in elderly people than in young adults, with no gender differences. The relative abundances of Bacteroides and related genera and the C. leptum group were significantly higher in young and middle-aged adults than in the elderly. Across the whole sample set, C. coccoides showed the highest relative abundance, followed by Bacteroides and related genera, and then C. leptum. These results imply that PCR-LDR can be feasibly and flexibly applied to large-scale epidemiological studies.
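A minimal sketch of the standard-curve step described (fluorescence intensity against serial plasmid dilutions), assuming a simple log-log linear fit; all numbers are illustrative placeholders:

```python
# Sketch of standard-curve quantification: fit fluorescence intensity against
# known plasmid dilutions, then invert the fit to quantify unknowns.
import numpy as np

log_conc = np.log10([1e3, 1e4, 1e5, 1e6, 1e7])     # plasmid copies/uL (illustrative)
signal = np.array([120, 480, 1900, 7600, 30500])    # fluorescence intensity

slope, intercept = np.polyfit(log_conc, np.log10(signal), 1)

def estimate_log_conc(fluorescence):
    """Invert the fitted line to estimate log10 concentration of an unknown."""
    return (np.log10(fluorescence) - intercept) / slope

print(f"Unknown at signal 2500 -> ~10^{estimate_log_conc(2500):.2f} copies/uL")
```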
VizieR Online Data Catalog: SAMI Galaxy Survey: gas streaming (Cecil+, 2016)
NASA Astrophysics Data System (ADS)
Cecil, G.; Fogarty, L. M. R.; Richards, S.; Bland-Hawthorn, J.; Lange, R.; Moffett, A.; Catinella, B.; Cortese, L.; Ho, I.-T.; Taylor, E. N.; Bryant, J. J.; Allen, J. T.; Sweet, S. M.; Croom, S. M.; Driver, S. P.; Goodwin, M.; Kelvin, L.; Green, A. W.; Konstantopoulos, I. S.; Owers, M. S.; Lawrence, J. S.; Lorente, N. P. F.
2016-08-01
From the first ~830 targets observed in the SGS, we selected 344 rotationally supported galaxies having enough gas to map their CSC. We rejected 8 whose inclination angle to us is too small (i<20°) to be established reliably by photometry, and those very strongly barred or in obvious interactions. Finally, we rejected those whose CSC would be smeared excessively by our PSF (Sect. 2.3.1) because of large inclination (i>71°), compact size, or observation in atrocious conditions, leaving 163 galaxies in the SGS GAMA survey sub-sample and 15 in the "cluster" sub-sample with discs. (3 data files).
Physical characterization of whole and skim dried milk powders.
Pugliese, Alessandro; Cabassi, Giovanni; Chiavaro, Emma; Paciulli, Maria; Carini, Eleonora; Mucchetti, Germano
2017-10-01
The lack of updated knowledge about the physical properties of milk powders prompted us to evaluate selected physical properties (water activity, particle size, density, flowability, solubility and colour) of eleven skim and whole milk powders produced in Europe. These physical properties are crucial both for the management of milk powder during the final steps of the drying process and for their use as food ingredients. In general, except for the values of water activity, the physical properties of skim and whole milk powders are very different. Particle sizes of the spray-dried skim milk powders, measured as volume and surface mean diameters, were significantly lower than those of the whole milk powders, while the roller-dried sample showed the largest particle size. For all the samples the size distribution was quite narrow, with a span value less than 2. The loose density of skim milk powders was significantly higher than that of whole milk powders (541.36 vs 449.75 kg/m3). Flowability, measured by the Hausner ratio and Carr's index, ranged from passable to poor when evaluated according to pharmaceutical criteria. The insolubility index of the spray-dried skim and whole milk powders, measured as the weight of the sediment (from 0.5 to 34.8 mg), allowed good discrimination of the samples. Colour analysis underlined the relevant contribution of fat content and particle size, resulting in higher lightness (L*) for skim milk powder than whole milk powder, which, on the other hand, showed higher yellowness (b*) and lower greenness (-a*). In conclusion, detailed knowledge of the functional properties of milk powders may allow the dairy to tailor products to the user and help the food processor make a targeted choice according to the intended use.
Using e-mail recruitment and an online questionnaire to establish effect size: A worked example.
Kirkby, Helen M; Wilson, Sue; Calvert, Melanie; Draper, Heather
2011-06-09
Sample size calculations require effect size estimations. Sometimes, effect size estimations and standard deviations may not be readily available, particularly if efficacy is unknown because the intervention is new or developing, or the trial targets a new population. In such cases, one way to estimate the effect size is to gather expert opinion. This paper reports the use of a simple strategy to gather expert opinion to estimate a suitable effect size for a sample size calculation. Researchers involved in the design and analysis of clinical trials were identified at the University of Birmingham and via the MRC Hubs for Trials Methodology Research. An email invited them to participate. An online questionnaire was developed using the free online tool 'Survey Monkey©'. The questionnaire described an intervention, an electronic participant information sheet (e-PIS), which may increase recruitment rates to a trial. Respondents were asked how much they would need to see recruitment rates increase by, based on 90%, 70%, 50% and 30% baseline rates (in a hypothetical study), before they would consider using an e-PIS in their research. Analyses comprised simple descriptive statistics. The invitation to participate was sent to 122 people; 7 responded to say they were not involved in trial design and could not complete the questionnaire, 64 attempted it, and 26 failed to complete it. Thirty-eight people completed the questionnaire and were included in the analysis (response rate 33%; 38/115). Of those who completed the questionnaire, 44.7% (17/38) were at the academic grade of research fellow, 26.3% (10/38) senior research fellow, and 28.9% (11/38) professor. Depending on the baseline recruitment rates presented in the questionnaire, participants wanted recruitment rates to increase by between 6.9% and 28.9% before they would consider using the intervention. This paper has shown that in situations where effect size estimations cannot be derived from previous research, opinions from researchers and trialists can be quickly and easily collected by conducting a simple study using email recruitment and an online questionnaire. The results of the survey were successfully used in sample size calculations for a PhD research study protocol.
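Where such an elicited effect size feeds a sample size calculation, a standard two-proportion formula (normal approximation) can be used, as in this hedged sketch; the formula is textbook, not taken from the paper, and the example rates are invented:

```python
# Sample size per arm to detect a difference between two recruitment rates,
# standard normal-approximation formula (not the paper's calculation).
from math import ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-arm n for a two-sided z-test of proportions p1 vs p2."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(num / (p1 - p2) ** 2)

# e.g. baseline recruitment 50%, hoped-for 60% with the e-PIS intervention
print(n_per_group(0.50, 0.60))  # ~388 per arm
```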
Dahlberg, Suzanne E; Shapiro, Geoffrey I; Clark, Jeffrey W; Johnson, Bruce E
2014-07-01
Phase I trials have traditionally been designed to assess toxicity and establish phase II doses with dose-finding studies and expansion cohorts, but they frequently exceed the traditional sample size to further assess endpoints in specific patient subsets. The scientific objectives of phase I expansion cohorts and their evolving role in the current era of targeted therapies have yet to be systematically examined. Adult therapeutic phase I trials opened within Dana-Farber/Harvard Cancer Center (DF/HCC) from 1988 to 2012 were identified for sample size details. Statistical designs and study objectives of those submitted in 2011 were reviewed for expansion cohort details. Five hundred twenty-two adult therapeutic phase I trials were identified during the 25 years. The average sample size of a phase I study increased from 33.8 patients to 73.1 patients over that time. The proportion of trials with planned enrollment of 50 or fewer patients dropped from 93.0% during 1988 to 1992 to 46.0% between 2008 and 2012; at the same time, the proportions of trials enrolling 51 to 100 patients and more than 100 patients increased from 5.3% and 1.8%, respectively, to 40.5% and 13.5% (χ² test, two-sided P < .001). Sixteen of the 60 trials (26.7%) in 2011 enrolled patients in three or more sub-cohorts in the expansion phase. Sixty percent of studies provided no statistical justification of the sample size, although 91.7% of trials stated response as an objective. Our data suggest that phase I studies have dramatically changed in size and scientific scope within the last decade. Additional studies addressing the implications of this trend on research processes, ethical concerns, and resource burden are needed. © The Author 2014. Published by Oxford University Press. All rights reserved.
Ghadyani, Hamid R.; Bastien, Adam D.; Lutz, Nicholas N.; Hepel, Jaroslaw T.
2015-01-01
Purpose Noninvasive image-guided breast brachytherapy delivers conformal HDR 192Ir brachytherapy treatments with the breast compressed, and treated in the cranial-caudal and medial-lateral directions. This technique subjects breast tissue to extreme deformations not observed for other disease sites. Given that, commercially-available software for deformable image registration cannot accurately co-register image sets obtained in these two states, a finite element analysis based on a biomechanical model was developed to deform dose distributions for each compression circumstance for dose summation. Material and methods The model assumed the breast was under planar stress with values of 30 kPa for Young's modulus and 0.3 for Poisson's ratio. Dose distributions from round and skin-dose optimized applicators in cranial-caudal and medial-lateral compressions were deformed using 0.1 cm planar resolution. Dose distributions, skin doses, and dose-volume histograms were generated. Results were examined as a function of breast thickness, applicator size, target size, and offset distance from the center. Results Over the range of examined thicknesses, target size increased several millimeters as compression thickness decreased. This trend increased with increasing offset distances. Applicator size minimally affected target coverage, until applicator size was less than the compressed target size. In all cases, with an applicator larger or equal to the compressed target size, > 90% of the target covered by > 90% of the prescription dose. In all cases, dose coverage became less uniform as offset distance increased and average dose increased. This effect was more pronounced for smaller target–applicator combinations. Conclusions The model exhibited skin dose trends that matched MC-generated benchmarking results within 2% and clinical observations over a similar range of breast thicknesses and target sizes. The model provided quantitative insight on dosimetric treatment variables over a range of clinical circumstances. These findings highlight the need for careful target localization and accurate identification of compression thickness and target offset. PMID:25829938
High Prevalence of Anaplasma spp. in Small Ruminants in Morocco.
Ait Lbacha, H; Alali, S; Zouagui, Z; El Mamoun, L; Rhalem, A; Petit, E; Haddad, N; Gandoin, C; Boulouis, H-J; Maillard, R
2017-02-01
The prevalence of infection by Anaplasma spp. (including Anaplasma phagocytophilum) was determined using blood smear microscopy and PCR through screening of small ruminant blood samples collected from seven regions of Morocco. Co-infections of Anaplasma spp., Babesia spp., Theileria spp. and Mycoplasma spp. were investigated, and risk factors for Anaplasma spp. infection were assessed. A total of 422 small ruminant blood samples were randomly collected from 70 flocks. Individual animal data (breed, age, tick burden and previous treatment) and flock data (GPS coordinates of farm, flock size and livestock production system) were collected. Upon examination of blood smears, 375 samples (88.9%) were found to contain Anaplasma-like erythrocytic inclusion bodies. Upon screening with a broad-spectrum PCR targeting the Anaplasma 16S rRNA region, 303 (71%) samples were found to be positive. All 303 samples screened with the A. phagocytophilum-specific PCR, which targets the msp2 region, were found to be negative. Differences in prevalence were statistically significant with regard to region, altitude, flock size, livestock production system, grazing system, presence of clinical cases, and application of prophylactic measures against ticks and tick-borne diseases. Kappa analysis revealed poor concordance between microscopy and PCR (k = 0.14). Agreement with PCR is improved by considering microscopy and packed cell volume (PCV) in parallel. The prevalence of double infections was 1.7%, 2.5% and 24% for Anaplasma-Babesia, Anaplasma-Mycoplasma and Anaplasma-Theileria, respectively. Co-infection with three or more haemoparasites was found in 1.6% of the animals examined. In conclusion, we demonstrate the high burden of anaplasmosis in small ruminants in Morocco and the high prevalence of co-infections of tick-borne diseases. There is an urgent need to improve the control of this neglected group of diseases. © 2015 Blackwell Verlag GmbH.
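The microscopy-PCR concordance statistic is Cohen's kappa. A minimal sketch of the computation follows; the cell counts are invented for illustration, though chosen to be consistent with the reported marginals (375 microscopy-positive, 303 PCR-positive, n = 422), and they yield a kappa near the reported 0.14:

```python
# Cohen's kappa for two binary raters (e.g. microscopy vs PCR calls).
def cohens_kappa(a_pos_b_pos, a_pos_b_neg, a_neg_b_pos, a_neg_b_neg):
    n = a_pos_b_pos + a_pos_b_neg + a_neg_b_pos + a_neg_b_neg
    p_obs = (a_pos_b_pos + a_neg_b_neg) / n        # observed agreement
    p_a = (a_pos_b_pos + a_pos_b_neg) / n          # method A positive rate
    p_b = (a_pos_b_pos + a_neg_b_pos) / n          # method B positive rate
    p_exp = p_a * p_b + (1 - p_a) * (1 - p_b)      # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Illustrative 2x2 table consistent with the reported marginals
print(f"kappa = {cohens_kappa(280, 95, 23, 24):.2f}")  # ~0.15
```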
THE ORIGIN OF ASTEROID 162173 (1999 JU3)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campins, Humberto; De Leon, Julia; Morbidelli, Alessandro
Near-Earth asteroid (162173) 1999 JU3 (henceforth JU3) is a potentially hazardous asteroid and the target of the Japanese Aerospace Exploration Agency's Hayabusa-2 sample return mission. JU3 is also a backup target for two other sample return missions: NASA's OSIRIS-REx and the European Space Agency's Marco Polo-R. We use dynamical information to identify an inner-belt, low-inclination origin through the ν6 resonance, more specifically, the region with 2.15 AU < a < 2.5 AU and i < 8°. The geometric albedo of JU3 is 0.07 ± 0.01, and this inner-belt region contains four well-defined low-albedo asteroid families (Clarissa, Erigone, Polana, and Sulamitis), plus a recently identified background population of low-albedo asteroids outside these families. Only two of these five groups, the background and the Polana family, deliver JU3-sized asteroids to the ν6 resonance, and the background delivers significantly more JU3-sized asteroids. The available spectral evidence is also diagnostic; the visible and near-infrared spectra of JU3 indicate it is a C-type asteroid, which is compatible with members of the background, but not with the Polana family because it contains primarily B-type asteroids. Hence, this background population of low-albedo asteroids is the most likely source of JU3.
NASA Astrophysics Data System (ADS)
Zhang, Dong-Hai; Chen, Yan-Ling; Wang, Guo-Rong; Li, Wang-Dong; Wang, Qing; Yao, Ji-Jie; Zhou, Jian-Guo; Li, Rong; Li, Jun-Sheng; Li, Hui-Ling
2015-01-01
The forward-backward multiplicities and correlations of target-evaporated fragments (black track particles) and target-recoiled protons (grey track particles) emitted from 150 A MeV 4He, 290 A MeV 12C, 400 A MeV 12C, 400 A MeV 20Ne and 500 A MeV 56Fe induced interactions with different types of nuclear emulsion targets are investigated. It is found that the forward and backward averaged multiplicities of grey, black and heavily ionized track particles increase with increasing target size. The averaged multiplicities of forward black track particles, backward black track particles, and backward grey track particles do not depend on the projectile size and energy, but the averaged multiplicity of forward grey track particles increases with increasing projectile size and energy. The backward grey track particle multiplicity distribution follows an exponential decay law, and the decay constant decreases with increasing target size. The backward-forward multiplicity correlations follow a linear law that is independent of the projectile size and energy, and a saturation effect is observed in some heavy target data sets.
Pore space connectivity and porosity using CT scans of tropical soils
NASA Astrophysics Data System (ADS)
Previatello da Silva, Livia; de Jong Van Lier, Quirijn
2015-04-01
Microtomography has been used in soil physics for characterization and allows non-destructive analysis at high resolution, yielding a three-dimensional representation of pore space and fluid distribution. It also allows quantitative characterization of the pore space, including pore size distribution, shape, connectivity, porosity, tortuosity, orientation and preferential pathways; it is also possible to predict the saturated hydraulic conductivity using Darcy's equation and a modified Poiseuille's equation. Connectivity of pore space is an important topological property of soil. Together with porosity and pore-size distribution, it governs the transport of water, solutes and gases. In order to quantify and analyze the pore space (connectivity of pores and porosity) of four tropical soils from Brazil with different textures and land uses, undisturbed samples were collected in São Paulo State, Brazil, in PVC rings (7.5 cm height, 7.5 cm diameter) at depths of 10-30 cm below the soil surface. Image acquisition was performed with a Nikon XT H 225 CT system, a dual reflection-transmission target system including a 225 kV, 225 W high-performance X-ray source equipped with a reflection target with a spot size of 3 μm, combined with a nano-focus transmission module with a spot size of 1 μm. The images were acquired at an energy level specific to each soil type, according to soil texture, and external copper filters were used to attenuate low-energy X-ray photons and pass an approximately monoenergetic beam. This step was performed to minimize artifacts such as beam hardening that may occur during attenuation at interfaces between materials of different densities within the same sample. Images were processed and analyzed using the ImageJ/Fiji software. Water retention curves (tension table and pressure chamber methods), saturated hydraulic conductivity (constant-head permeameter), granulometry, soil density and particle density were also measured in the laboratory, and the results were compared with the image analyses.
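One common way to combine Darcy's law with pore-scale Poiseuille flow is a capillary-bundle model; the hedged sketch below illustrates only that idea, since the paper's exact modified-Poiseuille formulation is not given here. Pore radii, counts, and the sample cross-section are invented:

```python
# Capillary-bundle estimate of saturated hydraulic conductivity from a
# CT-derived pore-radius distribution (illustrative, not the paper's model).
import numpy as np

RHO_G = 1000 * 9.81      # water density * gravity [N/m^3]
MU = 1.0e-3              # dynamic viscosity of water [Pa s]

def ksat_capillary_bundle(radii_m, counts, cross_section_area_m2):
    """K_sat [m/s] for a bundle of cylindrical pores crossing a sample face."""
    radii = np.asarray(radii_m)
    counts = np.asarray(counts)
    # volumetric flow per unit head gradient for each pore class: pi r^4 / (8 mu)
    q_per_gradient = counts * np.pi * radii**4 / (8 * MU)
    return RHO_G * q_per_gradient.sum() / cross_section_area_m2

# Example: pore classes of 10, 50 and 100 um radius over a 1 cm^2 face
print(ksat_capillary_bundle([1e-5, 5e-5, 1e-4], [5000, 400, 50], 1e-4))
```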
Interaction between numbers and size during visual search.
Krause, Florian; Bekkering, Harold; Pratt, Jay; Lindemann, Oliver
2017-05-01
The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit numbers. The relative numerical size of the digits was varied, such that the target item was either among the numerically large or small numbers in the search display and the relation between numerical and physical size was either congruent or incongruent. Perceptual differences of the stimuli were controlled by a condition in which participants had to search for a differently coloured target item with the same physical size and by the usage of LCD-style numbers that were matched in visual similarity by shape transformations. The results of all three experiments consistently revealed that detecting a physically large target item is significantly faster when the numerical size of the target item is large as well (congruent), compared to when it is small (incongruent). This novel finding of a size congruity effect in visual search demonstrates an interaction between numerical and physical size in an experimental setting beyond typically used binary comparison tasks, and provides important new evidence for the notion of shared cognitive codes for numbers and sensorimotor magnitudes. Theoretical consequences for recent models on attention, magnitude representation and their interactions are discussed.
Bottom-up guidance in visual search for conjunctions.
Proulx, Michael J
2007-02-01
Understanding the relative role of top-down and bottom-up guidance is crucial for models of visual search. Previous studies have addressed the role of top-down and bottom-up processes in search for a conjunction of features but with inconsistent results. Here, the author used an attentional capture method to address the role of top-down and bottom-up processes in conjunction search. The role of bottom-up processing was assayed by inclusion of an irrelevant-size singleton in a search for a conjunction of color and orientation. One object was uniquely larger on each trial, with chance probability of coinciding with the target; thus, the irrelevant feature of size was not predictive of the target's location. Participants searched more efficiently for the target when it was also the size singleton, and they searched less efficiently for the target when a nontarget was the size singleton. Although a conjunction target cannot be detected on the basis of bottom-up processing alone, participants used search strategies that relied significantly on bottom-up guidance in finding the target, resulting in interference from the irrelevant-size singleton.
Whitmore, Roy W; Chen, Wenlin
2013-12-04
The ability to infer human exposure to substances from drinking water using monitoring data helps determine and/or refine potential risks associated with drinking water consumption. We describe a survey sampling approach and its application to an atrazine groundwater monitoring study to adequately characterize upper exposure centiles and associated confidence intervals with predetermined precision. Study design and data analysis included sampling frame definition, sample stratification, sample size determination, allocation to strata, analysis weights, and weighted population estimates. The sampling frame encompassed 15,840 groundwater community water systems (CWS) in 21 states throughout the U.S. Median and 95th percentile atrazine concentrations were 0.0022 and 0.024 ppb, respectively, for all CWS. Statistical estimates agreed with historical monitoring results, suggesting that the study design was adequate and robust. This methodology makes no assumptions regarding the occurrence distribution (e.g., lognormality); thus, analyses based on the design-induced distribution provide the most robust basis for making inferences from the sample to the target population.
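A minimal sketch of a design-weighted percentile estimate, the kind of estimator such analysis weights feed into; the concentrations and weights below are invented, not study data:

```python
# Design-weighted percentile estimate for stratified survey data.
import numpy as np

def weighted_percentile(values, weights, q):
    """q-th percentile (0-100) of values under sampling weights."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cum = np.cumsum(w) / w.sum()          # cumulative weight fraction
    return np.interp(q / 100.0, cum, v)

atrazine_ppb = [0.001, 0.002, 0.0022, 0.005, 0.012, 0.024, 0.08]
weights = [310, 120, 450, 95, 60, 30, 10]   # CWS represented by each sampled system
print(weighted_percentile(atrazine_ppb, weights, 95))
```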
Berk, Lotte; van Boxtel, Martin; van Os, Jim
2017-11-01
There is an increasing need to examine factors that protect against age-related cognitive decline, and there is preliminary evidence that meditation can improve cognitive function. However, most studies are cross-sectional and examine a wide variety of meditation techniques. This review focuses on the standard eight-week mindfulness-based interventions (MBIs), such as mindfulness-based stress reduction (MBSR) and mindfulness-based cognitive therapy (MBCT). We searched the PsychINFO, CINAHL, Web of Science, COCHRANE, and PubMed databases to identify original studies investigating the effects of MBIs on cognition in older adults. Six reports were included in the review, of which three were randomized controlled trials. Studies reported preliminary positive effects on memory, executive function and processing speed. However, most reports had a high risk of bias and sample sizes were small. The only study with a low risk of bias, a large sample size and an active control group reported no significant findings. We conclude that eight-week MBIs for older adults are feasible, but results on cognitive improvement are inconclusive due to the limited number of studies, small sample sizes, and high risk of bias. Rather than a narrow focus on cognitive training per se, future research may productively shift to investigating MBIs as a tool to alleviate suffering in older adults, and to prevent cognitive problems in later life already in younger target populations.
Moyé, Lemuel A; Lai, Dejian; Jing, Kaiyan; Baraniuk, Mary Sarah; Kwak, Minjung; Penn, Marc S; Wu, Colon O
2011-01-01
The assumptions that anchor large clinical trials are rooted in smaller, Phase II studies. In addition to specifying the target population, intervention delivery, and patient follow-up duration, physician-scientists who design these Phase II studies must select the appropriate response variables (endpoints). However, endpoint measures can be problematic. If the endpoint assesses the change in a continuous measure over time, then the occurrence of an intervening significant clinical event (SCE), such as death, can preclude the follow-up measurement. Finally, the ideal continuous endpoint measurement may be contraindicated in a fraction of the study patients, a circumstance that requires a less precise substitution in this subset of participants. A score function based on the U-statistic can address both issues: 1) intercurrent SCEs and 2) response variable ascertainments that use different measurements of different precision. The scoring statistic is easy to apply, clinically relevant, and provides flexibility for the investigators' prospective design decisions. Sample size and power formulations for this statistic are provided as functions of clinical event rates and effect size estimates that are easy for investigators to identify and discuss. Examples are provided from current cardiovascular cell therapy research.
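A hedged sketch of a pairwise U-statistic score of this general type (compare each treated/control pair on the clinical event first, then on the continuous change when neither had an event); this illustrates the construction, not the paper's exact score function, and all data are invented:

```python
# Pairwise hierarchical scoring in the spirit of a U-statistic endpoint.
import itertools

def pair_score(a, b):
    """+1 if subject a fared better than b, -1 if worse, 0 if tied/indeterminate.
    Each subject is (had_event, change_in_endpoint)."""
    a_event, a_chg = a
    b_event, b_chg = b
    if a_event != b_event:               # the clinical event (e.g. death) dominates
        return -1 if a_event else 1
    if a_event:                          # both had events: indeterminate here
        return 0
    if a_chg == b_chg:
        return 0
    return 1 if a_chg > b_chg else -1

treated = [(False, 5.2), (False, 1.1), (True, None)]
control = [(False, 0.4), (True, None), (False, 2.0)]
u = sum(pair_score(t, c) for t, c in itertools.product(treated, control))
print(f"U-statistic score = {u} over {len(treated) * len(control)} pairs")
```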
NASA Technical Reports Server (NTRS)
Fechtig, H.; Gentner, W.; Hartung, J. B.; Nagel, K.; Neukum, G.; Schneider, E.; Storzer, D.
1977-01-01
The lunar microcrater phenomenology is described. The morphology of the lunar craters is in almost all aspects simulated in laboratory experiments in the diameter range from less than 1 μm to several millimeters and up to 60 km/s impact velocity. An empirically derived formula is given for the conversion of crater diameters into projectile diameters and masses for given impact velocities and projectile and target densities. The production size-frequency distribution for lunar craters in the size range from approximately 1 μm to several millimeters in diameter is derived from various microcrater measurements to within a factor of up to 5. Particle track exposure age measurements for a variety of lunar samples have been performed; they allow the conversion of the lunar crater size-frequency production distributions into particle fluxes. The development of crater populations on lunar rocks under self-destruction by subsequent meteoroid impacts and crater overlap is discussed and described theoretically. Erosion rates on lunar rocks on the order of several millimeters per 10 yr are calculated. Chemical investigations of the glass linings of lunar craters yield clear evidence of admixture of projectile material in only one case, where the remnants of an iron-nickel micrometeorite have been identified.
Pasmant, Eric; Parfait, Béatrice; Luscan, Armelle; Goussard, Philippe; Briand-Suleau, Audrey; Laurendeau, Ingrid; Fouveaut, Corinne; Leroy, Chrystel; Montadert, Annelore; Wolkenstein, Pierre; Vidaud, Michel; Vidaud, Dominique
2015-01-01
Molecular diagnosis of neurofibromatosis type 1 (NF1) is challenging owing to the large size of the tumour suppressor gene NF1 and the lack of mutation hotspots. A somatic alteration of the wild-type NF1 allele is observed in NF1-associated tumours. Genetic heterogeneity in NF1 was confirmed in patients with SPRED1 mutations. Here, we present targeted next-generation sequencing (NGS) of NF1 and SPRED1 using a multiplex PCR approach (230 amplicons of ∼150 bp) on a PGM sequencer. The chip capacity allowed mixing 48 bar-coded samples in a 4-day workflow. We validated the NGS approach by retrospectively testing 30 NF1-mutated samples, and then prospectively analysed 279 patients in routine diagnosis. On average, 98.5% of all targeted bases were covered at ≥20X and 96% at ≥100X. An NF1 or SPRED1 alteration was found in 246/279 (88%) and 10/279 (4%) patients, respectively. Genotyping throughput was increased more than 10-fold compared with Sanger sequencing, at ~€90 of consumables per sample. Interestingly, our targeted NGS approach also provided quantitative information based on sequencing depth, allowing identification of multi-exon deletions or duplications. We then addressed NF1 somatic mutation detection sensitivity in mosaic NF1 patients and tumours. PMID:25074460
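A hedged sketch of the depth-based copy-number idea: normalize each amplicon's depth within a sample, compare against a panel of reference samples, and flag ratios near 0.5 (deletion) or 1.5 (duplication). This is a generic read-depth approach, not the authors' pipeline; all numbers are invented:

```python
# Generic read-depth copy-number screen over amplicon NGS data.
import numpy as np

def depth_ratios(sample_depths, reference_matrix):
    """Per-amplicon ratio of a sample to the median of reference samples,
    after normalizing every sample to its own total depth."""
    s = sample_depths / sample_depths.sum()
    refs = reference_matrix / reference_matrix.sum(axis=1, keepdims=True)
    return s / np.median(refs, axis=0)

sample = np.array([900., 850., 400., 880., 910.])          # amplicon depths
refs = np.random.default_rng(1).normal(880, 40, (20, 5))   # 20 diploid controls
ratios = depth_ratios(sample, refs)
for i, r in enumerate(ratios):
    if r < 0.7:
        print(f"amplicon {i}: ratio {r:.2f} -> possible heterozygous deletion")
```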
Birth cohorts in Asia: The importance, advantages, and disadvantages of different-sized cohorts.
Kishi, Reiko; Araki, Atsuko; Minatoya, Machiko; Itoh, Sachiko; Goudarzi, Houman; Miyashita, Chihiro
2018-02-15
Asia contains half of the world's children, and the countries of Asia are the most rapidly industrializing nations on the globe. Environmental threats to the health of children in Asia are myriad. Several birth cohorts were started in Asia in the early 2000s, and currently more than 30 cohorts in 13 countries have been established. Cohorts can contain from approximately 100-200 to 20,000-30,000 participants. Furthermore, national cohorts targeting over 100,000 participants have been launched in Japan and Korea. The aim of this manuscript is to discuss the importance of Asian cohorts, and the advantages and disadvantages of different-sized cohorts. For example, one small-sized (n=514) cohort indicated that even relatively low-level exposure to dioxin in utero could alter birth size, neurodevelopment, and immune and hormonal functions. Several Asian cohorts focus on prenatal exposure to perfluoroalkyl substances and have reported associations with birth size, thyroid hormone levels, allergies and neurodevelopment. Inconsistent findings may possibly be explained by differences in exposure levels and target chemicals, and by possible statistical errors. In a smaller cohort, novel hypotheses or preliminary examinations are more easily verifiable. In larger cohorts, the etiology of rare diseases, such as birth defects, can be analyzed; however, they entail large costs and significant human resources. Therefore, conducting studies in only one large cohort may not always be the best strategy. International collaborations, such as the Birth Cohort Consortium of Asia, would cover the inherent limitation of sample size in addition to heterogeneity of exposure, ethnicity, and socioeconomic conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
Target-locking acquisition with real-time confocal (TARC) microscopy.
Lu, Peter J; Sims, Peter A; Oki, Hidekazu; Macarthur, James B; Weitz, David A
2007-07-09
We present a real-time target-locking confocal microscope that follows an object moving along an arbitrary path, even as it simultaneously changes its shape, size and orientation. This Target-locking Acquisition with Real-time Confocal (TARC) microscopy system integrates fast image processing and rapid image acquisition using a Nipkow spinning-disk confocal microscope. The system acquires a 3D stack of images, performs a full structural analysis to locate a feature of interest, moves the sample in response, and then collects the next 3D image stack. In this way, data collection is dynamically adjusted to keep a moving object centered in the field of view. We demonstrate the system's capabilities by target-locking freely diffusing clusters of attractive colloidal particles, and actively transported quantum dots (QDs) endocytosed into live cells free to move in three dimensions, for several hours. During this time, both the colloidal clusters and the live cells move distances several times the length of the imaging volume.
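A minimal sketch of such a target-locking loop, assuming hypothetical instrument-control callables `acquire_stack` and `move_stage_relative`; the full structural analysis is reduced here to a thresholded intensity centroid:

```python
# Acquire -> locate feature -> recenter stage -> repeat.
import numpy as np

def center_of_mass(stack, threshold):
    """Intensity-weighted centroid of voxels above threshold, as (z, y, x)."""
    mask = stack > threshold
    coords = np.argwhere(mask).astype(float)
    weights = stack[mask].astype(float)
    return (coords * weights[:, None]).sum(axis=0) / weights.sum()

def lock_on_target(acquire_stack, move_stage_relative, n_frames, threshold):
    """Keep the tracked object centered over n_frames acquisitions."""
    for _ in range(n_frames):
        stack = acquire_stack()                        # 3D numpy array
        target = center_of_mass(stack, threshold)
        offset = target - np.array(stack.shape) / 2.0  # voxels from stack center
        move_stage_relative(offset)                    # hypothetical stage call

# Usage would wire in real instrument callables, e.g.:
# lock_on_target(camera.grab_stack, stage.move_by, n_frames=1000, threshold=500)
```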
Considerations for successful cosmogenic 3He dating in accessory phases
NASA Astrophysics Data System (ADS)
Amidon, W. H.; Farley, K. A.; Rood, D. H.
2008-12-01
We have been working to develop cosmogenic 3He dating of phases other than the commonly dated olivine and pyroxene, especially apatite and zircon. Recent work by Dunai et al. underscores that cosmogenic 3He dating is complicated by 3He production via 6Li(n,α)3H → 3He. The reacting thermal neutrons can be produced from three distinct sources: nucleogenic processes (3Henuc), muon interactions (3Hemu), and high-energy "cosmogenic" neutrons (3Hecn). Accurate cosmogenic 3He dating requires determination of the relative fractions of Li-derived and spallation-derived 3He. An important complication for the fine-grained phases we are investigating is that both spallation and the 6Li reaction eject high-energy particles, with consequences for redistribution of 3He among phases in a rock. Although shielded samples can be used to estimate 3Henuc, they do not contain the 3Hecn component produced in the near surface. To calculate this component, we propose a procedure in which the bulk rock chemistry, helium closure age, 3He concentration, grain size and Li content of the target mineral are measured in a shielded sample. The average Li content of the adjacent minerals can then be calculated, which in turn allows calculation of the 3Hecn component in surface-exposed samples of the same lithology. If identical grain sizes are used in the shielded and surface-exposed samples, then "effective" Li can be calculated directly from the shielded sample, and it may not be necessary to measure Li at all. To help validate our theoretical understanding of Li-3He production, and to constrain the geologic contexts in which cosmogenic 3He dating with zircon and apatite is likely to be successful, results are presented from four field locations. For example, results from ~18 kyr old moraines in the Sierra Nevada show that the combination of low Li contents and high closure ages (>50 My) creates a small 3Hecn component (2%) but a large 3Henuc component (40-70%) for zircon and apatite. In contrast, the combination of high Li contents and a young closure age (0.6 My) in rhyolite from the Coso volcanic field leads to a large 3Hecn component (30%) and a small 3Henuc component (5%) in zircon. Analysis of samples from a variety of lithologies shows that zircon and apatite tend to be low in Li (1-10 ppm), but are vulnerable to implantation of 3He from adjacent minerals due to their small grain size, especially from minerals like biotite and hornblende. This point is well illustrated by data from both the Sierra Nevada and Coso examples, in which there is a strong correlation between grain size and 3He concentration for zircons due to implantation. In contrast, very large zircons (125-150 μm width) obtained from shielded samples of the Shoshone Falls rhyolite (SW Idaho) do not contain a significant implanted component. Thus, successful 3He dating of accessory phases requires low Li content (<10 ppm) in the target mineral and either 1) low Li in adjacent minerals, or 2) the use of large grain sizes (>100 μm). In high-Li cases, the fraction of 3Henuc is minimized in samples with young helium closure ages or longer durations of exposure. However, because the 3Hecn/3Hespall ratio is fixed for a given Li content, longer exposure will not reduce the fraction of 3Hecn.
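A minimal sketch of the component bookkeeping implied by this procedure: subtract the nucleogenic fraction (estimated from a shielded sample) and the Li-derived cosmogenic-neutron fraction (modeled) from the total, leaving the spallation component used for dating. The fractions below are invented, loosely in the range the abstract quotes for the Sierra Nevada case:

```python
# 3He component partition: total = spallation + nucleogenic + Li-derived cn.
total_3he = 1.00e6        # atoms/g measured in a surface-exposed zircon (invented)
f_nuc = 0.40              # nucleogenic fraction, estimated from a shielded sample
f_cn = 0.02               # Li-derived cosmogenic-neutron fraction, modeled

spallation_3he = total_3he * (1 - f_nuc - f_cn)
print(f"spallation 3He = {spallation_3he:.2e} atoms/g")
# an exposure age would then follow from spallation_3he / production_rate
```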
Moreano, Francisco; Busch, Ulrich; Engel, Karl-Heinz
2005-12-28
Milling fractions from conventional and transgenic corn were prepared at laboratory scale and used to study the influence of sample composition and heat-induced DNA degradation on the relative quantification of genetically modified organisms (GMO) in food products. Particle size distributions of the obtained fractions (coarse grits, regular grits, meal, and flour) were characterized using a laser diffraction system. The application of two DNA isolation protocols revealed a strong correlation between the degree of comminution of the milling fractions and the DNA yield in the extracts. Mixtures of milling fractions from conventional and transgenic material (1%) were prepared and analyzed via real-time polymerase chain reaction. Accurate quantification of the adjusted GMO content was only possible in mixtures containing conventional and transgenic material in the form of analogous milling fractions, whereas mixtures of fractions exhibiting different particle size distributions delivered significantly over- and underestimated GMO contents depending on their compositions. The process of heat-induced nucleic acid degradation was followed by applying two established quantitative assays differing in the lengths of the recombinant and reference target sequences (A, Δl(A) = -25 bp; B, Δl(B) = +16 bp; values relative to the amplicon length of the reference gene). Data obtained with method A resulted in underestimated recoveries of GMO contents in the samples of heat-treated products, reflecting the favored degradation of the longer target sequence used for detection of the transgene. In contrast, data obtained with method B resulted in increasingly overestimated recoveries of GMO contents. The results show how commonly used food-processing operations may distort the results of quantitative GMO analyses.
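A hedged sketch of relative real-time PCR quantification of GMO content via the generic comparative-Ct (ΔΔCt) approach, assuming ~100% amplification efficiency; the paper's exact calibration is not reproduced, and all Ct values are invented:

```python
# Generic ddCt estimate of GMO content relative to a 100%-GMO calibrator.
def gmo_percent(ct_transgene, ct_reference, calib_ct_transgene, calib_ct_reference):
    """Estimate GMO % from quantification cycles of transgene vs reference gene."""
    d_sample = ct_transgene - ct_reference        # delta-Ct of the unknown
    d_calib = calib_ct_transgene - calib_ct_reference  # delta-Ct of the calibrator
    return 100.0 * 2.0 ** -(d_sample - d_calib)   # assumes ~100% PCR efficiency

# Sample: transgene appears 7.6 cycles after reference; calibrator: 1.0 cycle
print(f"{gmo_percent(31.6, 24.0, 25.0, 24.0):.2f}% GMO")  # ≈1.0%
```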
Carpenter, Joseph K; Andrews, Leigh A; Witcraft, Sara M; Powers, Mark B; Smits, Jasper A J; Hofmann, Stefan G
2018-06-01
The purpose of this study was to examine the efficacy of cognitive behavioral therapy (CBT) for anxiety-related disorders based on randomized placebo-controlled trials. We included 41 studies that randomly assigned patients (N = 2,843) with acute stress disorder, generalized anxiety disorder (GAD), obsessive compulsive disorder (OCD), panic disorder (PD), posttraumatic stress disorder (PTSD), or social anxiety disorder (SAD) to CBT or a psychological or pill placebo condition. Findings demonstrated moderate placebo-controlled effects of CBT on target disorder symptoms (Hedges' g = 0.56), and small to moderate effects on other anxiety symptoms (Hedges' g = 0.38), depression (Hedges' g = 0.31), and quality of life (Hedges' g = 0.30). Response rates in CBT compared to placebo were associated with an odds ratio of 2.97. Effects on the target disorder were significantly stronger for completer samples than for intent-to-treat samples, and for individual CBT compared with group CBT in SAD and PTSD studies. Large effect sizes were found for OCD, GAD, and acute stress disorder, and small to moderate effect sizes were found for PTSD, SAD, and PD. In PTSD studies, dropout rates were greater in CBT (29.0%) than placebo (17.2%), but no difference in dropout was found across other disorders. Interventions primarily using exposure strategies had larger effect sizes than those using cognitive or combined cognitive-behavioral techniques, though this difference did not reach significance. Findings demonstrate that CBT is a moderately efficacious treatment for anxiety disorders when compared to placebo. More effective treatments are especially needed for PTSD, SAD, and PD. © 2018 Wiley Periodicals, Inc.
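The effect-size metric used throughout is Hedges' g, which is Cohen's d with a small-sample bias correction; a minimal sketch with invented group statistics:

```python
# Hedges' g: bias-corrected standardized mean difference between two groups.
from math import sqrt

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    sd_pooled = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled                  # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)   # Hedges' small-sample factor
    return d * correction

# CBT arm mean symptom reduction 12.0 (SD 6.0), placebo 8.5 (SD 6.5), n = 40 each
print(f"g = {hedges_g(12.0, 6.0, 40, 8.5, 6.5, 40):.2f}")  # ~0.55
```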
Object Classification With Joint Projection and Low-Rank Dictionary Learning.
Foroughi, Homa; Ray, Nilanjan; Hong Zhang
2018-02-01
For an object classification system, the most critical obstacles toward real-world applications are often caused by large intra-class variability, arising from different lightings, occlusion, and corruption, in limited sample sets. Most methods in the literature fail when the training samples are heavily occluded, corrupted or have significant illumination or viewpoint variations. Moreover, most existing methods, especially deep learning-based methods, need large training sets to achieve satisfactory recognition performance. Although pre-training a network on a generic large-scale data set and fine-tuning it to the small-sized target data set is a widely used technique, this does not help when the content of the base and target data sets is very different. To address these issues simultaneously, we propose a joint projection and low-rank dictionary learning method using dual graph constraints. Specifically, a structured class-specific dictionary is learned in the low-dimensional space, and the discrimination is further improved by imposing a graph constraint on the coding coefficients that maximizes intra-class compactness and inter-class separability. We enforce structural incoherence and low-rank constraints on sub-dictionaries to reduce the redundancy among them and to make them robust to variations and outliers. To preserve the intrinsic structure of the data, we introduce a supervised neighborhood graph into the framework to make the proposed method robust to small-sized and high-dimensional data sets. Experimental results on several benchmark data sets verify the superior performance of our method for object classification on small-sized data sets that include a considerable amount of variation and may have high-dimensional feature vectors.
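Low-rank constraints on sub-dictionaries are typically enforced in such solvers through the proximal operator of the nuclear norm, i.e. singular value thresholding; the sketch below shows only that single ingredient as a hedged illustration, not the authors' full algorithm:

```python
# Singular value thresholding: the proximal operator of the nuclear norm,
# argmin_X tau*||X||_* + 0.5*||X - M||_F^2, often applied per sub-dictionary.
import numpy as np

def svt(matrix, tau):
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)      # soft-threshold singular values
    return (u * s_shrunk) @ vt

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 20))                # stand-in for a sub-dictionary block
D_low_rank = svt(D, tau=5.0)
print("rank before:", np.linalg.matrix_rank(D),
      "after:", np.linalg.matrix_rank(D_low_rank))
```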
Constraints on the Compositions of Small Planets from the HARPS-N Consortium
NASA Astrophysics Data System (ADS)
Charbonneau, David
2015-12-01
HARPS-N is an ultra-stable fiber-fed high-resolution spectrograph optimized for the measurement of very precise radial velocities. The NASA Kepler Mission has demonstrated that planets with radii between 1 and 2.5 times that of the Earth are common around Sun-like stars. A chief objective of the HARPS-N Consortium is to measure accurately the masses and infer compositions for a sample of these small worlds. Here I report on our conclusions from the first three years. After analyzing the Kepler light curves to vet potential targets, favoring those with asteroseismic estimates of the stellar properties and excluding those likely to show high RV jitter, we lavished attention on our sample: we typically gathered 100 observations per target, which permitted a mass accuracy of better than 20%. We find that all planets smaller than 1.5 Earth radii are rocky, while we have yet to find a rocky planet larger than this size. I report on the resulting constraints on the planetary compositions, including previously unpublished estimates for several worlds. Comparison of the inferred iron-to-rock ratios to the spectroscopically determined abundances of Fe, Mg, and Si in the stellar atmospheres should provide insight into the formation of terrestrial worlds. I address the transition from rocky planets to Neptune-like worlds, noting that our targets are highly irradiated and hence have likely experienced atmospheric mass loss. The K2 and TESS Missions will provide a list of similarly sized planets around much brighter stars, for which the greater apparent brightness will permit us to measure densities of planets at longer orbital periods, where atmospheric escape will be less important.
An efficient biosensor made of an electromagnetic trap and a magneto-resistive sensor.
Li, Fuquan; Kosel, Jürgen
2014-09-15
Magneto-resistive biosensors have been found to be useful because of their high sensitivity, low cost, small size, and direct electrical output. They use super-paramagnetic beads to label a biological target and detect it via sensing the stray field. In this paper, we report a new setup for magnetic biosensors, replacing the conventional "sandwich" concept with an electromagnetic trap. We demonstrate the capability of the biosensor in the detection of E. coli. The trap is formed by a current-carrying microwire that attracts the magnetic beads into a sensing space on top of a tunnel magneto-resistive sensor. The sensor signal depends on the number of beads in the sensing space, which depends on the size of the beads. This enables the detection of biological targets, because such targets increase the volume of the beads. Experiments were carried out with a 6 µm wide microwire, which attracted the magnetic beads from a distance of 60 μm, when a current of 30 mA was applied. A sensing space of 30 µm in length and 6 µm in width was defined by the magnetic sensor. The results showed that individual E. coli bacterium inside the sensing space could be detected using super-paramagnetic beads that are 2.8 µm in diameter. The electromagnetic trap setup greatly simplifies the device and reduces the detection process to two steps: (i) mixing the bacteria with magnetic beads and (ii) applying the sample solution to the sensor for measurement, which can be accomplished within about 30 min with a sample volume in the µl range. This setup also ensures that the biosensor can be cleaned easily and re-used immediately. The presented setup is readily integrated on chips via standard microfabrication techniques. Copyright © 2014 Elsevier B.V. All rights reserved.
Further analysis of LDEF FRECOPA micrometeoroid remnants
NASA Technical Reports Server (NTRS)
Borg, J.; Bunch, T. E.; Radicatidibrozolo, Filippo
1992-01-01
Experiments dedicated to the detection of interplanetary dust particles (IDPs) were exposed within the FRECOPA payload, installed on the face of the LDEF directly opposed to the velocity vector (west-facing direction, location B3). We were mainly interested in the analysis of hypervelocity impact features of sizes ≤10 μm found in thick Al targets devoted to the study of impact features. For the 15 craters found in the scanned area (approximately 4 sq. cm), the chemical analysis suggests an extraterrestrial origin for the impacting particles. The main elements identified are those usually referred to as chondritic elements: Na, Mg, Si, S, Ca, and Fe are found in various proportions, intrinsic Al being masked by the Al target; we notice a strong depletion in Ni, which was never observed in our samples. Furthermore, C and O are present in 90 percent of the cases; the C/O peak height ratio varies from 0.1 to 3. Impactor simulations by light-gas-gun hypervelocity impact experiments have shown that meaningful biogenic element and compound information may be obtained from IDP residues for impacts below critical velocities, which are ≤4 km/s for particles larger than 100 μm in diameter. Our results for the smaller size fraction of IDPs suggest that at such sizes the critical velocity could be higher by a factor of 2 or 3, as chemical analysis of the remnants was possible in all the identified impact craters, performed on targets possibly hit at velocities ≥7.5 km/s, the spacecraft velocity. These samples are now subjected to an imagery and analytical protocol that includes FESEM (field emission scanning electron microscopy) and LIMS (laser ionization mass spectrometry). The LIMS analyses were performed using the LIMA-ZA instrument. Results are presented, clearly indicating that such small events show crater features analogous to what is observed at larger sizes; our first analytical results, obtained for 2 events (P6 and P10), suggest that N is present in the IDP remnants in which C and O were identified by EDX analysis. In one case (P6), enrichment in K and P is observed. Surface contamination by NaCl is evident on the FRECOPA surfaces.
Transiting Exoplanet Survey Satellite (TESS)
NASA Technical Reports Server (NTRS)
Ricker, G. R.; Clampin, M.; Latham, D. W.; Seager, S.; Vanderspek, R. K.; Villasenor, J. S.; Winn, J. N.
2012-01-01
The Transiting Exoplanet Survey Satellite (TESS) will discover thousands of exoplanets in orbit around the brightest stars in the sky. In a two-year survey, TESS will monitor more than 500,000 stars for temporary drops in brightness caused by planetary transits. This first-ever spaceborne all-sky transit survey will identify planets ranging from Earth-sized to gas giants, around a wide range of stellar types and orbital distances. No ground-based survey can achieve this feat. A large fraction of TESS target stars will be 30-100 times brighter than those observed by the Kepler satellite, and therefore TESS planets will be far easier to characterize with follow-up observations. TESS will make it possible to study the masses, sizes, densities, orbits, and atmospheres of a large cohort of small planets, including a sample of rocky worlds in the habitable zones of their host stars. TESS will provide prime targets for observation with the James Webb Space Telescope (JWST), as well as other large ground-based and space-based telescopes of the future. TESS data will be released with minimal delay (no proprietary period), inviting immediate community-wide efforts to study the new planets. The TESS legacy will be a catalog of the very nearest and brightest main-sequence stars hosting transiting exoplanets, thus providing future observers with the most favorable targets for detailed investigations.
NASA Astrophysics Data System (ADS)
Hiroi, T.; Kaiden, H.; Yamaguchi, A.; Kojima, H.; Uemoto, K.; Ohtake, M.; Arai, T.; Sasaki, S.
2016-12-01
Lunar meteorite chip samples recovered by the National Institute of Polar Research (NIPR) have been studied by a UV-visible-near-infrared spectrometer, targeting small areas of about 3 × 2 mm in size. Rock types and approximate mineral compositions of the studied meteorites have been identified or obtained through this spectral survey with no sample preparation required. A linear deconvolution method was used to derive end-member mineral spectra from spectra of multiple clasts whenever possible. In addition, the modified Gaussian model was used in an attempt to derive their major pyroxene compositions. This study demonstrates that a visible-near-infrared spectrometer on a lunar rover would be useful for identifying these kinds of unaltered (non-space-weathered) lunar rocks. In order to prepare for such a future mission, further studies that utilize a smaller spot size are desirable to improve the accuracy of identifying the clasts and mineral phases of the rocks.
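The linear deconvolution step can be pictured as non-negative least-squares unmixing of a measured spectrum into end-member spectra. Below is a minimal sketch in Python; the Gaussian-band "end-members" and their parameters are synthetic stand-ins for real mineral spectra, not NIPR data.

```python
# Minimal sketch of linear spectral deconvolution (unmixing): the measured
# reflectance is modeled as a non-negative mixture of end-member spectra.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(0.4, 2.5, 200)  # um, UV-visible-near-IR range

def gaussian_band(center, depth, width):
    """Toy end-member: flat continuum with one absorption band."""
    return 1.0 - depth * np.exp(-((wavelengths - center) ** 2) / (2 * width ** 2))

endmembers = np.column_stack([
    gaussian_band(1.0, 0.4, 0.15),  # "pyroxene-like" 1-um band
    gaussian_band(1.3, 0.3, 0.20),  # "plagioclase-like" band
    gaussian_band(2.0, 0.2, 0.25),  # second "pyroxene-like" band
])

true_fractions = np.array([0.6, 0.3, 0.1])
noise = np.random.default_rng(7).normal(0, 0.005, wavelengths.size)
measured = endmembers @ true_fractions + noise

# non-negative least squares recovers the mixing fractions
fractions, residual = nnls(endmembers, measured)
print("estimated fractions:", fractions / fractions.sum())
```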
Bogdanova, Yelena; Yee, Megan K; Ho, Vivian T; Cicerone, Keith D
Comprehensive review of the use of computerized treatment as a rehabilitation tool for attention and executive function in adults (aged 18 years or older) who have suffered an acquired brain injury. Systematic review of empirical research. Two reviewers independently assessed articles using the methodological quality criteria of Cicerone et al. Data extracted included sample size, diagnosis, intervention information, treatment schedule, assessment methods, and outcome measures. A literature review (PubMed, EMBASE, Ovid, Cochrane, PsycINFO, CINAHL) generated a total of 4931 publications. Twenty-eight studies using computerized cognitive interventions targeting attention and executive functions were included in this review. In 23 studies, significant improvements in attention and executive function subsequent to training were reported; in the remaining 5, promising trends were observed. Preliminary evidence suggests improvements in cognitive function following computerized rehabilitation for acquired brain injury populations, including traumatic brain injury and stroke. Further studies are needed to address methodological issues (e.g., small sample size, inadequate control groups) and to inform development of guidelines and standardized protocols.
Park, Bo Youn; Kim, Sujin; Cho, Yang Seok
2018-02-01
The congruency effect of a task-irrelevant distractor has been found to be modulated by task-relevant set size and display set size. The present study used a psychological refractory period (PRP) paradigm to examine the cognitive loci of the display set size effect (dilution effect) and the task-relevant set size effect (perceptual load effect) on distractor interference. A tone discrimination task (Task 1), in which a response was made to the pitch of the target tone, was followed by a letter discrimination task (Task 2) in which different types of visual target display were used. In Experiment 1, in which display set size was manipulated to examine the nature of the display set size effect on distractor interference in Task 2, the modulation of the congruency effect by display set size was observed at both short and long stimulus-onset asynchronies (SOAs), indicating that the display set size effect occurred after the target was selected for processing in the focused attention stage. In Experiment 2, in which task-relevant set size was manipulated to examine the nature of the task-relevant set size effect on distractor interference in Task 2, the effects of task-relevant set size increased with SOA, suggesting that the target selection efficiency in the preattentive stage was impaired with increasing task-relevant set size. These results suggest that display set size and task-relevant set size modulate distractor processing in different ways.
Application of Laser Mass Spectrometry to Art and Archaeology
NASA Technical Reports Server (NTRS)
Gulian, Lisa E.; Callahan, Michael P.; Muliadi, Sarah; Owens, Shawn; McGovern, Patrick E.; Schmidt, Catherine M.; Trentelman, Karen A.; deVries, Mattanjah S.
2011-01-01
REMPI laser mass spectrometry is a combination of resonance-enhanced multiphoton ionization spectroscopy and time-of-flight mass spectrometry. This technique enables the collection of mass-specific optical spectra as well as of optically selected mass spectra. Analytes are jet-cooled by entrainment in a molecular beam, and this low-temperature gas-phase analysis has the benefit of excellent vibronic resolution. Utilizing this method, mass spectrometric analysis of historically relevant samples can be simplified and improved: optical selection of targets eliminates the need for chromatography, while knowledge of a target's gas-phase spectroscopy allows facile differentiation of molecules that are considered spectroscopically indistinguishable in the aqueous phase. These two factors allow smaller sample sizes than commercial MS instruments require, which in turn means less damage to objects of antiquity. We have explored methods to optimize REMPI laser mass spectrometry as an analytical tool for archaeology, using theobromine and caffeine as molecular markers in Mesoamerican pottery, and are expanding this approach to the field of art to examine laccaic acid in shellacs.
Competitive Deep-Belief Networks for Underwater Acoustic Target Recognition
Shen, Sheng; Yao, Xiaohui; Sheng, Meiping; Wang, Chen
2018-01-01
Underwater acoustic target recognition based on ship-radiated noise is a small-sample-size recognition problem. A competitive deep-belief network is proposed to learn features with more discriminative information from labeled and unlabeled samples. The proposed model consists of four stages: (1) a standard restricted Boltzmann machine is pretrained using a large number of unlabeled data to initialize its parameters; (2) the hidden units are grouped according to categories, which provides an initial clustering model for competitive learning; (3) competitive training and back-propagation algorithms are used to update the parameters to accomplish the task of clustering; (4) by applying layer-wise training and supervised fine-tuning, a deep neural network is built to obtain features. Experimental results show that the proposed method can achieve a classification accuracy of 90.89%, which is 8.95% higher than the accuracy obtained by the compared methods. In addition, the highest accuracy of our method is obtained with fewer features than the other methods. PMID:29570642
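Stage (1) of the pipeline, unsupervised RBM pretraining, can be sketched with one-step contrastive divergence (CD-1). The dimensions, learning rate, and random binary "data" below are illustrative placeholders, not the paper's configuration.

```python
# Minimal sketch of RBM pretraining with one-step contrastive divergence
# (CD-1) on unlabeled samples; all hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 64, 32, 0.05
W = rng.normal(0, 0.01, (n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = (rng.random((500, n_visible)) > 0.5).astype(float)  # stand-in for unlabeled data

for epoch in range(10):
    for v0 in X:
        # positive phase: hidden probabilities given the data
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        # negative phase: one Gibbs step back to visible, then hidden again
        p_v1 = sigmoid(h0 @ W.T + b_v)
        p_h1 = sigmoid(p_v1 @ W + b_h)
        # CD-1 parameter updates
        W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
        b_v += lr * (v0 - p_v1)
        b_h += lr * (p_h0 - p_h1)
```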
Illuminator, a desktop program for mutation detection using short-read clonal sequencing.
Carr, Ian M; Morgan, Joanne E; Diggle, Christine P; Sheridan, Eamonn; Markham, Alexander F; Logan, Clare V; Inglehearn, Chris F; Taylor, Graham R; Bonthron, David T
2011-10-01
Current methods for sequencing clonal populations of DNA molecules yield several gigabases of data per day, typically comprising reads of < 100 nt. Such datasets permit widespread genome resequencing and transcriptome analysis or other quantitative tasks. However, this huge capacity can also be harnessed for the resequencing of smaller (gene-sized) target regions, through the simultaneous parallel analysis of multiple subjects, using sample "tagging" or "indexing". These methods promise to have a huge impact on diagnostic mutation analysis and candidate gene testing. Here we describe a software package developed for such studies, offering the ability to resolve pooled samples carrying barcode tags and to align reads to a reference sequence using a mutation-tolerant process. The program, Illuminator, can identify rare sequence variants, including insertions and deletions, and permits interactive data analysis on standard desktop computers. It facilitates the effective analysis of targeted clonal sequencer data without dedicated computational infrastructure or specialized training. Copyright © 2011 Elsevier Inc. All rights reserved.
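One step the program performs, resolving pooled reads by their barcode tags, can be pictured as nearest-barcode assignment with a small mismatch budget. The sketch below is a generic Python illustration of that idea, not Illuminator's actual code; the barcodes, reads, and the assign() helper are invented.

```python
# Generic sketch of barcode demultiplexing: each read's leading tag is
# matched to the closest known barcode, tolerating one mismatch.
from collections import defaultdict

BARCODES = {"ACGT": "subject_1", "TGCA": "subject_2", "GATC": "subject_3"}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def assign(read, max_mismatches=1):
    """Return the subject whose barcode best matches the read's tag."""
    tag = read[:4]
    best = min(BARCODES, key=lambda bc: hamming(tag, bc))
    return BARCODES[best] if hamming(tag, best) <= max_mismatches else None

pools = defaultdict(list)
for read in ["ACGTTTGACCA", "ACCTTTGACCA", "TGCAGGGTTAC", "NNNNGGGTTAC"]:
    subject = assign(read)
    if subject is not None:
        pools[subject].append(read[4:])  # strip the tag before alignment
print(dict(pools))  # unassignable reads (e.g. NNNN...) are dropped
```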
Young Women’s Dynamic Family Size Preferences in the Context of Transitioning Fertility
Yeatman, Sara; Sennott, Christie; Culpepper, Steven
2013-01-01
Dynamic theories of family size preferences posit that they are not a fixed and stable goal but rather are akin to a moving target that changes within individuals over time. Nonetheless, in high-fertility contexts, changes in family size preferences tend to be attributed to low construct validity and measurement error instead of genuine revisions in preferences. To address the appropriateness of this incongruity, the present study examines evidence for the sequential model of fertility among a sample of young Malawian women living in a context of transitioning fertility. Using eight waves of closely spaced data and fixed-effects models, we find that these women frequently change their reported family size preferences and that these changes are often associated with changes in their relationship and reproductive circumstances. The predictability of change gives credence to the argument that ideal family size is a meaningful construct, even in this higher-fertility setting. Changes are not equally predictable across all women, however, and gamma regression results demonstrate that women for whom reproduction is a more distant goal change their fertility preferences in less-predictable ways. PMID:23619999
Suhonen, Riitta; Stolt, Minna; Katajisto, Jouko; Leino-Kilpi, Helena
2015-12-01
To report a review of the quality of the sampling, sample, and data collection procedures in empirical nursing research on ethical climate where nurses were informants. Surveys are needed to obtain generalisable information about topics sensitive to nursing. The methodological quality of the studies is of key concern, especially the description of sampling and data collection procedures. Methodological literature review. Using the electronic MEDLINE database, empirical nursing research articles focusing on ethical climate were accessed in 2013 (earliest-22 November 2013). Using the search terms 'ethical' AND ('climate*' OR 'environment*') AND ('nurse*' OR 'nursing'), 376 citations were retrieved. Based on a four-phase retrieval process, 26 studies were included in the detailed analysis. The sampling method was reported in 58% of the studies, and it was random in a minority of them (26%). The target sample and its size were identified in most studies (92%), whereas a justification for the sample size was less often given. In over two-thirds (69%) of the studies with an identifiable response rate, it was below 75%. A variety of data collection procedures were used, with a large amount of missing data about the details of who distributed, recruited and collected the questionnaires. Methods to increase response rates were seldom described. Discussion of nonresponse, representativeness of the sample and generalisability of the results was missing in many studies. This review highlights the methodological challenges and developments that need to be considered in ensuring the use of valid information in developing health care through research findings. © 2015 Nordic College of Caring Science.
Mindfulness Meditation for Substance Use Disorders: A Systematic Review
Zgierska, Aleksandra; Rabago, David; Chawla, Neharika; Kushner, Kenneth; Koehler, Robert; Marlatt, Allan
2009-01-01
Relapse is common in substance use disorders (SUDs), even among treated individuals. The goal of this article was to systematically review the existing evidence on mindfulness meditation-based interventions (MM) for SUDs. The comprehensive search for and review of literature found over 2,000 abstracts and resulted in 25 eligible manuscripts (22 published, 3 unpublished: 8 RCTs, 7 controlled non-randomized, 6 non-controlled prospective, 2 qualitative studies, 1 case report). When appropriate, methodological quality, absolute risk reduction, number needed to treat, and effect size (ES) were assessed. Overall, although preliminary evidence suggests MM efficacy and safety, conclusive data for MM as a treatment of SUDs are lacking. Significant methodological limitations exist in most studies. Further, it is unclear which persons with SUDs might benefit most from MM. Future trials must be of sufficient sample size to answer a specific clinical question and should target both assessment of effect size and mechanisms of action. PMID:19904664
NASA Astrophysics Data System (ADS)
Rubin, Alan E.; Ziegler, Karen; Young, Edward D.
2008-02-01
Literature data demonstrate that on a global, asteroid-wide scale (plausibly on the order of 100 km), ordinary chondrites (OC) have heterogeneous oxidation states and O-isotopic compositions (represented, respectively, by the mean olivine Fa and bulk Δ17O compositions of equilibrated samples). Samples analyzed here include: (a) two H5 chondrite Antarctic finds (ALHA79046 and TIL 82415) that have the same cosmic-ray exposure age (7.6 Ma) and were probably within ~1 km of each other when they were excavated from the H-chondrite parent body, (b) different individual stones from the Holbrook L/LL6 fall that were probably within ~1 m of each other when their parent meteoroid penetrated the Earth's atmosphere, and (c) drill cores from a large slab of the Estacado H6 find located within a few tens of centimeters of each other. Our results indicate that OC are heterogeneous in their bulk oxidation state and O-isotopic composition on 100-km-size scales, but homogeneous on meter-, decimeter- and centimeter-size scales. (On kilometer size scales, oxidation state is heterogeneous, but O isotopes appear to be homogeneous.) The asteroid-wide heterogeneity in oxidation state and O-isotopic composition was inherited from the solar nebula. The homogeneity on small size scales was probably caused in part by fluid-assisted metamorphism and mainly by impact-gardening processes (which are most effective at mixing target materials on scales of ⩽1 m).
Method And Apparatus For Detecting Chemical Binding
Warner, Benjamin P.; Havrilla, George J.; Miller, Thomasin C.; Wells, Cyndi A.
2005-02-22
The method for screening binding between a target binder and potential pharmaceutical chemicals involves sending a solution (preferably an aqueous solution) of the target binder through a conduit to a size exclusion filter, the target binder being too large to pass through the size exclusion filter, and then sending a solution of one or more potential pharmaceutical chemicals (preferably an aqueous solution) through the same conduit to the size exclusion filter after target binder has collected on the filter. The potential pharmaceutical chemicals are small enough to pass through the filter. Afterwards, x-rays are sent from an x-ray source to the size exclusion filter, and if the potential pharmaceutical chemicals form a complex with the target binder, the complex produces an x-ray fluorescence signal having an intensity that indicates that a complex has formed.
Automated vehicle detection in forward-looking infrared imagery.
Der, Sandor; Chan, Alex; Nasrabadi, Nasser; Kwon, Heesung
2004-01-10
We describe an algorithm for the detection and clutter rejection of military vehicles in forward-looking infrared (FLIR) imagery. The detection algorithm is designed to be a prescreener that selects regions for further analysis and uses a spatial anomaly approach that looks for target-sized regions of the image that differ in texture, brightness, edge strength, or other spatial characteristics. The features are linearly combined to form a confidence image that is thresholded to find likely target locations. The clutter rejection portion uses target-specific information extracted from training samples to reduce the false alarms of the detector. The outputs of the clutter rejecter and detector are combined by a higher-level evidence integrator to improve performance over simple concatenation of the detector and clutter rejecter. The algorithm has been applied to a large number of FLIR imagery sets, and some of these results are presented here.
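The prescreener logic, computing target-sized spatial features, combining them linearly into a confidence image, and thresholding, can be sketched as follows. The specific features, weights, and threshold below are illustrative choices, not the published algorithm's values.

```python
# Sketch of a spatial-anomaly prescreener: target-sized feature maps are
# linearly combined into a confidence image, then thresholded to flag
# likely target locations for a downstream clutter rejecter.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
img = rng.normal(0, 1, (128, 128))
img[60:70, 60:70] += 3.0            # a bright "target-sized" region

size = 9                            # assumed target size in pixels
local_mean = ndimage.uniform_filter(img, size)
background = ndimage.uniform_filter(img, 3 * size)
brightness = local_mean - background                           # local contrast
edges = ndimage.uniform_filter(ndimage.sobel(img) ** 2, size)  # edge strength

# linear combination -> confidence image -> threshold
weights = (1.0, 0.5)
confidence = weights[0] * brightness + weights[1] * (edges / edges.max())
detections = np.argwhere(confidence > confidence.mean() + 4 * confidence.std())
print(f"{len(detections)} candidate pixels flagged for clutter rejection")
```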
The Time-domain Spectroscopic Survey: Target Selection for Repeat Spectroscopy
NASA Astrophysics Data System (ADS)
MacLeod, Chelsea L.; Green, Paul J.; Anderson, Scott F.; Eracleous, Michael; Ruan, John J.; Runnoe, Jessie; Nielsen Brandt, William; Badenes, Carles; Greene, Jenny; Morganson, Eric; Schmidt, Sarah J.; Schwope, Axel; Shen, Yue; Amaro, Rachael; Lebleu, Amy; Filiz Ak, Nurten; Grier, Catherine J.; Hoover, Daniel; McGraw, Sean M.; Dawson, Kyle; Hall, Patrick B.; Hawley, Suzanne L.; Mariappan, Vivek; Myers, Adam D.; Pâris, Isabelle; Schneider, Donald P.; Stassun, Keivan G.; Bershady, Matthew A.; Blanton, Michael R.; Seo, Hee-Jong; Tinker, Jeremy; Fernández-Trincado, J. G.; Chambers, Kenneth; Kaiser, Nick; Kudritzki, R.-P.; Magnier, Eugene; Metcalfe, Nigel; Waters, Chris Z.
2018-01-01
As astronomers increasingly exploit the information available in the time domain, spectroscopic variability in particular opens broad new channels of investigation. Here we describe the selection algorithms for all targets intended for repeat spectroscopy in the Time Domain Spectroscopic Survey (TDSS), part of the extended Baryon Oscillation Spectroscopic Survey within the Sloan Digital Sky Survey (SDSS)-IV. Also discussed are the scientific rationale and technical constraints leading to these target selections. The TDSS includes a large “repeat quasar spectroscopy” (RQS) program delivering ∼13,000 repeat spectra of confirmed SDSS quasars, and several smaller “few-epoch spectroscopy” (FES) programs targeting specific classes of quasars as well as stars. The RQS program aims to provide a large and diverse quasar data set for studying variations in quasar spectra on timescales of years, a comparison sample for the FES quasar programs, and an opportunity for discovering rare, serendipitous events. The FES programs cover a wide variety of phenomena in both quasars and stars. Quasar FES programs target broad absorption line quasars, high signal-to-noise ratio normal broad line quasars, quasars with double-peaked or very asymmetric broad emission line profiles, binary supermassive black hole candidates, and the most photometrically variable quasars. Strongly variable stars are also targeted for repeat spectroscopy, encompassing many types of eclipsing binary systems, and classical pulsators like RR Lyrae. Other stellar FES programs allow spectroscopic variability studies of active ultracool dwarf stars, dwarf carbon stars, and white dwarf/M dwarf spectroscopic binaries. We present example TDSS spectra and describe anticipated sample sizes and results.
Nguyen, Duong Duy; Kenny, Dianna T
2009-11-01
Muscle tension dysphonia (MTD) is a voice disorder with deteriorated vocal quality, particularly pitch problems. Because pitch is mainly controlled by the laryngeal muscles, and because MTD is characterized by increased laryngeal muscle tension, we hypothesized that it may result in problems in pitch target implementation in tonal languages. We examined tonal samples of 42 Vietnamese female primary school teachers diagnosed with MTD and compared them with 30 vocally healthy female teachers who spoke the same dialect. Tonal data were analyzed using Computerized Speech Lab (CSL-4300B) for Windows. From these tonal samples, fundamental frequency (F0) was measured at target points specified by contour examination. Parameters representing pitch movement, including the time, size, and speed of movement, were measured for the falling tone and rising tone. We found that F0 at target points in the MTD group was lowered in most tones, especially tones with extensive F0 variation. In the MTD group, the target F0 of the broken tone in isolation was 37.5 Hz lower (P<0.01) and the target F0 of the rising tone in isolation was 46 Hz lower (P<0.01) than in the control group. In the MTD group, the speed of pitch fall in the falling tone in isolation was faster than in the control group by 2.2 semitones/second (st/s) (P<0.05), and the speed of pitch rise in the rising tone in isolation was slower by 7.2 st/s (P<0.01). These results demonstrate that MTD is associated with problems in tonal pitch variation.
Zheng, Xianlin; Lu, Yiqing; Zhao, Jiangbo; Zhang, Yuhai; Ren, Wei; Liu, Deming; Lu, Jie; Piper, James A; Leif, Robert C; Liu, Xiaogang; Jin, Dayong
2016-01-19
Compared with routine microscopy imaging of a few analytes at a time, rapid scanning through the whole sample area of a microscope slide to locate every single target object offers many advantages in terms of simplicity, speed, throughput, and potential for robust quantitative analysis. Existing techniques that accommodate solid-phase samples incorporating individual micrometer-sized targets generally rely on digital microscopy and image analysis, with intrinsically low throughput and reliability. Here, we report an advanced on-the-fly stage scanning method to achieve high-precision target location across the whole slide. By integrating X- and Y-axis linear encoders to a motorized stage as the virtual "grids" that provide real-time positional references, we demonstrate an orthogonal scanning automated microscopy (OSAM) technique which can search a coverslip area of 50 × 24 mm² in just 5.3 min and locate individual 15 μm lanthanide luminescent microspheres with standard deviations of 1.38 and 1.75 μm in the X and Y directions. Alongside implementation of an autofocus unit that compensates the tilt of a slide in the Z-axis in real time, we increase the luminescence detection efficiency by 35% with an improved coefficient of variation. We demonstrate the capability of advanced OSAM for robust quantification of luminescence intensities and lifetimes for a variety of micrometer-scale luminescent targets, specifically single down-shifting and upconversion microspheres, crystalline microplates, and color-barcoded microrods, as well as quantitative suspension array assays of biotinylated-DNA functionalized upconversion nanoparticles.
Freeform lens generation for quasi-far-field successive illumination targets
NASA Astrophysics Data System (ADS)
Zhuang, Zhenfeng; Thibault, Simon
2018-07-01
A predefined mapping to tailor one or more freeform surfaces is commonly employed to build a freeform illumination system: the emergent rays from the light source are mapped point-to-point onto a prescribed target mesh at a pre-determined lighting distance, which limits design flexibility. To overcome this limitation and find optimal designs, a freeform lens is developed here to produce the desired rectangular illumination distribution at successive target planes over a range of quasi-far-field lighting distances. It is generated using numerical solutions to find an initial starting point, and an approach is introduced to obtain the variables that parameterize the freeform surface. The relative standard deviation, a useful figure of merit for this analysis, is set up as the merit function for illumination non-uniformity at the successively sampled target planes, so that the irradiance distribution can be ensured over the specified range of lighting distances. A design example of a freeform illumination system, composed of a spherical surface and a freeform surface, is given to produce the desired irradiance distribution within the lighting distance range; it achieves low non-uniformity and high efficiency. Compared with the conventional approach, the uniformity at the sampled targets is dramatically enhanced, and the design tolerates a large LED size.
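The merit function described, the relative standard deviation of irradiance accumulated over the successively sampled target planes, is straightforward to state in code. The irradiance maps below are synthetic placeholders for ray-traced results.

```python
# Minimal sketch of the merit function: relative standard deviation (RSD)
# of irradiance, summed over successive sampled target planes.
import numpy as np

def rsd(irradiance):
    """Relative standard deviation = std / mean over the target region."""
    return np.std(irradiance) / np.mean(irradiance)

def merit(planes):
    """Total non-uniformity over all sampled lighting distances."""
    return sum(rsd(p) for p in planes)

rng = np.random.default_rng(2)
# stand-ins for irradiance sampled at several quasi-far-field distances
planes = [1.0 + 0.05 * rng.standard_normal((50, 80)) for _ in range(5)]
print(f"merit = {merit(planes):.4f}  (0 would be perfectly uniform)")
```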
Pearce, Michael; Hee, Siew Wan; Madan, Jason; Posch, Martin; Day, Simon; Miller, Frank; Zohar, Sarah; Stallard, Nigel
2018-02-08
Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test with other information to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population. We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future. The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored. Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently trial sample size) depending on the size of the future population for whom the treatment under investigation is intended. This might be particularly suitable for small populations when there is considerable information about the patient population.
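The core computation can be sketched as a grid search over sample size, averaging trial power over the prior on the treatment effect and trading the expected gain against per-patient costs. Every number below (prior, utility, costs) is an invented placeholder, and the model is far simpler than the paper's full value-of-information formulation.

```python
# Toy decision-theoretic sample-size search: pick n maximizing expected net
# gain, with power averaged over a normal prior on the true effect.
import numpy as np
from scipy import stats

alpha = 0.025                    # one-sided significance level
sigma = 1.0                      # known SD of the normally distributed endpoint
prior_mean, prior_sd = 0.3, 0.2  # prior on the true effect delta (invented)
gain_if_approved = 1e7           # societal value of approving an effective drug
cost_per_patient = 2e3

def expected_net_gain(n, n_draws=20000):
    rng = np.random.default_rng(0)
    delta = rng.normal(prior_mean, prior_sd, n_draws)  # sample the prior
    se = sigma * np.sqrt(2.0 / n)                      # two-arm comparison
    z_crit = stats.norm.ppf(1 - alpha)
    power = 1 - stats.norm.cdf(z_crit - delta / se)    # P(reject | delta)
    # value accrues only for truly beneficial drugs that win approval
    value = np.mean(np.where(delta > 0, power * gain_if_approved, 0.0))
    return value - 2 * n * cost_per_patient

sizes = np.arange(10, 1000, 10)
best = max(sizes, key=expected_net_gain)
print(f"optimal per-arm sample size under these assumptions: {best}")
```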
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in tests of fit have been developed. One strategy is to adjust the sample size in the analysis of fit; an alternative is to adopt a random-sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random-sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value of an actual random sample of the relevant size. Hence, fit is exaggerated and misfit underestimated when the adjusted sample size function is used. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
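The two strategies compared in the study can be illustrated directly: the adjustment approach rescales the full-sample chi-square statistic to a target n, while the alternative recomputes the statistic on an actual random subsample of that size. The data-generating model below is invented for illustration.

```python
# Contrast the two strategies: scaling the full-sample chi-square statistic
# to a smaller nominal n versus recomputing it on a real random subsample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
full = rng.choice([0, 1, 2], size=21000, p=[0.50, 0.30, 0.20])
expected_p = np.array([0.52, 0.30, 0.18])  # slightly misspecified model

def chisq(sample):
    observed = np.bincount(sample, minlength=3)
    return stats.chisquare(observed, expected_p * sample.size).statistic

for target_n in (5000, 1000, 200):
    adjusted = chisq(full) * target_n / full.size              # scaled statistic
    subsample = chisq(rng.choice(full, target_n, replace=False))
    print(f"n={target_n}: adjusted={adjusted:.1f}, random subsample={subsample:.1f}")
```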
Reported shoes size during GH therapy: is foot overgrowth a myth or reality?
Lago, Débora C F; Coutinho, Cláudia A; Kochi, Cristiane; Longui, Carlos A
2015-10-01
To describe population reference values for shoe size and to identify possible disproportional foot growth during GH therapy. Construction of a percentile chart based on 3,651 controls (male: 1,838; female: 1,813). The GH-treated group included 13 children with idiopathic short stature (ISS) and 50 children with normal height but with height prediction below their target height (male: 26, female: 37; mean ± SD age 13.3 ± 1.9 and 12.9 ± 1.5 years, respectively). GH (0.05 mg/kg/day) was used for 3.2 ± 1.6 years, ranging from 1.0-10.3 years. Height expressed as SDS, target height (TH) SDS, self-reported shoe size and target shoe size (TSS) SDS were recorded. Reference values were established and are presented as a foot SDS calculator available online at www.clinicalcaselearning.com/v2. Definitive shoe size was attained in controls at a mean age of 13 y in girls and 14 y in boys (average values 37 and 40, respectively). In the study group, shoe size was -0.15 ± 0.9 and -0.02 ± 1.3 SDS, with target feet of 0.08 ± 0.8 and -0.27 ± 0.7 SDS in males and females, respectively. There were significant positive correlations between shoe size and familial TSS, between shoe size and height, and between TSS and TH. There was no correlation between duration of GH treatment and shoe size. Our data suggest that during long-term treatment with GH, patients maintain proportional growth in shoe size and height, and the expected correlation with the familial target. We conclude that there is no excessive increase in foot size, as estimated by shoe size, in individuals under long-term GH therapy.
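The SDS (standard deviation score) used throughout is simply the deviation from an age- and sex-matched reference mean expressed in units of the reference SD. A one-line sketch with invented reference values:

```python
# SDS (standard deviation score) calculation, as in the online foot-size
# calculator mentioned above; the reference mean and SD here are invented.
def sds(value, ref_mean, ref_sd):
    return (value - ref_mean) / ref_sd

print(round(sds(38, 37.0, 1.5), 2))  # shoe size 38 vs. reference -> +0.67 SDS
```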
Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos
2014-04-01
This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
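One ingredient of the method, scoring a test image by how far it falls outside a PCA model of the normative set, can be sketched as below. The iterative subspace sampling and target-specific feature selection of the full method are omitted, and the data are synthetic.

```python
# Sketch of PCA-based abnormality scoring: fit a PCA subspace to normative
# data, keep only as many components as the sample size supports, and score
# a test image by its residual energy outside that subspace.
import numpy as np

rng = np.random.default_rng(4)
normative = rng.normal(0, 1, (40, 500))  # 40 healthy subjects, 500 voxels

mean = normative.mean(axis=0)
X = normative - mean
n_keep = 10  # crude "estimability" cap relative to sample size
_, _, Vt = np.linalg.svd(X, full_matrices=False)
basis = Vt[:n_keep]

def abnormality(image):
    """Residual norm outside the normative PCA subspace."""
    centered = image - mean
    projected = basis.T @ (basis @ centered)
    return np.linalg.norm(centered - projected)

healthy_like = rng.normal(0, 1, 500)
lesioned = healthy_like.copy()
lesioned[100:120] += 6.0  # simulated focal abnormality
print(f"healthy-like: {abnormality(healthy_like):.1f}, "
      f"lesioned: {abnormality(lesioned):.1f}")
```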
Bed net ownership in Kenya: the impact of 3.4 million free bed nets.
Hightower, Allen; Kiptui, Rebecca; Manya, Ayub; Wolkon, Adam; Vanden Eng, Jodi Leigh; Hamel, Mary; Noor, Abdisalan; Sharif, Shahnaz K; Buluma, Robert; Vulule, John; Laserson, Kayla; Slutsker, Laurence; Akhwale, Willis
2010-06-24
In July and September 2006, 3.4 million long-lasting insecticide-treated bed nets (LLINs) were distributed free in a campaign targeting children 0-59 months old (CU5s) in the 46 districts with malaria in Kenya. A survey was conducted one month after the distribution to evaluate who received campaign LLINs, who owned insecticide-treated bed nets and other bed nets received through other channels, and how these nets were being used. The feasibility of a distribution strategy aimed at a high-risk target group to meet bed net ownership and usage targets is evaluated. A stratified, two-stage cluster survey sampled districts and enumeration areas with probability proportional to size. Handheld computers (PDAs) with attached global positioning systems (GPS) were used to develop the sampling frame, guide interviewers back to chosen households, and collect survey data. In targeted areas, 67.5% (95% CI: 64.6, 70.3%) of all households with CU5s received campaign LLINs. Including previously owned nets, 74.4% (95% CI: 71.8, 77.0%) of all households with CU5s had an ITN. Over half of CU5s (51.7%, 95% CI: 48.8, 54.7%) slept under an ITN during the previous evening. Nearly forty percent (39.1%) of all households received a campaign net, elevating overall household ownership of ITNs to 50.7% (95% CI: 48.4, 52.9%). The campaign was successful in reaching the target population, families with CU5s, the risk group most vulnerable to malaria. Targeted distribution strategies will help Kenya approach indicator targets, but will need to be combined with other strategies to achieve desired population coverage levels.
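The first sampling stage, drawing clusters with probability proportional to size (PPS), can be illustrated with a simplified draw; district names and sizes below are invented, and a production PPS design would typically use systematic selection to keep inclusion probabilities exactly proportional.

```python
# Simplified illustration of probability-proportional-to-size (PPS)
# cluster selection for the first stage of a two-stage survey.
import numpy as np

rng = np.random.default_rng(6)
districts = ["A", "B", "C", "D", "E"]
households = np.array([12000, 8000, 25000, 5000, 15000])

probs = households / households.sum()  # selection weight proportional to size
chosen = rng.choice(districts, size=2, replace=False, p=probs)
print("sampled districts:", chosen)
```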
Terrestrial-passage theory: failing a test.
Reed, Charles F; Krupinski, Elizabeth A
2009-01-01
Terrestrial-passage theory proposes that the 'moon' and 'sky' illusions occur because observers learn to expect an elevation-dependent transformation of visual angle. The transformation accompanies daily movement through ordinary environments of fixed-altitude objects. Celestial objects display the same visual angle at all elevations, and hence necessarily fail to conform to the ordinary transformation. On this hypothesis, observers should perceive angular sizes as greater at elevation than at the horizon. However, in a sample of forty-eight observers there was no significant difference between the perceived angular size of a constellation of stars at the horizon and that predicted for a specific elevation. Occurrence of the illusion was not restricted to those observers who expected angular expansion. These findings fail to support the terrestrial-passage theory of the illusion.
A sensitive EUV Schwarzschild microscope for plasma studies with sub-micrometer resolution
Zastrau, U.; Rodel, C.; Nakatsutsumi, M.; ...
2018-02-05
We present an extreme ultraviolet (EUV) microscope using a Schwarzschild objective which is optimized for single-shot sub-micrometer imaging of laser-plasma targets. The microscope has been designed and constructed for imaging the scattering from an EUV-heated solid-density hydrogen jet. Here, imaging of a cryogenic hydrogen target was demonstrated using single pulses of the free-electron laser in Hamburg (FLASH) at a wavelength of 13.5 nm. In a single exposure, we observe a hydrogen jet with ice fragments with a spatial resolution in the sub-micrometer range. In situ EUV imaging is expected to enable novel experimental capabilities for warm dense matter studies of micrometer-sized samples in laser-plasma experiments.
NASA Astrophysics Data System (ADS)
Ragland, S.; Traub, W. A.; Berger, J.-P.; Danchi, W. C.; Monnier, J. D.; Willson, L. A.; Carleton, N. P.; Lacasse, M. G.; Millan-Gabet, R.; Pedretti, E.; Schloerb, F. P.; Cotton, W. D.; Townes, C. H.; Brewer, M.; Haguenauer, P.; Kern, P.; Labeye, P.; Malbet, F.; Malin, D.; Pearlman, M.; Perraut, K.; Souccar, K.; Wallace, G.
2006-11-01
We have measured nonzero closure phases for about 29% of our sample of 56 nearby asymptotic giant branch (AGB) stars, using the three-telescope Infrared Optical Telescope Array (IOTA) interferometer at near-infrared wavelengths (H band) and with angular resolutions in the range 5-10 mas. These nonzero closure phases can only be generated by asymmetric brightness distributions of the target stars or their surroundings. We discuss how these results were obtained and how they might be interpreted in terms of structures on or near the target stars. We also report measured angular sizes and hypothesize that most Mira stars would show detectable asymmetry if observed with adequate angular resolution.
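Why a nonzero closure phase implies asymmetry can be seen from a small worked example: for a point-symmetric brightness distribution all visibilities are real, so the phase of the triple product around any telescope triangle is 0° or 180°, while telescope-dependent phase errors cancel in the sum. The visibilities below are invented for illustration.

```python
# Closure phase = phase of the triple product of visibilities around a
# telescope triangle; telescope-dependent phase errors cancel in the sum.
import numpy as np

def closure_phase(v12, v23, v31):
    """Phase of the visibility triple product, in degrees."""
    return np.angle(v12 * v23 * v31, deg=True)

# symmetric source: real (possibly negative) visibilities
print(closure_phase(0.9, -0.4, 0.7))           # -> 180.0, consistent with symmetry

# asymmetric source: phases no longer cancel
v12 = 0.9 * np.exp(1j * np.deg2rad(30))
v23 = 0.6 * np.exp(1j * np.deg2rad(-10))
v31 = 0.7 * np.exp(1j * np.deg2rad(5))
print(round(closure_phase(v12, v23, v31), 1))  # -> 25.0, a nonzero closure phase
```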
SU-E-J-188: Theoretical Estimation of Margin Necessary for Markerless Motion Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, R; Block, A; Harkenrider, M
2015-06-15
Purpose: To estimate the margin necessary to adequately cover the target using markerless motion tracking (MMT) of lung lesions, given the uncertainty in tracking and the size of the target. Methods: Simulations were developed in Matlab to determine the effect of tumor size and tracking uncertainty on the margin necessary to achieve adequate coverage of the target. For simplicity, the lung tumor was approximated by a circle on a 2D radiograph. The tumor diameter was varied from 0.1 to 30 mm in increments of 0.1 mm. From our previous studies using dual-energy markerless motion tracking, we estimated tracking uncertainties in x and y to have a standard deviation of 2 mm. A Gaussian was used to simulate the deviation between the tracked location and the true target location. For each tumor size, 100,000 deviations were randomly generated, and the margin necessary to achieve at least 95% coverage 95% of the time was recorded. Additional simulations were run for varying uncertainties to demonstrate the effect of tracking accuracy on margin size. Results: The simulations showed an inverse relationship between tumor size and the margin necessary to achieve 95% coverage 95% of the time using the MMT technique: the margin decreased exponentially with target size. An increase in tracking accuracy expectedly produced a decrease in margin size as well. Conclusion: In our clinic a 5 mm expansion of the internal target volume (ITV) is used to define the planning target volume (PTV). These simulations show that for tracking accuracies in x and y better than 2 mm, the required margin is less than 5 mm. This simple simulation can provide physicians with a guideline estimate of the margin necessary for clinical use of MMT, based on the accuracy of their tracking and the size of the tumor.
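The described simulation (which the authors ran in Matlab) is easy to reproduce in outline. The Python sketch below uses a coarser grid and far fewer trials than the abstract's 100,000, so its numbers are indicative only.

```python
# Sketch of the coverage simulation: a circular target, 2 mm Gaussian
# tracking errors in x and y, and a search for the smallest margin giving
# >= 95% geometric coverage in >= 95% of simulated trials.
import numpy as np

rng = np.random.default_rng(5)
sigma = 2.0     # mm, tracking uncertainty in x and y (abstract's estimate)
n_trials = 500  # abstract used 100,000; reduced to keep the sketch fast

def coverage(radius, margin, dx, dy):
    """Fraction of the circular target covered by the shifted, expanded field."""
    g = np.linspace(-radius, radius, 101)
    xs, ys = np.meshgrid(g, g)
    target = xs**2 + ys**2 <= radius**2
    field = (xs - dx)**2 + (ys - dy)**2 <= (radius + margin)**2
    return (target & field).sum() / target.sum()

def required_margin(diameter):
    """Smallest margin with >=95% coverage in >=95% of trials."""
    dev = rng.normal(0, sigma, (n_trials, 2))
    for margin in np.arange(0.0, 10.5, 0.5):
        frac = np.mean([coverage(diameter / 2, margin, dx, dy) >= 0.95
                        for dx, dy in dev])
        if frac >= 0.95:
            return margin
    return float("inf")

for d in (5.0, 15.0, 30.0):
    print(f"diameter {d:4.1f} mm -> margin ~ {required_margin(d):.1f} mm")
```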
Using long ssDNA polynucleotides to amplify STRs loci in degraded DNA samples
Pérez Santángelo, Agustín; Corti Bielsa, Rodrigo M.; Sala, Andrea; Ginart, Santiago; Corach, Daniel
2017-01-01
Obtaining informative short tandem repeat (STR) profiles from degraded DNA samples is a challenging task, usually undermined by locus or allele dropouts and peak-height imbalances observed in capillary electrophoresis (CE) electropherograms, especially for markers with large amplicon sizes. We hereby show that current STR assays may be greatly improved for the detection of genetic markers in degraded DNA samples by using long single-stranded DNA polynucleotides (ssDNA polynucleotides) as surrogates for PCR primers. These long primers allow a closer annealing to the repeat sequences, thereby reducing the length of the template required for amplification in fragmented DNA samples, while at the same time yielding amplicons of larger sizes suitable for multiplex assays. We also demonstrate that the annealing of long ssDNA polynucleotides does not need to be fully complementary in the 5' region of the primers, thus allowing the design of practically any long primer sequence for developing new multiplex assays. Furthermore, genotyping of intact DNA samples could also benefit from long primers, since their close annealing to the target STR sequences may overcome the incorrect profiling generated by insertions/deletions present between the STR region and the annealing site of the primers. Additionally, long ssDNA polynucleotides might be utilized in multiplex PCR assays for other types of degraded or fragmented DNA, e.g. circulating, cell-free DNA (ccfDNA). PMID:29099837
Kashiwagi, Tom; Maxwell, Elisabeth A; Marshall, Andrea D; Christensen, Ana B
2015-01-01
Sharks and rays are increasingly being identified as high-risk species for extinction, prompting urgent assessments of their local or regional populations. Advanced genetic analyses can contribute relevant information on effective population size and connectivity among populations, although acquiring sufficient regional sample sizes can be challenging. DNA is typically amplified from tissue samples, which are collected by hand spears with modified biopsy punch tips. This technique is not always popular, due mainly to a perception that invasive sampling might harm the rays, change their behaviour, or have a negative impact on tourism. To explore alternative methods, we evaluated the yields and PCR success of DNA template prepared from manta ray mucus collected underwater and captured and stored on a Whatman FTA™ Elute card. The pilot study demonstrated that mucus can be effectively collected underwater using a toothbrush. DNA stored on cards was found to be reliable for PCR-based population genetics studies. We successfully amplified mtDNA ND5, nuclear DNA RAG1, and microsatellite loci for all samples and confirmed that the sequences and genotypes were those of the target species. As the yields of DNA with the tested method were low, further improvements are desirable for assays that may require larger amounts of DNA, such as population genomic studies using emerging next-gen sequencing.
Laser-induced surface modification of metals and alloys in liquid argon medium
NASA Astrophysics Data System (ADS)
Kazakevich, V. S.; Kazakevich, P. V.; Yaresko, P. S.; Kamynina, D. A.
2016-08-01
Micro- and nanostructuring of metal and alloy surfaces (Ti, Mo, Ni, T30K4) by subnanosecond laser radiation was studied in stationary and dynamic modes in liquid argon, ethanol, and air. Dependences of the size of the surface structures on the energy density and the number of pulses were obtained. Non-periodic (NSS) and periodic (PSS) surface structures with periods of about λ to λ/2 were produced. PSS formation took place at the target surface as well as at the NSS surface.
Matharu, Zimple; Daggumati, Pallavi; Wang, Ling; Dorofeeva, Tatiana S; Li, Zidong; Seker, Erkin
2017-04-19
Nanoporous gold (np-Au) electrode coatings significantly enhance the performance of electrochemical nucleic acid biosensors because of their three-dimensional nanoscale network, high electrical conductivity, facile surface functionalization, and biocompatibility. Contrary to planar electrodes, the np-Au electrodes also exhibit sensitive detection in the presence of common biofouling media due to their porous structure. However, the pore size of the nanomatrix plays a critical role in dictating the extent of biomolecular capture and transport. Small pores perform better in the case of target detection in complex samples by filtering out the large nonspecific proteins. On the other hand, larger pores increase the accessibility of target nucleic acids in the nanoporous structure, enhancing the detection limits of the sensor at the expense of more interference from biofouling molecules. Here, we report a microfabricated np-Au multiple electrode array that displays a range of electrode morphologies on the same chip for identifying feature sizes that reduce the nonspecific adsorption of proteins but facilitate the permeation of target DNA molecules into the pores. We demonstrate the utility of the electrode morphology library in studying DNA functionalization and target detection in complex biological media with a special emphasis on revealing ranges of electrode morphologies that mutually enhance the limit of detection and biofouling resilience. We expect this technique to assist in the development of high-performance biosensors for point-of-care diagnostics and facilitate studies on the electrode structure-property relationships in potential applications ranging from neural electrodes to catalysts.
NASA Astrophysics Data System (ADS)
Bijl, Piet; Reynolds, Joseph P.; Vos, Wouter K.; Hogervorst, Maarten A.; Fanning, Jonathan D.
2011-05-01
The TTP (Targeting Task Performance) metric, developed at NVESD, is the current standard US Army model to predict EO/IR Target Acquisition performance. This model however does not have a corresponding lab or field test to empirically assess the performance of a camera system. The TOD (Triangle Orientation Discrimination) method, developed at TNO in The Netherlands, provides such a measurement. In this study, we make a direct comparison between TOD performance for a range of sensors and the extensive historical US observer performance database built to develop and calibrate the TTP metric. The US perception data were collected doing an identification task by military personnel on a standard 12 target, 12 aspect tactical vehicle image set that was processed through simulated sensors for which the most fundamental sensor parameters such as blur, sampling, spatial and temporal noise were varied. In the present study, we measured TOD sensor performance using exactly the same sensors processing a set of TOD triangle test patterns. The study shows that good overall agreement is obtained when the ratio between target characteristic size and TOD test pattern size at threshold equals 6.3. Note that this number is purely based on empirical data without any intermediate modeling. The calibration of the TOD to the TTP is highly beneficial to the sensor modeling and testing community for a variety of reasons. These include: i) a connection between requirement specification and acceptance testing, and ii) a very efficient method to quickly validate or extend the TTP range prediction model to new systems and tasks.
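The reported calibration, target characteristic size ≈ 6.3 × TOD triangle size at threshold, lets a measured TOD threshold be turned into a range estimate for a given task. The sensor and target numbers below are invented for illustration; only the 6.3 ratio comes from the study.

```python
# Illustrative use of the empirical TOD-to-TTP calibration (ratio ~ 6.3):
# map a measured TOD angular threshold to an identification-range estimate.
tod_threshold_size_mrad = 0.12  # smallest resolvable triangle (assumed sensor value)
target_size_m = 3.0             # characteristic size of a tactical vehicle (assumed)

ratio = 6.3                     # size ratio at threshold, from the study
resolvable_target_angle_mrad = ratio * tod_threshold_size_mrad
identification_range_km = target_size_m / resolvable_target_angle_mrad  # m/mrad = km
print(f"predicted identification range ~ {identification_range_km:.1f} km")
```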
Alcohol marketing research: the need for a new agenda.
Meier, Petra S
2011-03-01
This paper aims to contribute to a rethink of marketing research priorities to address policy makers' evidence needs in relation to alcohol marketing. Discussion paper reviewing evidence gaps identified during an appraisal of policy options to restrict alcohol marketing. Evidence requirements can be categorized as follows: (i) the size of marketing effects for the whole population and for policy-relevant population subgroups, (ii) the balance between immediate and long-term effects and the time lag, duration and cumulative build-up of effects and (iii) comparative effects of partial versus comprehensive marketing restrictions on consumption and harm. These knowledge gaps impede the appraisal and evaluation of existing and new interventions, because without understanding the size and timing of expected effects, researchers may choose inadequate time-frames, samples or sample sizes. To date, research has tended to rely on simplified models of marketing and has focused disproportionately on youth populations. The effects of cumulative exposure across multiple marketing channels, targeting of messages at certain population groups and indirect effects of advertising on consumption remain unclear. It is essential that studies into marketing effect sizes are geared towards informing policy decision-makers, anchored strongly in theory, use measures of effect that are well-justified and recognize fully the complexities of alcohol marketing efforts. © 2010 The Author, Addiction © 2010 Society for the Study of Addiction.
El-Ocla, Hosam
2006-08-01
The characteristics of the radar cross section (RCS) of partially convex targets with large sizes, up to five wavelengths, in free space and random media are studied. The nature of the incident wave is an important factor in remote sensing and radar detection applications. I investigate the effects of beam wave incidence on the behavior of the RCS, drawing on the method I used in a previous study on plane-wave incidence. A beam wave can be considered a plane wave if the target size is smaller than the beam width; therefore, to have a beam wave with a limited spot on the target, the target size should be larger than the beam width (assuming E-wave incidence polarization). The effects of the target configuration, the random medium parameters, and the beam width on the laser RCS and on the enhancement in the radar cross section are numerically analyzed, showing the possibility of exercising some control over radar detection using beam wave incidence.
Flexible cue combination in the guidance of attention in visual search
Brand, John; Oriet, Chris; Johnson, Aaron P.; Wolfe, Jeremy M.
2014-01-01
Hodsoll and Humphreys (2001) have assessed the relative contributions of stimulus-driven and user-driven knowledge on linearly and nonlinearly separable search. However, the target feature used to determine linear separability in their task (i.e., target size) was required to locate the target. In the present work, we investigated the contributions of stimulus-driven and user-driven knowledge when a linearly or nonlinearly separable feature is available but not required for target identification. We asked observers to complete a series of standard color × orientation conjunction searches in which target size was either linearly or nonlinearly separable from the size of the distractors. When guidance by color × orientation and by size information are both available, observers rely on whichever information results in the best search efficiency. This is the case irrespective of whether we provide target foreknowledge by blocking stimulus conditions, suggesting that feature information is used in both a stimulus-driven and user-driven fashion. PMID:25463553
Imaging Extended Emission-Line Regions of Obscured AGN with the Subaru Hyper Suprime-Cam Survey
NASA Astrophysics Data System (ADS)
Sun, Ai-Lei; Greene, Jenny E.; Zakamska, Nadia L.; Goulding, Andy; Strauss, Michael A.; Huang, Song; Johnson, Sean; Kawaguchi, Toshihiro; Matsuoka, Yoshiki; Marsteller, Alisabeth A.; Nagao, Tohru; Toba, Yoshiki
2018-05-01
Narrow-line regions excited by active galactic nuclei (AGN) are important for studying AGN photoionization and feedback. Their strong [O III] lines can be detected with broadband images, allowing morphological studies of these systems with large-area imaging surveys. We develop a new broad-band imaging technique to reconstruct images of the [O III] line using the Subaru Hyper Suprime-Cam (HSC) Survey aided with spectra from the Sloan Digital Sky Survey (SDSS). The technique involves a careful subtraction of the galactic continuum to isolate emission from the [O III]λ5007 and [O III]λ4959 lines. Compared to traditional targeted observations, this technique is more efficient at covering larger samples without dedicated observational resources. We apply this technique to an SDSS spectroscopically selected sample of 300 obscured AGN at redshifts 0.1–0.7, uncovering extended emission-line region candidates with sizes up to tens of kpc. With the largest sample of uniformly derived narrow-line region sizes, we revisit the narrow-line region size–luminosity relation. The area and radii of the [O III] emission-line regions are strongly correlated with the AGN luminosity inferred from the mid-infrared (15 μm rest-frame), with a power-law slope of 0.62^{+0.05}_{-0.06} ± 0.10 (statistical and systematic errors), consistent with previous spectroscopic findings. We discuss the implications for the physics of AGN emission-line regions and future applications of this technique, which should be useful for current and next-generation imaging surveys to study AGN photoionization and feedback with large statistical samples.
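The reported power law can be turned into a simple scaling rule. The snippet below is a minimal sketch: only the slope of 0.62 comes from the abstract, while the normalization values (r_ref at l_ref) are hypothetical placeholders.

```python
def nlr_radius_kpc(l_mir, l_ref=1e44, r_ref=5.0, slope=0.62):
    """Scale a narrow-line-region radius with mid-IR AGN luminosity:
    R = r_ref * (L / l_ref)**slope. The normalization (r_ref kpc at
    l_ref erg/s) is an illustrative assumption, not from the paper."""
    return r_ref * (l_mir / l_ref) ** slope

# An AGN ten times more luminous than the reference point:
print(f"{nlr_radius_kpc(1e45):.1f} kpc")  # 5 * 10**0.62 ≈ 20.8 kpc
```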
Kumar, S; Panwar, J; Vyas, A; Sharma, J; Goutham, B; Duraiswamy, P; Kulkarni, S
2011-02-01
The aim of the study was to determine whether the frequency of tooth cleaning varies with social group, family size, bedtime and other personal hygiene habits among school children. The target population comprised schoolchildren aged 8-16 years of Udaipur district attending public schools. A two-stage cluster random sampling procedure was executed to collect a representative sample; the final sample comprised 852 children. Data were collected by means of structured questionnaires consisting of questions on oral hygiene habits, a few general hygiene habits, bedtime, family size, family income and dental visiting habits. The results show that 30.5% of the total sample cleaned their teeth twice or more daily and that there was no significant difference between the genders in tooth cleaning frequency. Logistic regression analysis revealed that older children and those having fewer than two siblings were more likely to clean their teeth twice a day than younger children and children with more than two siblings. Furthermore, the frequency of tooth cleaning was significantly lower among children of parents with low levels of education and lower annual income compared with those of parents with higher education and higher annual income. In addition, tooth cleaning habits were more regular in children who used toothpaste and visited the dentist regularly. This study observed that tooth cleaning is not an isolated behaviour but part of a broader pattern of social and behavioural factors. © 2009 The Authors. Journal compilation © 2009 Blackwell Munksgaard.
Thin film surface treatments for lowering dust adhesion on Mars Rover calibration targets
NASA Astrophysics Data System (ADS)
Sabri, F.; Werhner, T.; Hoskins, J.; Schuerger, A. C.; Hobbs, A. M.; Barreto, J. A.; Britt, D.; Duran, R. A.
The current generation of calibration targets on the Mars Rovers serves as a color and radiometric reference for the panoramic camera. They consist of a transparent silicone-based polymer tinted with either color or grey-scale pigments and cast with a microscopically rough Lambertian surface for a diffuse reflectance pattern. This material has successfully withstood the harsh conditions on Mars. However, the inherent roughness of the Lambertian surface (relative to the particle size of the Martian airborne dust) and the tackiness of the polymer have led to a serious dust accumulation problem on the calibration targets. In this work, non-invasive thin film technology was successfully implemented in the design of future-generation calibration targets, leading to a significant reduction in dust adhesion and capture. The new design consists of a μm-thick interfacial layer capped with a nm-thick optically transparent layer of pure metal. The combination of these two additional layers is effective in burying the relatively rough Lambertian surface while maintaining the diffuse reflectance properties of the samples, which is central to their correct operation as calibration targets. A set of these targets is scheduled for flight on the Mars Phoenix mission.
Demidov, German; Simakova, Tamara; Vnuchkova, Julia; Bragin, Anton
2016-10-22
Multiplex polymerase chain reaction (PCR) is a common enrichment technique for targeted massive parallel sequencing (MPS) protocols. MPS is widely used in biomedical research and clinical diagnostics as a fast and accurate tool for the detection of short genetic variations. However, identification of larger variations, such as structural variants and copy number variations (CNVs), remains a challenge for targeted MPS. Some approaches and tools for structural variant detection have been proposed, but they have limitations and often require datasets of a certain type, size and expected number of amplicons affected by CNVs. In this paper, we describe a novel algorithm for high-resolution germline CNV detection in PCR-enriched targeted sequencing data and present the accompanying tool. We have developed a machine learning algorithm for the detection of large duplications and deletions in targeted sequencing data generated with a PCR-based enrichment step. We have performed verification studies and established the algorithm's sensitivity and specificity. We have compared the developed tool with other available methods applicable to the described data and found it to perform better. We showed that our method has high specificity and sensitivity for high-resolution copy number detection in targeted sequencing data, using a large cohort of samples.
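For orientation, the sketch below shows the general shape of coverage-based CNV screening on amplicon read counts: per-sample normalization followed by log-ratio thresholding against a cohort reference. It is a toy stand-in for illustration only, not the published machine-learning algorithm.

```python
import numpy as np

def flag_cnv_amplicons(counts, log2_cutoff=0.58):
    """Toy CNV screen on a samples x amplicons read-count matrix.

    Only sketches the idea of coverage normalization plus log-ratio
    thresholding; the cutoff (~1.5x change) is illustrative."""
    counts = np.asarray(counts, dtype=float)
    # Normalize out per-sample library size.
    norm = counts / counts.sum(axis=1, keepdims=True)
    # Compare each sample to the cohort median per amplicon.
    ref = np.median(norm, axis=0)
    log2_ratio = np.log2(norm / ref)
    return log2_ratio, np.abs(log2_ratio) > log2_cutoff

counts = [[100, 120, 80], [95, 130, 85], [105, 125, 40]]  # 3rd amplicon low in sample 3
ratios, flags = flag_cnv_amplicons(counts)
print(flags)  # flags the candidate deletion
```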
Hollow silica microspheres for buoyancy-assisted separation of infectious pathogens from stool.
Weigum, Shannon E; Xiang, Lichen; Osta, Erica; Li, Linying; López, Gabriel P
2016-09-30
Separation of cells and microorganisms from complex biological mixtures is a critical first step in many analytical applications ranging from clinical diagnostics to environmental monitoring for food and waterborne contaminants. Yet existing techniques for cell separation are plagued by high reagent and/or instrumentation costs that limit their use in many remote or resource-poor settings, such as field clinics or developing countries. We developed an innovative approach to isolate infectious pathogens from biological fluids using buoyant hollow silica microspheres that function as "molecular buoys" for affinity-based target capture and separation by flotation. In this process, antibody-functionalized glass microspheres are mixed with a complex biological sample, such as stool. When mixing is stopped, the target-bound, low-density microspheres float to the air/liquid surface, which simultaneously isolates and concentrates the target analytes from the sample matrix. The microspheres are highly tunable in terms of size, density, and surface functionality for targeting diverse analytes, with separation times of ≤ 2 min in viscous solutions. We have applied the molecular buoy technique to the isolation of a protozoan parasite that causes diarrheal illness, Cryptosporidium, directly from stool, with separation efficiencies over 90% and low non-specific binding. This low-cost method for phenotypic cell/pathogen separation from complex mixtures is expected to have widespread use in clinical diagnostics as well as basic research. Copyright © 2016 Elsevier B.V. All rights reserved.
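The ≤ 2 min flotation time is consistent with simple Stokes-law buoyancy. The sketch below computes the terminal rise speed of a buoyant sphere; all parameter values (sphere radius and effective density, fluid density and viscosity) are illustrative assumptions, not values from the paper.

```python
def stokes_rise_velocity(radius_m, rho_fluid, rho_sphere, mu):
    """Terminal rise speed (m/s) of a buoyant sphere in Stokes flow:
    v = 2 r^2 (rho_fluid - rho_sphere) g / (9 mu)."""
    g = 9.81
    return 2 * radius_m**2 * (rho_fluid - rho_sphere) * g / (9 * mu)

# Hypothetical 25-um-radius hollow silica sphere (effective density
# 300 kg/m^3) in a stool suspension ~10x more viscous than water:
v = stokes_rise_velocity(25e-6, 1050.0, 300.0, 1e-2)
print(f"{v*1e3:.2f} mm/s -> {0.01 / v:.0f} s to rise 1 cm")  # ~100 s
```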
Choi, Jane Ru; Yong, Kar Wey; Tang, Ruihua; Gong, Yan; Wen, Ting; Yang, Hui; Li, Ang; Chia, Yook Chin; Pingguan-Murphy, Belinda; Xu, Feng
2017-01-01
Paper-based devices have been broadly used for the point-of-care detection of dengue viral nucleic acids due to their simplicity, cost-effectiveness, and readily observable colorimetric readout. However, their moderate sensitivity and functionality have limited their applications. Despite the above-mentioned advantages, paper substrates are lacking in their ability to control fluid flow, in contrast to the flow control enabled by polymer substrates (e.g., agarose) with readily tunable pore size and porosity. Herein, taking the benefits from both materials, the authors propose a strategy to create a hybrid substrate by incorporating agarose into the test strip to achieve flow control for optimal biomolecule interactions. As compared to the unmodified test strip, this strategy allows sensitive detection of targets with an approximately tenfold signal improvement. Additionally, the authors showcase the potential of functionality improvement by creating multiple test zones for semi-quantification of targets, suggesting that the number of visible test zones is directly proportional to the target concentration. The authors further demonstrate the potential of their proposed strategy for clinical assessment by applying it to their prototype sample-to-result test strip to sensitively and semi-quantitatively detect dengue viral RNA from the clinical blood samples. This proposed strategy holds significant promise for detecting various targets for diverse future applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A universal TaqMan-based RT-PCR protocol for cost-efficient detection of small noncoding RNA.
Jung, Ulrike; Jiang, Xiaoou; Kaufmann, Stefan H E; Patzel, Volker
2013-12-01
Several methods for the detection of RNA have been developed over time. For small RNA detection, a stem-loop reverse-primer-based protocol relying on TaqMan RT-PCR has been described. This protocol requires an individual specific TaqMan probe for each target RNA and, hence, is highly cost-intensive for experiments with small sample sizes or large numbers of different samples. We describe a universal TaqMan-based probe protocol which can be used to detect any target sequence and demonstrate its applicability for the detection of endogenous as well as artificial eukaryotic and bacterial small RNAs. While the specific and the universal probe-based protocols showed the same sensitivity, the absolute detection sensitivity of both was found to be more than 100-fold lower than previously reported. In subsequent experiments, we found previously unknown limitations intrinsic to the method that affect its feasibility for determining mature template RISC incorporation as well as for multiplexing. Both protocols were equally specific in discriminating between correct and incorrect small RNA targets or between mature miRNA and its unprocessed RNA precursor, indicating that the stem-loop RT primer, and not the TaqMan probe, confers target specificity. The presented universal TaqMan-based RT-PCR protocol represents a cost-efficient method for the detection of small RNAs.
Preparation and Characterization of an Amphipathic Magnetic Nanosphere
Ji, Yongsheng; Lv, Ruihong; Xu, Zhigang; Zhao, Chuande; Zhang, Haixia
2014-01-01
Amphipathic magnetic nanospheres were synthesized using C8 and polyethylene glycol as ligands. Their morphology, structure, and composition were characterized by transmission electron microscopy, Fourier transform infrared spectroscopy, and elemental analysis. The prepared materials presented as uniform spheres with a size distribution around 200 nm. The magnetic characteristics of the nanomaterials were measured by vibrating sample magnetometry. The target products had a saturation magnetization value of 50 emu g−1 and exhibited superparamagnetism. The adsorption capability was also studied by static tests, and the material was applied to enrich benzenesulfonamide from calf serum. The results showed that the C8-PEG phase had better adsorption capability, biocompatibility, and dispersivity in aqueous samples. PMID:24729917
[Methodological design of the National Health and Nutrition Survey 2016].
Romero-Martínez, Martín; Shamah-Levy, Teresa; Cuevas-Nasu, Lucía; Gómez-Humarán, Ignacio Méndez; Gaona-Pineda, Elsa Berenice; Gómez-Acosta, Luz María; Rivera-Dommarco, Juan Ángel; Hernández-Ávila, Mauricio
2017-01-01
To describe the design methodology of the 2016 halfway National Health and Nutrition Survey (Ensanut-MC). The Ensanut-MC is a national probabilistic survey whose target population is the inhabitants of private households in Mexico. The sample size was determined to allow inferences on urban and rural areas in four regions. We describe the main design elements: target population, topics of study, sampling procedure, measurement procedure and logistics organization. The final sample comprised 9 479 completed household interviews and 16 591 individual interviews. The response rate was 77.9% for households and 91.9% for individuals. The Ensanut-MC probabilistic design allows valid statistical inferences about parameters of interest for Mexico's public health and nutrition, specifically on overweight, obesity and diabetes mellitus. The updated information also supports the monitoring, updating and formulation of new policies and priority programs.
Application of an ultrasonic focusing radiator for acoustic levitation of submillimeter samples
NASA Technical Reports Server (NTRS)
Lee, M. C.
1981-01-01
An acoustic apparatus has been specifically developed to handle samples of submillimeter size in a gaseous medium. This apparatus consists of an acoustic levitation device, deployment devices for small liquid and solid samples, heat sources for sample heat treatment, acoustic alignment devices, a cooling system and data-acquisition instrumentation. The levitation device includes a spherical aluminum dish of 12 in. diameter and 0.6 in. thickness, 130 PZT transducer elements attached to the back side of the dish, and a spherical concave reflector situated in the vicinity of the center of curvature of the dish. The three lowest operating frequencies for the focusing-radiator levitation device are 75, 105 and 163 kHz, respectively. In comparison with other levitation apparatus, it possesses a large radiation pressure and a high lateral positional stability. This apparatus can be used most advantageously in the study of droplets and spherical shell systems, for instance for fusion target applications.
Chen, Yumin; Fritz, Ronald D; Kock, Lindsay; Garg, Dinesh; Davis, R Mark; Kasturi, Prabhakar
2018-02-01
A step-wise, 'test-all-positive-gluten' analytical methodology has been developed and verified to assess kernel-based gluten contamination (i.e., wheat, barley and rye kernels) during gluten-free (GF) oat production. It targets GF claim compliance at the serving-size level (a pouch, or approximately 40-50 g). Oat groats are collected from GF oat production following a robust attribute-based sampling plan, then split into 75-g subsamples and ground. The R-Biopharm R5 sandwich ELISA R7001 is used for analysis of the first 15-g portion of each ground sample. A >20-ppm result disqualifies the production lot, while a >5 to <20-ppm result triggers complete analysis of the remaining 60 g of ground sample, analyzed in 15-g portions. If all five 15-g test results are <20 ppm, and their average is <10.67 ppm (since a 20-ppm contaminant in 40 g of oats would dilute to 10.67 ppm in 75 g), the lot is passed. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
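The 10.67-ppm averaging threshold follows from mass balance: a contaminant at 20 ppm across a 40-g serving carries 20e-6 × 40 g = 0.8 mg of gluten, which dilutes to 0.8 mg / 75 g ≈ 10.67 ppm in the ground subsample. Below is a minimal sketch of the step-wise decision logic as we read it from the abstract; the numeric inputs are invented for illustration.

```python
def lot_decision(first_15g_ppm, remaining_ppm=None):
    """Sketch of the step-wise 'test-all-positive' logic: reject on
    >20 ppm, pass on <=5 ppm, otherwise test the remaining four 15-g
    portions and require all <20 ppm with mean <10.67 ppm."""
    if first_15g_ppm > 20:
        return "reject lot"
    if first_15g_ppm <= 5:
        return "pass"
    results = [first_15g_ppm] + list(remaining_ppm)
    if all(r < 20 for r in results) and sum(results) / len(results) < 10.67:
        return "pass"
    return "reject lot"

print(lot_decision(8.0, [6.0, 4.0, 12.0, 9.0]))  # mean 7.8 ppm -> pass
```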
Berger, Cordula; Parson, Walther
2009-06-01
The degradation state of some biological traces recovered from crime scenes requires the amplification of very short fragments to attain a useful mitochondrial (mt)DNA sequence. We have previously introduced two mini-multiplex assays that amplify 10 overlapping control region (CR) fragments in two separate multiplex PCRs, which yielded successful CR consensus sequences even from highly degraded DNA extracts. This procedure requires a total of 20 sequencing reactions per sample, which is laborious and cost-intensive. For the only moderately degraded samples that we encounter more frequently with typical mtDNA casework material, we developed two new multiplex assays that use a subset of the mini-amplicon primers but embrace larger fragments (midis) and require only 10 sequencing reactions to build a double-stranded CR consensus sequence. We used a preceding mtDNA quantitation step by real-time PCR with two different target fragments (143 and 283 bp) that roughly correspond to the average fragment sizes of the two multiplex approaches, to estimate size-dependent mtDNA quantities and to aid the choice of the appropriate PCR multiplexes with respect to quality of the results and required costs.
NASA Astrophysics Data System (ADS)
Mann, Griffin
The area that comprises the Northwest Shelf in Lea Co., New Mexico has been heavily drilled over the past half century, the main targets being shallow reservoirs within the Permian section (San Andres and Grayburg Formations). With focus shifting towards deeper horizons, there is a need for more petrophysical data pertaining to these formations, which this study addresses through a variety of techniques. This study involves the use of contact angle measurements, fluid imbibition tests, Mercury Injection Capillary Pressure (MICP) and log analysis to evaluate the nano-petrophysical properties of the Yeso, Abo and Cisco Formations within the Northwest Shelf area of southeast New Mexico. From contact angle measurements, all of the samples studied were found to be oil-wetting, as n-decane spreads onto the rock surface much more quickly than the other fluids tested (deionized water and API brine). Imbibition tests showed a well-connected pore network for all of the samples, with the highest imbibition slopes recorded for the Abo samples. MICP provided a variety of pore structure data, including porosity, pore-throat size distributions, permeability and tortuosity. The Abo samples had the highest porosity percentages, above 15%, with all the other samples ranging from 4-7%. The majority of the pore-throat sizes for most of the samples fell within the 1-10 μm range. The only exceptions were the Paddock Member within the Yeso Formation, which had a higher percentage of larger pores (10-1000 μm), and one of the Cisco Formation samples, which had the majority of its pore sizes in the 0.1-1 μm range. The log analysis produced calculations and curves for cross-plot porosity and water saturation that were then used to derive a value for permeability. The porosity and permeability values were comparable with those measured from our MICP work and with literature values.
Clustering of quasars in SDSS-IV eBOSS: study of potential systematics and bias determination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurent, Pierre; Goff, Jean-Marc Le; Burtin, Etienne
2017-07-01
We study the first year of the eBOSS quasar sample in the redshift range 0.9 < z < 2.2, which includes 68,772 homogeneously selected quasars. We show that the main source of systematics in the evaluation of the correlation function arises from inhomogeneities in the quasar target selection, particularly related to the extinction and depth of the imaging data used for targeting. We propose a weighting scheme that mitigates these systematics. We measure the quasar correlation function and provide the most accurate measurement to date of the quasar bias in this redshift range, b_Q = 2.45 ± 0.05 at z̄ = 1.55, together with its evolution with redshift. We use this information to determine the minimum mass of the halo hosting the quasars and the characteristic halo mass, which we find to be both independent of redshift within statistical error. Using a recently measured quasar luminosity function we also determine the quasar duty cycle. The size of this first year sample is insufficient to detect any luminosity dependence to quasar clustering and this issue should be further studied with the final ∼500,000 eBOSS quasar sample.
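For orientation, correlation functions of this kind are typically estimated from weighted pair counts. The sketch below shows the standard Landy-Szalay estimator, which we assume here for illustration; the paper's exact estimator and the details of its systematics weighting (which would enter as per-quasar weights when accumulating DD and DR) are in the full text.

```python
def landy_szalay(dd, dr, rr):
    """Landy-Szalay correlation-function estimator from data-data,
    data-random and random-random pair counts, each already
    normalized by its total number of (weighted) pairs."""
    return (dd - 2.0 * dr + rr) / rr

# Illustrative normalized pair counts in one separation bin:
print(landy_szalay(0.0125, 0.0108, 0.0100))  # xi = 0.09
```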
Optimal background matching camouflage.
Michalis, Constantine; Scott-Samuel, Nicholas E; Gibson, David P; Cuthill, Innes C
2017-07-12
Background matching is the most familiar and widespread camouflage strategy: avoiding detection by having a similar colour and pattern to the background. Optimizing background matching is straightforward in a homogeneous environment, or when the habitat has very distinct sub-types and there is divergent selection leading to polymorphism. However, most backgrounds have continuous variation in colour and texture, so what is the best solution? Not all samples of the background are likely to be equally inconspicuous, and laboratory experiments on birds and humans support this view. Theory suggests that the most probable background sample (in the statistical sense), at the size of the prey, would, on average, be the most cryptic. We present an analysis, based on realistic assumptions about low-level vision, that estimates the distribution of background colours and visual textures, and predicts the best camouflage. We present data from a field experiment that tests and supports our predictions, using artificial moth-like targets under bird predation. Additionally, we present analogous data for humans, under tightly controlled viewing conditions, searching for targets on a computer screen. These data show that, in the absence of predator learning, the best single camouflage pattern for heterogeneous backgrounds is the most probable sample. © 2017 The Authors.
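The "most probable sample" prediction can be phrased computationally: estimate the density of prey-sized background patches in some feature space and pick the mode. The sketch below illustrates this with a toy two-feature patch description; the features, data and parameters are invented for illustration and are not the paper's low-level vision model.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Toy background: 500 prey-sized patches summarized by two features
# (mean luminance, contrast) -- stand-ins for richer texture statistics.
patches = rng.multivariate_normal([0.5, 0.2],
                                  [[0.010, 0.002],
                                   [0.002, 0.004]], size=500)

# Estimate the patch density and pick the most probable patch as the
# predicted optimal camouflage.
kde = gaussian_kde(patches.T)
best = patches[np.argmax(kde(patches.T))]
print(f"most probable patch: luminance={best[0]:.2f}, contrast={best[1]:.2f}")
```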
Clustering of quasars in SDSS-IV eBOSS: study of potential systematics and bias determination
NASA Astrophysics Data System (ADS)
Laurent, Pierre; Eftekharzadeh, Sarah; Le Goff, Jean-Marc; Myers, Adam; Burtin, Etienne; White, Martin; Ross, Ashley J.; Tinker, Jeremy; Tojeiro, Rita; Bautista, Julian; Brinkmann, Jonathan; Comparat, Johan; Dawson, Kyle; du Mas des Bourboux, Hélion; Kneib, Jean-Paul; McGreer, Ian D.; Palanque-Delabrouille, Nathalie; Percival, Will J.; Prada, Francisco; Rossi, Graziano; Schneider, Donald P.; Weinberg, David; Yèche, Christophe; Zarrouk, Pauline; Zhao, Gong-Bo
2017-07-01
We study the first year of the eBOSS quasar sample in the redshift range 0.9 < z < 2.2, which includes 68,772 homogeneously selected quasars. We show that the main source of systematics in the evaluation of the correlation function arises from inhomogeneities in the quasar target selection, particularly related to the extinction and depth of the imaging data used for targeting. We propose a weighting scheme that mitigates these systematics. We measure the quasar correlation function and provide the most accurate measurement to date of the quasar bias in this redshift range, b_Q = 2.45 ± 0.05 at z̄ = 1.55, together with its evolution with redshift. We use this information to determine the minimum mass of the halo hosting the quasars and the characteristic halo mass, which we find to be both independent of redshift within statistical error. Using a recently measured quasar luminosity function we also determine the quasar duty cycle. The size of this first year sample is insufficient to detect any luminosity dependence to quasar clustering and this issue should be further studied with the final ∼500,000 eBOSS quasar sample.
NASA Technical Reports Server (NTRS)
Wallace, William T.; Limero, Thomas F.; Gazda, Daniel B.; Macatangay, Ariel V.; Dwivedi, Prabha; Fernandez, Facundo M.
2014-01-01
In the history of manned spaceflight, environmental monitoring has relied heavily on archival sampling. For short missions, this type of sample collection was sufficient; returned samples provided a snapshot of the presence of chemical and biological contaminants in the spacecraft air and water. However, with the construction of the International Space Station (ISS) and the subsequent extension of mission durations, soon to be up to one year, the need for enhanced, real-time environmental monitoring became more pressing. The past several years have seen the implementation of several real-time monitors aboard the ISS, complemented with reduced archival sampling. The station air is currently monitored for volatile organic compounds (VOCs) using gas chromatography-differential mobility spectrometry (Air Quality Monitor [AQM]). The water on ISS is analyzed to measure total organic carbon and biocide concentrations using the Total Organic Carbon Analyzer (TOCA) and the Colorimetric Water Quality Monitoring Kit (CWQMK), respectively. The current air and water monitors provide important data, but the number and size of the different instruments make them impractical for future exploration missions. It is apparent that there is still a need for improvements in environmental monitoring capabilities. One such improvement could be realized by modifying a single instrument to analyze both air and water. As the AQM currently provides quantitative, compound-specific information for target compounds present in air samples, and many of these compounds are also targets for water quality monitoring, this instrument provides a logical starting point to evaluate the feasibility of this approach. In this presentation, we will discuss our recent studies aimed at determining an appropriate method for introducing VOCs from water samples into the gas phase. We will also present our current work, in which an electro-thermal vaporization unit has been interfaced with the AQM to analyze target analytes at the concentrations routinely detected in archival water samples from the ISS.
Open tubular lab-on-column/mass spectrometry for targeted proteomics of nanogram sample amounts.
Hustoft, Hanne Kolsrud; Vehus, Tore; Brandtzaeg, Ole Kristian; Krauss, Stefan; Greibrokk, Tyge; Wilson, Steven Ray; Lundanes, Elsa
2014-01-01
A novel open tubular nanoproteomic platform featuring accelerated on-line protein digestion and high-resolution nano liquid chromatography mass spectrometry (LC-MS) has been developed. The platform features very narrow open tubular columns and is hence particularly suited for limited sample amounts. For enzymatic digestion of proteins, samples are passed through a 20 µm inner diameter (ID) trypsin + endoproteinase Lys-C immobilized open tubular enzyme reactor (OTER). Resulting peptides are subsequently trapped on a monolithic pre-column and transferred on-line to a 10 µm ID porous layer open tubular (PLOT) LC separation column. Wnt/β-catenin signaling pathway (Wnt-pathway) proteins of potentially diagnostic value were digested and detected in targeted-MS/MS mode in small cell samples and tumor tissues within 120 minutes. For example, the potential biomarker Axin1 was identifiable in just 10 ng of sample (protein extract of ∼1,000 HCT15 colon cancer cells). In comprehensive mode, the current OTER-PLOT set-up could be used to identify approximately 1500 proteins in HCT15 cells using a relatively short digestion/detection cycle (240 minutes), outperforming previously reported on-line digestion/separation systems. The platform is fully automated, utilizing common commercial instrumentation and parts, while the reactor and columns are simple to produce and have low carry-over. These initial results point to automated solutions for fast and very sensitive MS-based proteomics, especially for samples of limited size.
On-Chip, Amplification-Free Quantification of Nucleic Acid for Point-of-Care Diagnosis
NASA Astrophysics Data System (ADS)
Yen, Tony Minghung
This dissertation demonstrates three physical device concepts to overcome limitations in point-of-care quantification of nucleic acids. Enabling sensitive, high-throughput nucleic acid quantification on a chip, outside of hospital and centralized laboratory settings, is crucial for improving pathogen detection and cancer diagnosis and prognosis. Among existing platforms, microarrays have the advantages of being amplification-free, low in instrument cost, and high-throughput, but are generally less sensitive than sequencing and PCR assays. To bridge this performance gap, this dissertation presents theoretical and experimental progress toward a platform nucleic acid quantification technology that is drastically more sensitive than current microarrays while remaining compatible with the microarray architecture. The first device concept explores on-chip nucleic acid enrichment by natural evaporation of a nucleic acid solution droplet. Using a micro-patterned super-hydrophobic black silicon array device, evaporative enrichment is coupled with a nano-liter droplet self-assembly workflow to produce 50 aM concentration sensitivity, 6 orders of dynamic range, and a rapid hybridization time of under 5 minutes. The second device concept focuses on improving target copy-number sensitivity rather than concentration sensitivity. A comprehensive microarray physical model taking into account molecular transport, electrostatic intermolecular interactions, and reaction kinetics is used to guide device optimization. Device pattern size and target copy number are optimized based on model predictions to achieve maximal hybridization efficiency. At a 100-μm pattern size, a quantum leap in detection limit of 570 copies is achieved using the black silicon array device with a self-assembled pico-liter droplet workflow. Despite its merits, evaporative enrichment on the black silicon device suffers from the coffee-ring effect at the 100-μm pattern size, and is thus not compatible with clinical patient samples. The third device concept utilizes an integrated optomechanical laser system and a Cytop microarray device to reverse the coffee-ring effect during evaporative enrichment at the 100-μm pattern size. This method, named "laser-induced differential evaporation", is expected to enable a 570-copy detection limit for clinical samples in the near future. While the work is ongoing as of the writing of this dissertation, a clear research plan is in place to implement this method on the microarray platform toward clinical sample testing for disease applications and future commercialization.
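The role of enrichment can be illustrated with two-state Langmuir hybridization kinetics, one ingredient of the kind of physical model mentioned above. The rate constants below are illustrative assumptions; the dissertation's full model also treats transport and electrostatics, which are omitted here.

```python
import math

def hybridized_fraction(t_s, conc_m, k_on=1e6, k_off=1e-4):
    """Fraction of surface probes hybridized after t_s seconds at
    target concentration conc_m (mol/L), from Langmuir kinetics:
    theta(t) = k_on*C/(k_on*C + k_off) * (1 - exp(-(k_on*C + k_off)*t)).
    k_on (1/M/s) and k_off (1/s) are illustrative assumptions."""
    k = k_on * conc_m + k_off
    return (k_on * conc_m / k) * (1.0 - math.exp(-k * t_s))

# 50 aM target enriched ~1000x by droplet evaporation, 5 min hybridization:
print(f"{hybridized_fraction(300, 50e-15 * 1000):.1e}")
```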
Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin
2017-06-01
A systematic review of nonspecific low back pain trials published between 1980 and 2012. The objectives were to explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one within which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at the point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or the reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (the equation is included in the full-text article). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of evidence: 3.
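For context on the powering thresholds discussed, the usual normal-approximation sample-size formula for a two-arm trial is sketched below. It gives the familiar ~63 per arm for a standardized mean difference of 0.5 and ~174 per arm for 0.3, which puts the review's average total trial size of 153 in perspective.

```python
from scipy.stats import norm

def n_per_arm(smd, alpha=0.05, power=0.8):
    """Two-arm sample size per arm for a standardized mean difference,
    using n = 2 * ((z_{1-alpha/2} + z_{power}) / smd)**2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(round(2 * (z / smd) ** 2))

for d in (0.5, 0.3):
    print(d, n_per_arm(d))  # ~63 per arm for 0.5, ~174 per arm for 0.3
```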
NASA Astrophysics Data System (ADS)
Callahan, John H.; Galicia, Marsha C.; Vertes, Akos
2002-09-01
Laser evaporation techniques, including matrix-assisted pulsed laser evaporation (MAPLE), are attracting increasing attention due to their ability to deposit thin layers of undegraded synthetic polymers and biopolymers. Laser evaporation methods can be implemented in reflection geometry, with the laser and the substrate positioned on the same side of the target. In some applications (e.g. direct write, DW), however, transmission geometry is used, i.e. the thin target is placed between the laser and the substrate. In this case, the laser pulse perforates the target and transfers some target material to the substrate. In order to optimize evaporation processes it is important to know the composition of the target plume and of the material deposited from the plume. We used a recently introduced analytical method, atmospheric pressure matrix-assisted laser desorption ionization (AP-MALDI), to characterize the ionic components of the plume in both reflection and transmission geometry. This technique can also be used to directly probe materials deposited on surfaces (such as glass slides) by laser evaporation methods. The test compound (small peptides, e.g. Angiotensin I, ATI, or Substance P) was mixed with a MALDI matrix (α-cyano-4-hydroxycinnamic acid (CHCA), sinapinic acid (SA) or 2,5-dihydroxybenzoic acid (DHB)) and applied to the stainless steel (reflection geometry) or transparent conducting (transmission geometry) target holder. In addition to the classical dried-droplet method, we also used electrospray target deposition to gain better control of crystallite size, thickness and homogeneity. The target was mounted in front of the inlet orifice of an ion trap mass spectrometer (IT-MS) that sampled the ionic components of the plume generated by a nitrogen laser. We studied the effect of several parameters, such as the orifice-to-target distance, illumination geometry, extracting voltage distribution and sample preparation, on the generated ions. Various analyte-matrix and matrix-matrix cluster ions were observed, with relatively low abundance of the matrix ions.
SAMURAI: Polar AUV-Based Autonomous Dexterous Sampling
NASA Astrophysics Data System (ADS)
Akin, D. L.; Roberts, B. J.; Smith, W.; Roderick, S.; Reves-Sohn, R.; Singh, H.
2006-12-01
While autonomous undersea vehicles are increasingly being used for surveying and mapping missions, as of yet there has been little concerted effort to create a system capable of performing physical sampling or other manipulation of the local environment. This type of activity has typically been performed under teleoperated control from ROVs, which provides high-bandwidth, real-time human direction of the manipulation activities. Manipulation from an AUV will require a completely autonomous sampling system, which implies not only advanced technologies such as machine vision and autonomous target designation, but also dexterous robot manipulators to perform the actual sampling without human intervention. As part of the NASA Astrobiology Science and Technology for Exploring the Planets (ASTEP) program, the University of Maryland Space Systems Laboratory has been adapting and extending robotics technologies developed for spacecraft assembly and maintenance to the problem of autonomous sampling of biologicals and soil samples around hydrothermal vents. The Sub-polar ice Advanced Manipulator for Universal Sampling and Autonomous Intervention (SAMURAI) system comprises a 6000-meter-capable six-degree-of-freedom dexterous manipulator, along with an autonomous vision system, a multi-level control system, and sampling end effectors and storage mechanisms to allow collection of samples from vent fields. SAMURAI will be integrated onto the Woods Hole Oceanographic Institution (WHOI) Jaguar AUV, and used in the Arctic during the fall of 2007 for autonomous vent field sampling on the Gakkel Ridge. Under the current operations concept, the JAGUAR and PUMA AUVs will survey the water column and localize on hydrothermal vents. Early mapping missions will create photomosaics of the vents and local surroundings, allowing scientists on the mission to designate desirable sampling targets. Based on physical characteristics such as size, shape, and coloration, the targets will be loaded into the SAMURAI control system, and JAGUAR (with SAMURAI mounted to the lower forward hull) will return to the designated target areas. Once on site, vehicle control will be turned over to the SAMURAI controller, which will perform vision-based guidance to the sampling site and will then ground the AUV to the sea bottom for stability. The SAMURAI manipulator will collect samples, such as sessile biologicals, geological samples, and (potentially) vent fluids, and store the samples for the return trip. After several hours of sampling operations on one or several sites, JAGUAR control will be returned to the WHOI onboard controller for the return to the support ship. (Operational details of AUV operations on the Gakkel Ridge mission are presented in other papers at this conference.) Between sorties, SAMURAI end effectors can be changed out on the surface for specific targets, such as push cores or larger biologicals such as tube worms. In addition to the obvious challenges in autonomous vision-based manipulator control from a free-flying support vehicle, significant development challenges have been the design of a highly capable robotic arm within the mass limitations (both wet and dry) of the JAGUAR vehicle, the development of a highly robust manipulator with modular maintenance units for extended polar operations, and the creation of a robot-based sample collection and holding system for multiple heterogeneous samples on a single extended sortie.
Cytotoxicity and cellular uptake of different sized gold nanoparticles in ovarian cancer cells
NASA Astrophysics Data System (ADS)
Kumar, Dhiraj; Mutreja, Isha; Chitcholtan, Kenny; Sykes, Peter
2017-11-01
Nanomedicine has advanced the biomedical field with the availability of multifunctional nanoparticle (NP) systems that can target a disease site, enabling drug delivery and helping to monitor the disease. In this paper, we synthesised gold nanoparticles (AuNPs) with average sizes of 18, 40, 60 and 80 nm, and studied the effect of nanoparticle size, concentration and incubation time on ovarian cancer cells, namely OVCAR5, OVCAR8 and SKOV3. The size measured from transmission electron microscopy images was slightly smaller than the hydrodynamic diameter, with sizes measured by ImageJ of 14.55, 38.13, 56.88 and 78.56 nm. The cellular uptake was significantly controlled by AuNP size, concentration, and cell type. Nanoparticle uptake increased with increasing concentration, and the 18 and 80 nm AuNPs showed higher uptake, ranging from 1.3 to 5.4 μg depending upon the concentration and cell type. The AuNPs were associated with a temporary reduction in metabolic activity, but metabolic activity remained above 60% for all sample types; the NPs significantly affected cell proliferation activity in the first 12 h. Increases in nanoparticle size and concentration induced the production of reactive oxygen species within 24 h.
NASA Astrophysics Data System (ADS)
Nakamura, Akiko M.; Yamane, Fumiya; Okamoto, Takaya; Takasawa, Susumu
2015-03-01
The outcome of a collision between small solid bodies is characterized by the threshold energy density Q*_s, the specific energy to shatter, defined as the ratio of projectile kinetic energy to target mass (or the sum of target and projectile masses) needed to produce a largest intact fragment containing one half the target mass. It is indicated theoretically and by numerical simulations that the disruption threshold Q*_s decreases with target size in the strength-dominated regime. The tendency was confirmed by laboratory impact experiments using non-porous rock targets (Housen and Holsapple, 1999; Nagaoka et al., 2014). In this study, we performed low-velocity impact disruption experiments on porous gypsum targets with porosity of 65-69% and of three different sizes to examine the size dependence of the disruption threshold for porous material. The gypsum specimens were shown to have a weaker volume dependence of static tensile strength than the non-porous rocks. The disruption threshold also had a weaker dependence on size scale, Q*_s ∝ D^(-γ) with γ ≤ 0.25-0.26, while previous laboratory studies showed γ = 0.40 for the non-porous rocks. The measurements at low velocity lead to a value of about 100 J kg^(-1) for Q*_s, which is roughly one order of magnitude lower than the value of Q*_s for gypsum targets of 65% porosity impacted by projectiles at higher velocities. Such a clear dependence on impact velocity was also shown by previous studies of gypsum targets with porosity of 50%.
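The reported scaling can be written as Q*_s = Q_ref (D/D_ref)^(-γ). A minimal sketch, with reference values that are illustrative stand-ins for one of the laboratory targets rather than numbers from the paper:

```python
def q_star(d_m, q_ref=100.0, d_ref=0.03, gamma=0.26):
    """Scale the shattering threshold with target diameter:
    Q*_s = q_ref * (D / d_ref)**(-gamma), in J/kg.
    q_ref and d_ref are illustrative; gamma=0.26 is the upper
    bound reported for the porous gypsum targets."""
    return q_ref * (d_m / d_ref) ** (-gamma)

# Extrapolate from a 3-cm lab target to a 3-m boulder:
print(f"{q_star(3.0):.0f} J/kg")  # 100 * 100**-0.26 ≈ 30 J/kg
```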
In situ single cell detection via microfluidic magnetic bead assay
KC, Pawan; Zhang, Ge; Zhe, Jiang
2017-01-01
We present a single cell detection device based on magnetic bead assay and micro Coulter counters. This device consists of two successive micro Coulter counters, coupled with a high-gradient magnetic field generated by an external magnet. The device can identify single cells in terms of the difference in the cell's transit times through the two micro Coulter counters. Target cells are conjugated with magnetic beads via specific antibody-antigen binding. A target cell traveling through the two Coulter counters interacts with the magnetic field and has a longer transit time at the 1st counter than at the 2nd counter. In comparison, a non-target cell has no interaction with the magnetic field and hence has nearly the same transit time through the two counters. Each cell passing through the two counters generates two consecutive voltage pulses, one after the other; the pulse widths and magnitudes indicate the cell's transit times through the counters and the cell's size, respectively. Thus, by measuring the pulse widths (transit times) of each cell through the two counters, each single target cell can be differentiated from non-target cells even if they have similar sizes. We experimentally proved that the target human umbilical vein endothelial cells (HUVECs) and non-target rat adipose-derived stem cells (rASCs) have significantly different transit time distributions, from which we can determine the recognition regions for both cell groups quantitatively. We further demonstrated that within a mixed cell population of rASCs and HUVECs, HUVECs can be detected in situ and the measured HUVEC ratios agree well with the pre-set ratios. With the simple device structure and easy sample preparation, this method is expected to enable single cell detection in continuous flow and can be applied to general cell detection applications such as stem cell identification and enumeration. PMID:28222140
In situ single cell detection via microfluidic magnetic bead assay.
Liu, Fan; Kc, Pawan; Zhang, Ge; Zhe, Jiang
2017-01-01
We present a single cell detection device based on magnetic bead assay and micro Coulter counters. This device consists of two successive micro Coulter counters, coupled with a high-gradient magnetic field generated by an external magnet. The device can identify single cells in terms of the difference in the cell's transit times through the two micro Coulter counters. Target cells are conjugated with magnetic beads via specific antibody-antigen binding. A target cell traveling through the two Coulter counters interacts with the magnetic field and has a longer transit time at the 1st counter than at the 2nd counter. In comparison, a non-target cell has no interaction with the magnetic field and hence has nearly the same transit time through the two counters. Each cell passing through the two counters generates two consecutive voltage pulses, one after the other; the pulse widths and magnitudes indicate the cell's transit times through the counters and the cell's size, respectively. Thus, by measuring the pulse widths (transit times) of each cell through the two counters, each single target cell can be differentiated from non-target cells even if they have similar sizes. We experimentally proved that the target human umbilical vein endothelial cells (HUVECs) and non-target rat adipose-derived stem cells (rASCs) have significantly different transit time distributions, from which we can determine the recognition regions for both cell groups quantitatively. We further demonstrated that within a mixed cell population of rASCs and HUVECs, HUVECs can be detected in situ and the measured HUVEC ratios agree well with the pre-set ratios. With the simple device structure and easy sample preparation, this method is expected to enable single cell detection in continuous flow and can be applied to general cell detection applications such as stem cell identification and enumeration.
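A minimal sketch of the decision rule implied by the two records above: compare each cell's transit times through the two counters and call it a target when the first is sufficiently longer. The cutoff and pulse widths are invented for illustration; in practice they would come from the measured transit-time distributions of pure populations.

```python
import numpy as np

def classify_cells(t1, t2, ratio_cutoff=1.2):
    """Label a cell 'target' when its transit time through the first
    counter (inside the magnetic field) is noticeably longer than
    through the second; ratio_cutoff is an illustrative assumption."""
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    return np.where(t1 / t2 > ratio_cutoff, "target", "non-target")

# Pulse widths (ms) for three cells: the first is slowed in counter 1.
print(classify_cells([4.8, 2.1, 2.3], [2.0, 2.0, 2.2]))
```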
Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas
2014-01-01
Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas
2014-01-01
The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.
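The reported negative correlation is exactly what a significance filter produces. The simulation below (illustrative parameters, not the authors' data) applies a p < .05 publication filter to studies of varying size and recovers a markedly negative correlation between sample size and published effect size.

```python
import numpy as np

rng = np.random.default_rng(1)
n = rng.integers(20, 400, size=1000)      # study sample sizes
true_effect = 0.2
se = 2.0 / np.sqrt(n)                     # approx. s.e. of a two-group SMD
observed = rng.normal(true_effect, se)    # noisy observed effects

# Publication filter: keep only studies reaching p < .05 (|z| > 1.96).
published = np.abs(observed / se) > 1.96
r = np.corrcoef(n[published], observed[published])[0, 1]
print(f"r = {r:.2f}")  # markedly negative, mimicking the reported -.45
```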
Plewan, Thorsten; Rinkenauer, Gerhard
2016-01-01
Reaction time (RT) can strongly be influenced by a number of stimulus properties. For instance, there was converging evidence that perceived size rather than physical (i.e., retinal) size constitutes a major determinant of RT. However, this view has recently been challenged since within a virtual three-dimensional (3D) environment retinal size modulation failed to influence RT. In order to further investigate this issue in the present experiments response force (RF) was recorded as a supplemental measure of response activation in simple reaction tasks. In two separate experiments participants’ task was to react as fast as possible to the occurrence of a target located close to the observer or farther away while the offset between target locations was increased from Experiment 1 to Experiment 2. At the same time perceived target size (by varying the retinal size across depth planes) and target type (sphere vs. soccer ball) were modulated. Both experiments revealed faster and more forceful reactions when targets were presented closer to the observers. Perceived size and target type barely affected RT and RF in Experiment 1 but differentially affected both variables in Experiment 2. Thus, the present findings emphasize the usefulness of RF as a supplement to conventional RT measurement. On a behavioral level the results confirm that (at least) within virtual 3D space perceived object size neither strongly influences RT nor RF. Rather the relative position within egocentric (body-centered) space presumably indicates an object’s behavioral relevance and consequently constitutes an important modulator of visual processing. PMID:28018273
Sampling Strategy and Curation Plan of "Hayabusa" Asteroid Sample Return Mission
NASA Technical Reports Server (NTRS)
Yano, H.; Fujiwara, A.; Abe, M.; Hasegawa, S.; Kushiro, I.; Zolensky, M. E.
2004-01-01
On 9 May 2003 JST, the Japanese spacecraft MUSES-C was successfully launched from Uchinoura. The spacecraft was directly inserted into an interplanetary trajectory and renamed Hayabusa, or "Falcon", becoming the world's first sample return spacecraft targeting a near Earth asteroid (NEA). The NEA (25143) Itokawa (formerly known as "1998SF36") is its mission target. Its orbital and physical characteristics are well observed; the size is (490 ± 100) × (250 ± 55) × (180 ± 50) m, with an approximately 12-hour rotation period. It has a red-sloped S(IV)-type spectrum with strong 1- and 2-micron absorption bands, analogous to ordinary LL chondrites with a space weathering effect. Assuming its bulk density, the surface gravity level of Itokawa is on the order of 10 μG, with an escape velocity of approximately 20 cm/s.
Nickel speciation in several serpentine (ultramafic) topsoils via bulk synchrotron-based techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebecker, Matthew G.; Chaney, Rufus L.; Sparks, Donald L.
2017-07-01
Serpentine soils have elevated concentrations of trace metals, including nickel, cobalt, and chromium, compared to non-serpentine soils. Identifying the nickel-bearing minerals allows for prediction of the potential mobility of nickel. Synchrotron-based techniques can identify the solid-phase chemical forms of nickel with minimal sample treatment. Element concentrations are known to vary among soil particle sizes in serpentine soils. Sonication is a useful method to physically disperse sand, silt and clay particles in soils. Synchrotron-based techniques and sonication were employed to identify nickel species in discrete particle size fractions in several serpentine (ultramafic) topsoils to better understand solid-phase nickel geochemistry. Nickel commonly resided in primary serpentine parent materials such as layered phyllosilicate and chain inosilicate minerals and was associated with iron oxides. In the clay fractions, nickel was associated with iron oxides and primary serpentine minerals, such as lizardite. Linear combination fitting (LCF) was used to characterize nickel species. Total metal concentration did not correlate with nickel speciation and is not an indicator of the major nickel species in the soil. Differences in soil texture were related to different nickel speciation for several particle-size-fractionated samples. A discussion of LCF illustrates the importance of choosing standards based not only on statistical methods such as Target Transformation but also on sample mineralogy and particle size. Results from the F-test (Hamilton test), an underutilized tool in the LCF soils literature, highlight its usefulness for determining the appropriate number of standards to use in LCF. EXAFS shell fitting illustrates that the destructive interference commonly found for light and heavy elements in layered double hydroxides and in phyllosilicates can also occur in inosilicate minerals, causing similar structural features and leading to false positive results in LCF.
Guo, Jiin-Huarng; Luh, Wei-Ming
2009-05-01
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas minimize the total cost, the total sample size, or the sum of the total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the generated samples, and the procedure is then validated in terms of Type I error and power. Simulation results show that the proposed formulas control Type I error and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
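One classical ingredient of such cost-based designs is the allocation ratio for comparing two means under unequal variances and unequal per-observation costs. The sketch below shows that standard result as an illustration of the idea; it is not the paper's trimmed-mean (Yuen) formulas.

```python
import math

def optimal_allocation(sd1, sd2, c1, c2):
    """Cost-optimal group-size ratio n1/n2 = (sd1/sd2) * sqrt(c2/c1),
    from minimizing the variance of the mean difference subject to a
    fixed budget c1*n1 + c2*n2."""
    return (sd1 / sd2) * math.sqrt(c2 / c1)

# Group 1 twice as variable, group 2 observations three times as costly:
print(f"n1/n2 = {optimal_allocation(2.0, 1.0, 1.0, 3.0):.2f}")  # 3.46
```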
miR-543 promotes gastric cancer cell proliferation by targeting SIRT1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Juan; Dong, Guoying; Wang, Bo
SIRT1, a class III histone deacetylase, exerts inhibitory effects on tumorigenesis and is downregulated in gastric cancer. However, the role of microRNAs in the regulation of SIRT1 in gastric cancer is still largely unknown. Here, we identified miR-543 as a predicted upstream regulator of SIRT1 using 3 different bioinformatics databases. Mimics of miR-543 significantly inhibited the expression of SIRT1, whereas an inhibitor of miR-543 increased SIRT1 expression. MiR-543 directly targeted the 3′-UTR of SIRT1, and both of the two binding sites contributed to the inhibitory effects. In gastric epithelium-derived cell lines, miR-543 promoted cell proliferation and cell cycle progression, and overexpression of SIRT1 rescued the above effects of miR-543. The inhibitory effects of miR-543 on SIRT1 were also validated using clinical gastric cancer samples. Moreover, we found that miR-543 expression was positively associated with tumor size, clinical grade, TNM stage and lymph node metastasis in gastric cancer patients. Our results identify a new regulatory mechanism of miR-543 on SIRT1 expression in gastric cancer, and raise the possibility that the miR-543/SIRT1 pathway may serve as a potential target for the treatment of gastric cancer. - Highlights: • SIRT1 is a novel target of miR-543. • miR-543 promotes gastric cancer cell proliferation and cell cycle progression by targeting SIRT1. • miR-543 is upregulated in GC and positively associated with tumor size, clinical grade, TNM stage and lymph node metastasis. • miR-543 is negatively correlated with SIRT1 expression in gastric cancer tissues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorhout, Jacquelyn Marie
This dissertation covers several distinct projects relating to the fields of nuclear forensics and basic actinide science. Post-detonation nuclear forensics, in particular the study of fission products resulting from a nuclear device to determine device attributes and information, often depends on the comparison of fission products to a library of known ratios. The expansion of this library is imperative as technology advances. Rapid separation of fission products from a target material, without the need to dissolve the target, is an important technique to develop to improve the library and provide a means to develop samples and standards for testing separations. Several materials were studied as a proof of concept that fission products can be extracted from a solid target, including microparticulate (< 10 μm diameter) dUO2, porous metal organic frameworks (MOFs) synthesized from depleted uranium (dU), and other organic-based frameworks containing dU. The targets were irradiated with fast neutrons from one of two different neutron sources, contacted with dilute acids to facilitate the separation of fission products, and analyzed via gamma spectroscopy for separation yields. The results indicate that smaller particle sizes of dUO2 in contact with the secondary matrix KBr yield higher separation yields than particles without a secondary matrix. It was also discovered that using 0.1 M HNO3 as a contact acid leads to the dissolution of the target material, so lower concentrations of acid were used for subsequent experiments. In the case of the MOFs, a larger pore size in the framework leads to higher separation yields when contacted with 0.01 M HNO3. Different types of frameworks also yield different results.
Development of high flux thermal neutron generator for neutron activation analysis
NASA Astrophysics Data System (ADS)
Vainionpaa, Jaakko H.; Chen, Allan X.; Piestrup, Melvin A.; Gary, Charles K.; Jones, Glenn; Pantell, Richard H.
2015-05-01
The new model DD110MB neutron generator from Adelphi Technology produces a thermal (<0.5 eV) neutron flux normally achieved only in a nuclear reactor or a larger accelerator-based system. Thermal neutron fluxes of 3-5 · 10⁷ n/cm²/s are measured. This flux is achieved using four ion beams arranged concentrically around a target chamber containing a compact moderator with a central sample cylinder. A fast neutron yield of ∼2 · 10¹⁰ n/s is created at the titanium surface of the target chamber. The thickness and material of the moderator are selected to maximize the thermal neutron flux at the center. The 2.5 MeV neutrons are quickly thermalized to energies below 0.5 eV and concentrated at the sample cylinder. The maximum flux of thermal neutrons at the target is achieved when approximately half of the neutrons at the sample area are thermalized. In this paper we present simulation results used to characterize the performance of the neutron generator. The neutron flux can be used for neutron activation analysis (NAA) and prompt gamma neutron activation analysis (PGNAA) to determine the concentrations of elements in many materials. Another envisioned use of the generator is the production of radioactive isotopes. The DD110MB is small enough for modest-sized laboratories and universities. Compared to a nuclear reactor, the DD110MB produces comparable thermal flux with reduced administrative and safety requirements, and it can be run in pulsed mode, which is beneficial in many neutron activation techniques.
NASA Astrophysics Data System (ADS)
Khattabi, Areen M.; Alqdeimat, Diala A.
2018-02-01
One of the problems in the use of nanoparticles (NPs) as carriers in drug delivery systems is their agglomeration, which arises mainly from their high surface energy. This results in the formation of NPs of different sizes, leading to differences in their distribution and bioavailability. Surface coating of NPs with certain compounds can be used to prevent or minimize this problem. In this study, the effect of cyclodextrin (CD) on the agglomeration state, and hence on the in vitro characteristics, of drug-loaded and targeted silica NPs was investigated. One sample of NPs was loaded with anticancer agents and then modified with a long polymer, carboxymethyl-β-cyclodextrin (CM-β-CD), followed by folic acid (FA). Another sample was modified similarly but without CD. The surface modification was characterized using Fourier transform infrared spectroscopy (FT-IR). The polydispersity (PD) was measured using dynamic light scattering (DLS) and was found to be smaller for the CD-modified NPs. The in vitro drug release results showed that the release rate from both samples followed a similar pattern for the first 5 hours; however, the rate was faster from the CD-modified NPs after 24 hours. The in vitro cell viability assay confirmed that the CD-modified NPs were about 30% more toxic to HeLa cells. These findings suggest that CD has a clear effect in minimizing the agglomeration of such modified silica NPs, accelerating their drug release rate and enhancing their targeting effect.
NASA Astrophysics Data System (ADS)
Hubbard, K.; Bruzek, S.
2016-02-01
The globally distributed marine diatom genus Pseudo-nitzschia consists of approximately 40 species, more than half of which occur in US coastal waters. Here, sensitive genetic tools targeting a variable portion of the internal transcribed spacer 1 (ITS1) region of the rRNA gene were used to assess Pseudo-nitzschia spp. diversity in more than 600 environmental DNA samples collected from US Atlantic, Pacific, and Gulf of Mexico waters. Community-based approaches employed genus-specific primers for environmental DNA fingerprinting and targeted sequencing. For the Gulf of Mexico samples especially, a nested PCR approach (with or without degenerate primers) improved resolution of species diversity. To date, more than 40 unique ITS1 amplicon sizes have been repeatedly observed in ITS1 fingerprints. Targeted sequencing of environmental DNA as well as single chains isolated from live samples indicates that many of these represent both novel and known inter- and intraspecific Pseudo-nitzschia diversity. A few species (e.g., P. pungens, P. cuspidata) occur across all three regions, whereas other species and intraspecific variants occur at local to regional spatial scales only. Species frequently co-occur in complex assemblages, and transitions in Pseudo-nitzschia community composition occur seasonally, prior to bloom initiation, and across (cross-shelf, latitudinal, and vertical) environmental gradients. These observations highlight the dynamic nature of diatom community composition in the marine environment and the importance of classifying diversity at relevant ecological and/or taxonomic scales.
Feil, A; Thoden van Velzen, E U; Jansen, M; Vitz, P; Go, N; Pretz, T
2016-02-01
The recovery of beverage cartons (BC) in three lightweight packaging waste processing plants (LP) was analyzed with different input materials and input masses in the range of 21-50 Mg. The data were generated by gravimetric determination of the sorting products, sampling, and sorting analysis. Since the particle size of beverage cartons is larger than 120 mm, a modified sampling plan was implemented, with multiple sampling (3-11 individual samplings) and a total sample size of about 1200 l (ca. 60 kg) for the BC products and about 2400 l (ca. 120 kg) for the material-heterogeneous mixed plastics (MP) and sorting residue products. The results indicate that the beverage carton yield in the process, i.e., including all product-containing material streams, can be specified only with considerable fluctuation ranges. Consequently, the total assessment across all product streams is qualitative rather than quantitative. Irregular operating conditions as well as unfavorable sampling conditions and capacity overloads are likely causes of the high confidence intervals. From the results of the current study, recommendations can be derived for better sampling in LP processing plants. Despite the suboptimal statistics, the results indicate very clearly that the plants have definite optimization potential with regard to both the yield of beverage cartons and the required product purity. Due to the test character of the sorting trials, the plant parameterization was not ideal for this sorting task, and consequently the results should be interpreted with care. Copyright © 2015 Elsevier Ltd. All rights reserved.
Pezzoli, Lorenzo; Andrews, Nick; Ronveaux, Olivier
2010-05-01
Vaccination programmes targeting disease elimination aim to achieve very high coverage levels (e.g. 95%). We calculated the precision of different clustered lot quality assurance sampling (LQAS) designs in computer-simulated surveys to provide local health officers in the field with preset LQAS plans to simply and rapidly assess programmes with high coverage targets. We calculated the sample size (N), decision value (d) and misclassification errors (alpha and beta) of several LQAS plans by running 10,000 simulations. We kept the upper coverage threshold (UT) at 90% or 95% and decreased the lower threshold (LT) progressively by 5%. We measured the proportion of simulations with ≤d unvaccinated individuals when the coverage was set at the UT (pUT) to calculate beta (1-pUT), and the proportion of simulations with >d unvaccinated individuals when the coverage was LT% (pLT) to calculate alpha (1-pLT). We divided N into clusters (between 5 and 10) and recalculated the errors under the hypothesis that coverage varies across clusters according to a binomial distribution with preset standard deviations of 0.05 and 0.1 from the mean lot coverage. We selected the plans fulfilling these criteria: alpha ≤ 5% and beta ≤ 20% in the unclustered design; alpha ≤ 10% and beta ≤ 25% when the lots were divided into five clusters. When the interval between UT and LT was larger than 10% (e.g. 15%), we were able to select precise LQAS plans dividing the lot into five clusters with N = 50 (5 x 10) and d = 4 to evaluate programmes with a 95% coverage target and d = 7 for programmes with a 90% target. These plans will considerably increase the feasibility and rapidity of conducting LQAS in the field.
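The simulation procedure described above is straightforward to reproduce in outline. The Python sketch below estimates the two misclassification errors for the unclustered design exactly as the abstract defines them (beta = 1 - pUT, alpha = 1 - pLT); the specific thresholds shown (UT = 95%, LT = 80%, i.e. a 15% interval) are illustrative assumptions, and the clustered variant with binomially varying cluster coverage is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def lqas_errors(n, d, upper, lower, n_sims=10_000):
    """Estimate LQAS misclassification errors by simulation.

    A lot is 'accepted' when at most d of the n sampled individuals
    are unvaccinated. beta is the chance of rejecting a lot whose true
    coverage equals the upper threshold (UT); alpha is the chance of
    accepting a lot whose true coverage equals the lower threshold (LT).
    """
    unvacc_at_ut = rng.binomial(n, 1 - upper, size=n_sims)
    unvacc_at_lt = rng.binomial(n, 1 - lower, size=n_sims)
    beta = np.mean(unvacc_at_ut > d)    # 1 - pUT
    alpha = np.mean(unvacc_at_lt <= d)  # 1 - pLT
    return alpha, beta

# One plan from the abstract: N = 50, d = 4 for a 95% coverage target,
# with an assumed 15% interval between thresholds (LT = 80%).
print(lqas_errors(50, 4, upper=0.95, lower=0.80))
```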
USDA-ARS?s Scientific Manuscript database
Stable fly management has been challenging. Insecticide-treated targets made from blue and black fabric, developed in Africa, were evaluated in Louisiana and Florida to determine if they would attract and kill stable flies. Untreated targets were used to answer questions about configuration, size an...
Mapping Invasive Plant Species with a Combination of Field and Remote Sensing Data
NASA Astrophysics Data System (ADS)
Skowronek, S.; Feilhauer, H.; Van De Kerchove, R.; Ewald, M.; Aerts, R.; Somers, B.; Warrie, J.; Kempeneers, P.; Lenoir, J.; Honnay, O.; Asner, G. P.; Schmidtlein, S.; Hattab, T.; Rocchini, D.
2015-12-01
Advanced hyperspectral and LIDAR data offer great potential to map and monitor invasive plant species and their impact on ecosystems. These species are often difficult to detect over large areas with traditional mapping approaches. One challenge is the combination of remote sensing data with field data for calibration and validation. Our goals were therefore to (1) develop an approach that allows species invasions to be mapped efficiently based on presence-only data of the target species and remote sensing data; and (2) use this approach to create distribution maps for invasive plant species in two study areas in western Europe, providing the basis for further analysis of the impact of invasions and for inferring possible management options. For this purpose, on the island of Sylt in northern Germany, we collected vegetation data on 120 plots with a size of 3 m x 3 m with different cover fractions of two invasive plant species: the moss Campylopus introflexus and the shrub Rosa rugosa. In the forest of Compiègne in northern France, we sampled a total of 50 plots with a size of 25 m x 25 m, targeting the invasive tree Prunus serotina. In both study areas, independent validation datasets containing presence and absence points of the target species were collected. Airborne hyperspectral data (APEX), acquired simultaneously for both study areas in summer 2014, provided 285 spectral bands covering the visible, near-infrared, and short-wave infrared regions with pixel sizes of 1.8 and 3 m. First results showed that mapping using one-class classifiers is possible: for C. introflexus, the AUC was 0.89 and the OAC 0.72; for R. rugosa, the AUC was 0.93 and the OAC 0.92. However, for both species a few areas were mapped incorrectly. Possible explanations are the different appearances of the target species in biotope types underrepresented in the calibration data, and a high cover of species with similar reflectance properties.
Affective context interferes with cognitive control in unipolar depression: An fMRI investigation
Dichter, Gabriel S.; Felder, Jennifer N.; Smoski, Moria J.
2009-01-01
Background Unipolar major depressive disorder (MDD) is characterized by aberrant amygdala responses to sad stimuli and poor cognitive control, but the interactive effects of these impairments are poorly understood. Aim To evaluate brain activation in MDD in response to cognitive control stimuli embedded within sad and neutral contexts. Method Fourteen adults with MDD and fifteen matched controls participated in a mixed block/event-related functional magnetic resonance imaging (fMRI) task that presented oddball target stimuli embedded within blocks of sad or neutral images. Results Target events activated similar prefrontal brain regions in both groups. However, responses to target events embedded within blocks of emotional images revealed a clear group dissociation. During neutral blocks, the control group demonstrated greater activation to targets in the midfrontal gyrus and anterior cingulate relative to the MDD group, replicating previous findings of prefrontal hypo-activation in MDD samples to cognitive control stimuli. However, during sad blocks, the MDD group demonstrated greater activation in a number of prefrontal regions, including the mid-, inferior, and orbito-frontal gyri and the anterior cingulate, suggesting that relatively more prefrontal brain activation was required to disengage from the sad images to respond to the target events. Limitations A larger sample size would have provided greater statistical power, and more standardized stimuli would have increased external validity. Conclusions This double dissociation of prefrontal responses to target events embedded within neutral and sad context suggests that MDD impacts not only responses to affective events, but extends to other cognitive processes carried out in the context of affective engagement. This implies that emotional reactivity to sad events in MDD may impact functioning more broadly than previously understood. PMID:18706701
NASA Astrophysics Data System (ADS)
Yasui, M.; Arakawa, M.
2011-12-01
Most asteroids are expected to be impact fragments produced by collisions among planetesimals, or rubble-pile bodies produced by re-accumulation of fragments. In order to study the formation processes of asteroids, it is necessary to examine the collisional disruption and re-accumulation conditions of planetesimals. Most meteorites recovered on the Earth are ordinary chondrites (OCs). The OCs are composed of millimeter-sized round grains (chondrules) and submicron-sized dust (matrix), so the planetesimals forming the parent bodies of OCs could be mainly composed of chondrules and matrix. Therefore, we conducted impact experiments with porous gypsum mixed with spherical glass beads of various diameters simulating chondrules, and examined the effect of chondrules on the ejecta velocity and the impact strength. The targets included glass beads with diameters ranging from 100 μm to 3 mm at a volume fraction of 0.6, similar to that of ordinary chondrites, which is about 0.65-0.75. We also prepared a porous gypsum sample without glass beads to examine the effect of volume fraction. Nylon projectiles with diameters of 10 mm and 2 mm were impacted at 60-180 m/s by a single-stage gas gun and at about 4 km/s by a two-stage light gas gun, respectively. After each shot, we measured the mass of the recovered fragments to calculate the impact strength Q, defined by Q = m_p V_i^2 / [2(m_p + M_t)], where V_i is the impact velocity and m_p and M_t are the masses of the projectile and target, respectively. The collisional disruption of the target was observed by a high-speed video camera to measure the ejecta velocity. The antipodal velocity V_a increased with increasing Q, irrespective of glass bead size and volume fraction. However, V_a for low-velocity collisions at 60-180 m/s was an order of magnitude larger than that for high-velocity collisions at 4 km/s. The velocities of fragments ejected from two corners on the impact surface of the target, V_c-g, measured in the center-of-mass system, were independent of the target material. The impact strength of the mixture targets was found to range from 56 to 116 J/kg depending on the glass bead size, and was several times smaller than that of the gypsum target, 446 J/kg, in low-velocity collisions. The impact strengths of the 100 μm bead target and the gypsum target strongly depended on the impact velocity: those obtained in high-velocity collisions were several times greater than those obtained in low-velocity collisions. The obtained values of V_c-g were compared to the escape velocity of chondrule-including planetesimals (CiPs) to study the conditions for the formation of rubble-pile bodies after catastrophic disruption. The fragments of CiPs undergoing catastrophic disruption could re-accumulate for body radii larger than 3 km, irrespective of the chondrule size included in the CiPs, which is rather smaller than that for basalt bodies. Thus, we suggest that many parent bodies of OCs had a rubble-pile structure.
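For readability, the impact strength defined in the abstract, written out in display form with the same symbols:

```latex
Q = \frac{m_p V_i^{2}}{2\,(m_p + M_t)}
```

That is, Q is the projectile's kinetic energy divided by the combined mass of projectile and target.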
Douglass, John K; Wehling, Martin F
2016-12-01
A highly automated goniometer instrument (called FACETS) has been developed to facilitate rapid mapping of compound eye parameters for investigating regional visual field specializations. The instrument demonstrates the feasibility of analyzing the complete field of view of an insect eye in a fraction of the time required if using non-motorized, non-computerized methods. Faster eye mapping makes it practical for the first time to employ sample sizes appropriate for testing hypotheses about the visual significance of interspecific differences in regional specializations. Example maps of facet sizes are presented from four dipteran insects representing the Asilidae, Calliphoridae, and Stratiomyidae. These maps provide the first quantitative documentation of the frontal enlarged-facet zones (EFZs) that typify asilid eyes, which, together with the EFZs in male Calliphoridae, are likely to be correlated with high-spatial-resolution acute zones. The presence of EFZs contrasts sharply with the almost homogeneous distribution of facet sizes in the stratiomyid. Moreover, the shapes of EFZs differ among species, suggesting functional specializations that may reflect differences in visual ecology. Surveys of this nature can help identify species that should be targeted for additional studies, which will elucidate fundamental principles and constraints that govern visual field specializations and their evolution.
Prevalence and diversity of avian Haemosporida infecting songbirds in southwest Michigan.
Smith, Jamie D; Gill, Sharon A; Baker, Kathleen M; Vonhof, Maarten J
2018-02-01
Avian blood parasites from the genera Plasmodium, Haemoproteus, and Leucocytozoon (Haemosporida) affect hosts in numerous ways. They influence species interactions, host behavior, reproductive success, and cause pathology and mortality in birds. The Great Lakes region of North America has extensive aquatic and wetland habitat and supports a diverse vector community. Here we describe the community of bird-infecting Haemosporida in southwest Michigan and their host associations by measuring parasite prevalence, diversity, and host breadth across a diverse community of avian hosts. Over 700 songbirds of 55 species were screened for Haemosporida infection across southwest Michigan, including 11 species that were targeted for larger sample sizes. In total, 71 parasite lineages infected over 40% of birds. Of these, 42 were novel, yet richness estimates suggest that approximately half of the actual parasite diversity in the host community was observed despite intensive sampling of multiple host species. Parasite prevalence varied among parasite genera (7-24%) and target host species (0-85%), and parasite diversity was consistently high across most target species. Host breadth varied widely across the most prevalent parasite lineages, and we detected around 60% of host species richness for these parasite lineages. We report many new lineages and novel host-parasite associations, but substantial parasite diversity remains undiscovered in the Midwest.
Social anhedonia, but not positive schizotypy, is associated with poor affective control.
Martin, Elizabeth A; Cicero, David C; Kerns, John G
2012-07-01
Emotion researchers have distinguished between automatic versus controlled processing of affective information. One previous study with a small sample size found that extreme levels of social anhedonia (SocAnh) in college students, which predicts future schizophrenia-spectrum disorders, is associated with problems in controlled affective processing on a primed evaluation task. The current study examined whether in a larger college student sample SocAnh but not elevated perceptual aberration/magical ideation (PerMag) was associated with poor controlled affective processing. On the primed evaluation task, primes and targets could be either affectively congruent or incongruent and participants judged the valence of targets. Previous research on this task has found that participants appear to use controlled processing in an attempt to counteract the influence of the prime in evaluating the target. In this study, compared to the PerMag (n = 48) and control groups (n = 338), people with extreme levels of social anhedonia (n = 62) exhibited increased affective interference as they were slower for incongruent than for congruent trials. In contrast, there were no differences between the PerMag and control groups. Overall, these results suggest that SocAnh, but not PerMag, is associated with poor controlled affective processing. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Sputtering erosion in ion and plasma thrusters
NASA Technical Reports Server (NTRS)
Ray, Pradosh K.
1995-01-01
An experimental set-up to measure low-energy (below 1 keV) sputtering of materials is described. The materials to be bombarded represent ion thruster components as well as insulators used in the stationary plasma thruster. The sputtering takes place in a 9 inch diameter spherical vacuum chamber. Ions of argon, krypton and xenon are used to bombard the target materials. The sputtered neutral atoms are detected by a secondary neutral mass spectrometer (SNMS). Samples of copper, nickel, aluminum, silver and molybdenum are being sputtered initially to calibrate the spectrometer. The base pressure of the chamber is approximately 2 × 10⁻⁹ Torr. The primary ion beam is generated by an ion gun which is capable of delivering ion currents in the range of 20 to 500 nA. The ion beam can be focused to approximately 1 mm in diameter. The mass spectrometer is positioned 10 mm from the target and at a 90 deg angle to the primary ion beam direction. The ion beam impinges on the target at 45 deg. For sputtering of insulators, charge neutralization is performed by flooding the sample with electrons generated from an electron gun. Preliminary sputtering results, methods of calculating the instrument response function of the spectrometer, and the relative sensitivity factors of the sputtered elements will be discussed.
Feist, Peter; Hummon, Amanda B.
2015-01-01
Proteins regulate many cellular functions, and analyzing the presence and abundance of proteins in biological samples is a central focus of proteomics. The discovery and validation of biomarkers, pathways, and drug targets for various diseases can be accomplished using mass spectrometry-based proteomics. However, with mass-limited samples like tumor biopsies, it can be challenging to obtain sufficient amounts of protein to generate high-quality mass spectrometric data. Techniques developed for macroscale quantities recover sufficient amounts of protein from milligram quantities of starting material, but sample losses become crippling with these techniques when only microgram amounts of material are available. To combat this challenge, proteomicists have developed micro-scale techniques that are compatible with decreased sample sizes (100 μg or lower) and still enable excellent proteome coverage. Extraction, contaminant removal, protein quantitation, and sample handling techniques for the microgram protein range are reviewed here, with an emphasis on liquid chromatography and bottom-up mass spectrometry-compatible techniques. Also, a range of biological specimens, including mammalian tissues and model cell culture systems, are discussed. PMID:25664860
NASA Astrophysics Data System (ADS)
Carpino, Francesca
In the last few decades, the development and use of nanotechnology has become of increasing importance. Magnetic nanoparticles, because of their unique properties, have been employed in many different areas of application. They are generally made of a core of magnetic material coated with some other material to stabilize them and to help disperse them in suspension. The unique feature of magnetic nanoparticles is their response to a magnetic field. They are generally superparamagnetic, in which case they become magnetized only in a magnetic field and lose their magnetization when the field is removed. It is this feature that makes them so useful for drug targeting, hyperthermia and bioseparation. For many of these applications, the synthesis of uniformly sized magnetic nanoparticles is of key importance because their magnetic properties depend strongly on their dimensions. Because of the difficulty of synthesizing monodisperse particulate materials, a technique capable of characterizing the magnetic properties of polydisperse samples is of great importance. Quadrupole magnetic field-flow fractionation (MgFFF) is a technique capable of fractionating magnetic particles based on their content of magnetite or other magnetic material. In MgFFF, the interplay of hydrodynamic and magnetic forces separates the particles as they are carried along a separation channel. Since the magnetic field and the gradient in magnetic field acting on the particles during their migration are known, it is possible to calculate the quantity of magnetic material in the particles according to their time of emergence at the channel outlet. Knowing the magnetic properties of the core material, MgFFF can be used to determine both the size distribution and the mean size of the magnetic cores of polydisperse samples. When magnetic material is distributed throughout the volume of the particles, the derived data corresponds to a distribution in equivalent spherical diameters of magnetic material in the particles. MgFFF is unique in its ability to characterize the distribution in magnetic properties of a particulate sample. This knowledge is not only of importance to the optimization and quality control of particle preparation. It is also of great importance in modeling magnetic cell separation, drug targeting, hyperthermia, and other areas of application.
Descartes region - Evidence for Copernican-age volcanism.
NASA Technical Reports Server (NTRS)
Head, J. W., III; Goetz, A. F. H.
1972-01-01
A model that suggests that the high-albedo central region of the Descartes Formation was formed by Copernican-age volcanism was developed from Orbiter photography, Apollo 12 multispectral photography, earth-based spectrophotometry, and thermal IR and radar data. The bright surface either is abundant in centimeter-sized rocks or is formed from an insulating debris layer overlying a surface with an abundance of rocks in the 1- to 20-cm size range. On the basis of these data, the bright unit is thought to be a young pyroclastic deposit mantling older volcanic units of the Descartes Formation. Since the Apollo 16 target point is only 50 km NW of the central part of this unit, evidence for material associated with this unique highland formation should be searched for in returned soil and rock samples.
Analysis of calibration accuracy of cameras with different target sizes for large field of view
NASA Astrophysics Data System (ADS)
Zhang, Jin; Chai, Zhiwen; Long, Changyu; Deng, Huaxia; Ma, Mengchao; Zhong, Xiang; Yu, Huan
2018-03-01
Visual measurement plays an increasingly important role in the fields of aerospace, shipbuilding and machinery manufacturing, and camera calibration for a large field of view is a critical part of visual measurement. A large-scale target is difficult to produce and its precision cannot be guaranteed, while a small target can be produced with high precision but yields only locally optimal solutions. Therefore, the most suitable ratio of the target size to the camera field of view must be studied to ensure the calibration precision required for a wide field of view. In this paper, cameras are calibrated with a series of checkerboard calibration targets and round calibration targets of different dimensions, respectively. The ratios of the target size to the camera field of view are 9%, 18%, 27%, 36%, 45%, 54%, 63%, 72%, 81% and 90%. The target is placed at different positions in the camera field to obtain the camera parameters at each position. The distribution curves of the mean reprojection error of the reconstructed feature points are then analyzed for the different ratios. The experimental data demonstrate that as the ratio of the target size to the camera field of view increases, the calibration precision improves accordingly, and the mean reprojection error changes only slightly once the ratio is above 45%.
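As an illustration of the calibration and reprojection-error workflow that the study repeats across target/field-of-view ratios, here is a minimal checkerboard sketch using OpenCV; the board geometry, image paths, and unit square size are assumptions for the example, not the authors' actual setup.

```python
import glob

import cv2
import numpy as np

BOARD = (9, 6)  # inner-corner grid of the checkerboard (assumed)

# 3-D reference points of the corners, in units of one (unknown) square side
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):  # hypothetical images for one size ratio
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]  # image size as (width, height)

# The RMS reprojection error is the quantity compared across ratios
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("RMS reprojection error:", rms)
```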
Buck Louis, Germaine M; Schisterman, Enrique F; Sweeney, Anne M; Wilcosky, Timothy C; Gore-Langton, Robert E; Lynch, Courtney D; Boyd Barr, Dana; Schrader, Steven M; Kim, Sungduk; Chen, Zhen; Sundaram, Rajeshwari
2011-09-01
The relationship between the environment and human fecundity and fertility remains virtually unstudied from a couple-based perspective in which longitudinal exposure data and biospecimens are captured across sensitive windows. In response, we completed the LIFE Study with methodology intended to empirically evaluate a priori purported methodological challenges: implementation of population-based sampling frameworks suitable for recruiting couples planning pregnancy; obtaining environmental data across sensitive windows of reproduction and development; home-based biospecimen collection; and development of a data management system for hierarchical exposome data. We used two sampling frameworks (i.e., a fish/wildlife licence registry and a direct marketing database) for 16 targeted counties with presumed environmental exposures to persistent organochlorine chemicals to recruit 501 couples planning pregnancies for prospective longitudinal follow-up while trying to conceive and throughout pregnancy. Enrolment rates varied from <1% of the targeted population (n = 424,423) to 42% of eligible couples who were successfully screened; 84% of the targeted population could not be reached, while 36% refused screening. Among enrolled couples, ∼85% completed daily journals while trying; 82% of pregnant women completed daily early pregnancy journals, and 80% completed monthly pregnancy journals. All couples provided baseline blood/urine samples; 94% of men provided one or more semen samples and 98% of women provided one or more saliva samples. Women successfully used urinary fertility monitors for identifying ovulation and home pregnancy test kits. Couples can be recruited for preconception cohorts and will comply with intensive data collection across sensitive windows. However, appropriately sized sampling frameworks are critical, given the small percentage of contacted couples found eligible and reportedly planning pregnancy at any point in time. © Published 2011. This article is a US Government work and is in the public domain in the USA.
Griffiths, Paula L; Balakrishna, Nagalla; Fernandez Rao, Sylvia; Johnson, William
2016-01-01
In total, 3.1 million young children die every year from under-nutrition. Greater understanding of the associations between socio-economic status (SES) and the biological factors that shape under-nutrition is required to target interventions. We aimed to establish whether SES inequalities in under-nutrition, proxied by infant size at 12 months, operate through maternal and early infant size measures. The sample comprised 347 Indian infants born in 60 villages in rural Andhra Pradesh in 2005-2007. Structural equation path models were applied to decompose the total relationship between SES (standard of living index) and length- and weight-for-age Z-scores (LAZ/WAZ) at 12 months into direct and indirect (operating through maternal BMI and height, birthweight Z-score, and LAZ/WAZ at 6 months) paths. SES had a direct positive association with LAZ (standardised coefficient = 0.08, 95% CI = 0.02-0.13) and WAZ at age 12 months (standardised coefficient = 0.08, 95% CI = 0.02-0.15). It also had additional indirect positive associations through increased maternal height and subsequently increased birthweight and WAZ/LAZ at 6 months, accounting for 35% and 53% of the total effect for WAZ and LAZ, respectively. The findings support targeting evidence-based growth interventions towards infants from the poorest families with the shortest mothers. Increasing SES can improve growth for two generations.
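As a reading aid, the direct/indirect decomposition estimated by such path models has the general form below; the mediators M_k (maternal BMI and height, birthweight Z-score, LAZ/WAZ at 6 months) are those named in the abstract, while the notation itself is ours.

```latex
\beta_{\text{total}}
  = \underbrace{\beta_{\mathrm{SES} \to Z_{12}}}_{\text{direct}}
  + \underbrace{\sum_{k} \beta_{\mathrm{SES} \to M_{k}}\,
                \beta_{M_{k} \to Z_{12}}}_{\text{indirect via mediators } M_{k}}
```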
Impedance modulation and feedback corrections in tracking targets of variable size and frequency.
Selen, Luc P J; van Dieën, Jaap H; Beek, Peter J
2006-11-01
Humans are able to adjust the accuracy of their movements to the demands posed by the task at hand. The variability in task execution caused by the inherent noisiness of the neuromuscular system can be tuned to task demands by both feedforward (e.g., impedance modulation) and feedback mechanisms. In this experiment, we studied both mechanisms, using mechanical perturbations to estimate stiffness and damping as indices of impedance modulation and submovement scaling as an index of feedback driven corrections. Eight subjects tracked three differently sized targets (0.0135, 0.0270, and 0.0405 rad) moving at three different frequencies (0.20, 0.25, and 0.33 Hz). Movement variability decreased with both decreasing target size and movement frequency, whereas stiffness and damping increased with decreasing target size, independent of movement frequency. These results are consistent with the theory that mechanical impedance acts as a filter of noisy neuromuscular signals but challenge stochastic theories of motor control that do not account for impedance modulation and only partially for feedback control. Submovements during unperturbed cycles were quantified in terms of their gain, i.e., the slope between their duration and amplitude in the speed profile. Submovement gain decreased with decreasing movement frequency and increasing target size. The results were interpreted to imply that submovement gain is related to observed tracking errors and that those tracking errors are expressed in units of target size. We conclude that impedance and submovement gain modulation contribute additively to tracking accuracy.
Identifying On-Orbit Test Targets for Space Fence Operational Testing
NASA Astrophysics Data System (ADS)
Pechkis, D.; Pacheco, N.; Botting, T.
2014-09-01
Space Fence will be an integrated system of two ground-based, S-band (2 to 4 GHz) phased-array radars located in Kwajalein and perhaps Western Australia [1]. Space Fence will cooperate with other Space Surveillance Network sensors to provide space object tracking and radar characterization data to support U.S. Strategic Command space object catalog maintenance and other space situational awareness needs. We present a rigorous statistical test design intended to test Space Fence to the letter of the program requirements as well as to characterize the system performance across the entire operational envelope. The design uses altitude, size, and inclination as independent factors in statistical tests of dependent variables (e.g., observation accuracy) linked to requirements. The analysis derives the type and number of necessary test targets. Comparing the resulting sample sizes with the number of currently known targets, we identify those areas where modelling and simulation methods are needed. Assuming hypothetical Kwajalein radar coverage and a conservative number of radar passes per object per day, we conclude that tests involving real-world space objects should take no more than 25 days to evaluate all operational requirements; almost 60 percent of the requirements can be tested in a single day and nearly 90 percent can be tested in one week or less. Reference: [1] L. Haines and P. Phu, Space Fence PDR Concept Development Phase, 2011 AMOS Conference Technical Papers.
Is Ginkgo biloba a cognitive enhancer in healthy individuals? A meta-analysis.
Laws, Keith R; Sweetnam, Hilary; Kondel, Tejinder K
2012-11-01
We conducted a meta-analysis to examine whether Ginkgo biloba (G. biloba) enhances cognitive function in healthy individuals. Scopus, Medline, Google Scholar databases and recent qualitative reviews were searched for studies examining the effects of G. biloba on cognitive function in healthy individuals. We identified randomised controlled trials containing data on memory (K = 13), executive function (K = 7) and attention (K = 8) from which effect sizes could be derived. The analyses provided measures of memory, executive function and attention in 1132, 534 and 910 participants, respectively. Effect sizes were non-significant and close to zero for memory (d = -0.04: 95%CI -0.17 to 0.07), executive function (d = -0.05: 95%CI -0.17 to 0.05) and attention (d = -0.08: 95%CI -0.21 to 0.02). Meta-regressions showed that effect sizes were not related to participant age, duration of the trial, daily dose, total dose or sample size. We report that G. biloba had no ascertainable positive effects on a range of targeted cognitive functions in healthy individuals. Copyright © 2012 John Wiley & Sons, Ltd.
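For readers unfamiliar with how pooled effect sizes and confidence intervals of the kind reported above are formed, here is a minimal inverse-variance (fixed-effect) sketch in Python; the per-study inputs are illustrative numbers, not the trial data analyzed in the paper.

```python
import numpy as np

def pool_fixed_effect(d, se):
    """Inverse-variance pooled effect size with a 95% confidence interval."""
    d, se = np.asarray(d, float), np.asarray(se, float)
    w = 1.0 / se**2                      # study weights
    d_pool = np.sum(w * d) / np.sum(w)   # weighted mean effect
    se_pool = np.sqrt(1.0 / np.sum(w))   # pooled standard error
    return d_pool, (d_pool - 1.96 * se_pool, d_pool + 1.96 * se_pool)

# Illustrative Cohen's d values and standard errors for three trials
print(pool_fixed_effect([0.10, -0.05, -0.12], [0.15, 0.12, 0.20]))
```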
Maximova, Katerina; Khan, Mohammad K A; Austin, S Bryn; Kirk, Sara F L; Veugelers, Paul J
2015-10-01
Underestimating body size hinders the healthy behavior modification needed to prevent obesity. However, initiatives to improve body size misperceptions may have detrimental consequences for self-esteem and self-efficacy. Using sex-specific multiple mixed-effect logistic regression models, we examined the association of underestimating versus accurately perceiving body size with self-esteem and self-efficacy in a provincially representative sample of 5075 grade five school children. Body size perception was defined as the standardized difference between the body mass index (BMI, from measured height and weight) and self-perceived body size (Stunkard body rating scale). Self-esteem and self-efficacy for physical activity and healthy eating were self-reported. Most overweight boys and girls (91% and 83%) and most obese boys and girls (93% and 90%) underestimated their body size. Underestimating weight was associated with greater self-efficacy for physical activity and healthy eating among normal-weight children (odds ratios: 1.9 and 1.6 for boys, 1.5 and 1.4 for girls) and greater self-esteem among overweight and obese children (odds ratios: 2.0 and 6.2 for boys, 2.0 and 3.4 for girls). The results highlight the importance of developing optimal intervention strategies, as part of targeted obesity prevention efforts, that de-emphasize the focus on body weight while improving body size perceptions. Copyright © 2015 Elsevier Inc. All rights reserved.
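The misperception measure defined above (standardized BMI minus standardized self-perceived size) can be written as a small helper; z-scoring within the sample and the sign convention (positive = underestimation) are our assumptions about one plausible operationalization.

```python
import numpy as np

def size_misperception(bmi, stunkard):
    """Standardized difference between measured BMI and self-perceived
    body size (Stunkard 1-9 rating); positive values indicate a child
    whose perceived size is low relative to measured size (assumed sign)."""
    z_bmi = (bmi - bmi.mean()) / bmi.std()
    z_perceived = (stunkard - stunkard.mean()) / stunkard.std()
    return z_bmi - z_perceived

bmi = np.array([16.2, 22.8, 19.5, 25.1])    # illustrative measured BMIs
stunkard = np.array([3.0, 4.0, 4.0, 5.0])   # illustrative self-ratings
print(size_misperception(bmi, stunkard))
```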
NASA Astrophysics Data System (ADS)
van der Bogert, C. H.; Hiesinger, H.; Dundas, C. M.; Krüger, T.; McEwen, A. S.; Zanetti, M.; Robinson, M. S.
2017-12-01
Recent work on dating Copernican-aged craters, using Lunar Reconnaissance Orbiter (LRO) Camera data, re-encountered a curious discrepancy in crater size-frequency distribution (CSFD) measurements that was observed, but not understood, during the Apollo era. For example, at Tycho, Copernicus, and Aristarchus craters, CSFDs of impact melt deposits give significantly younger relative and absolute model ages (AMAs) than impact ejecta blankets, although these two units formed during one impact event, and would ideally yield coeval ages at the resolution of the CSFD technique. We investigated the effects of contrasting target properties on CSFDs and their resultant relative and absolute model ages for coeval lunar impact melt and ejecta units. We counted craters with diameters through the transition from strength- to gravity-scaling on two large impact melt deposits at Tycho and King craters, and we used pi-group scaling calculations to model the effects of differing target properties on final crater diameters for five different theoretical lunar targets. The new CSFD for the large King Crater melt pond bridges the gap between the discrepant CSFDs within a single geologic unit. Thus, the observed trends in the impact melt CSFDs support the occurrence of target property effects, rather than self-secondary and/or field secondary contamination. The CSFDs generated from the pi-group scaling calculations show that targets with higher density and effective strength yield smaller crater diameters than weaker targets, such that the relative ages of the former are lower relative to the latter. Consequently, coeval impact melt and ejecta units will have discrepant apparent ages. Target property differences also affect the resulting slope of the CSFD, with stronger targets exhibiting shallower slopes, so that the final crater diameters may differ more greatly at smaller diameters. Besides their application to age dating, the CSFDs may provide additional information about the characteristics of the target. For example, the transition diameter from strength- to gravity-scaling could provide a tool for investigating the relative strengths of different geologic units. The magnitude of the offset between the impact melt and ejecta isochrons may also provide information about the relative target properties and/or exposure/degradation ages of the two units. Robotic or human sampling of coeval units on the Moon could provide a direct test of the importance and magnitude of target property effects on CSFDs.
Immediate Judgments of Learning are Insensitive to Implicit Interference Effects at Retrieval
Eakin, Deborah K.; Hertzog, Christopher
2013-01-01
We conducted three experiments to determine whether metamemory predictions made at encoding, immediate judgments of learning (IJOLs), are sensitive to implicit interference effects that will occur at retrieval. Implicit interference was manipulated by varying the association set size of the cue (Exps. 1 and 2) or the target (Exp. 3). The typical finding is that memory is worse for large-set-size cues and targets, but only when the target is studied alone and later prompted with a related cue (extralist). When the pairs are studied together (intralist), recall is the same regardless of set size; set-size effects are eliminated. Metamemory predictions made at retrieval, such as delayed JOLs (DJOLs) and feeling-of-knowing (FOK) judgments, accurately reflect implicit interference effects (e.g., Eakin & Hertzog, 2006). In Experiment 1, we contrasted cue-set-size effects on IJOLs, DJOLs, and FOKs. After wrangling with an interesting methodological conundrum related to set-size effects (Exp. 2), we found that whereas DJOLs and FOKs accurately predicted set-size effects on retrieval, a comparison between IJOLs and no-cue IJOLs demonstrated that immediate judgments did not vary with set size. In Experiment 3, we confirmed this finding by manipulating target set size. Again, IJOLs did not vary with set size, whereas DJOLs and FOKs did. The findings provide further evidence for the inferential view regarding the source of metamemory predictions and indicate that inferences are based on different sources depending on when in the memory process predictions are made. PMID:21915761
Role of Beam Spot Size in Heating Targets at Depth.
Ross, E Victor; Childs, James
2015-12-01
Wavelength, fluence, and pulse width are primary device parameters for the treatment of skin and hair conditions. Wavelength selection is based on tissue scatter and target chromophores. Pulse width is chosen to optimize target heating. The energy absorbed by a target is determined by the fluence and spot size of the light source as well as the depth of the target. We conducted an in vitro skin study and simulations to compare heating of a target at a particular depth versus spot size. Porcine skin and fat tissue were prepared and separated to form a 2 mm skin layer above a 1 cm thick fat layer. A 50 μm thermocouple was placed between the layers and centered beneath the 23 x 38 mm treatment window of an 805 nm diode laser device (Vectus, Cynosure, Westford, MA). Apertures provided various incident beam spot sizes, and the temperature rise of the thermocouple was measured at a fixed fluence. The 2 mm deep target's temperature rise versus treatment area showed two regimes with different positive slopes. The first regime, up to approximately 1 cm² in area, has a greater temperature rise versus area than the regime above 1 cm². The slope in the second regime is nonetheless appreciable and provides a fluence reduction factor for skin safety. The same temperature rise in a target at 2 mm depth (a typical hair bulb depth in some areas) is realized by increasing the area from 1 to 4 cm² while reducing the fluence by half. The role of spot size and in situ beam divergence is an important consideration in determining optimum fluence settings that increase skin safety when treating deeper targets.
Parallel coding of conjunctions in visual search.
Found, A
1998-10-01
Two experiments investigated whether the conjunctive nature of nontarget items influenced search for a conjunction target. Each experiment consisted of two conditions. In both conditions, the target item was a red bar tilted to the right, among white tilted bars and vertical red bars. As well as color and orientation, display items also differed in terms of size. Size was irrelevant to search in that the size of the target varied randomly from trial to trial. In one condition, the size of items correlated with the other attributes of display items (e.g., all red items were big and all white items were small). In the other condition, the size of items varied randomly (i.e., some red items were small and some were big, and some white items were big and some were small). Search was more efficient in the size-correlated condition, consistent with the parallel coding of conjunctions in visual search.
Leszczuk, Mikołaj; Dudek, Łukasz; Witkowski, Marcin
The VQiPS (Video Quality in Public Safety) Working Group, supported by the U.S. Department of Homeland Security, has been developing a user guide for public safety video applications. According to VQiPS, five parameters have particular importance for the ability to achieve a recognition task: usage time-frame, discrimination level, target size, lighting level, and level of motion. These parameters form what are referred to as Generalized Use Classes (GUCs). The aim of our research was to develop algorithms that automatically assist classification of input sequences into one of the GUCs; the target size and lighting level parameters were addressed. The experiment described reveals the experts' ambiguity and hesitation during the manual target size determination process. Nevertheless, the automatic methods developed for target size classification make it possible to determine GUC parameters with 70% compliance to the end-users' opinion. Lighting levels of an entire sequence can be classified with an efficiency reaching 93%. To make the algorithms available for use, a test application has been developed. It is able to process video files and display classification results, with a very simple user interface requiring only minimal user interaction.
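The abstract does not spell out the classifiers themselves, but a toy version of the lighting-level step might threshold mean frame luminance, as in the hedged Python sketch below; the threshold value and the two-class output are purely illustrative assumptions, not the VQiPS algorithm.

```python
import numpy as np

def lighting_class(frames, threshold=80.0):
    """Toy lighting-level classifier: mean luma over a sequence of 8-bit
    grayscale frames, split at an illustrative threshold."""
    mean_luma = float(np.mean([f.mean() for f in frames]))
    return "low-light" if mean_luma < threshold else "normal"

# Random frames stand in for decoded video in this self-contained example
frames = [np.random.randint(0, 256, (480, 640), dtype=np.uint8) for _ in range(10)]
print(lighting_class(frames))
```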
GPR Imaging of Prehistoric Animal Bone-beds
NASA Astrophysics Data System (ADS)
Schneider, Blair Benson
This research investigates the detection capabilities of ground-penetrating radar (GPR) for imaging prehistoric animal bone-beds. The first step of this investigation was to determine the dielectric properties of modern animal bone as a proxy for applying non-invasive GPR to the detection of prehistoric animal remains. Over 90 thin-section samples were cut from four different modern faunal skeletal remains: bison, cow, deer, and elk. One sample of prehistoric mammoth core was also analyzed. Sample dielectric properties (relative permittivity, loss factor, and loss-tangent values) were measured with an impedance analyzer over frequencies ranging from 10 MHz to 1 GHz. The results reveal statistically significant dielectric-property differences among different animal fauna, as well as variation as a function of frequency. The measured sample permittivity values were then compared to modeled sample permittivity values using common dielectric-mixing models, which were used to derive new relative permittivity values of 3-5 for dry bone mineral over the 10 MHz to 1 GHz range. The second half of this research comprised controlled GPR experiments over a sandbox containing buried bison bone elements to evaluate GPR detection capabilities for buried animal bone. The results of the controlled GPR sandbox tests were then compared to numerical models in order to predict the ability of GPR to detect buried animal bone given a variety of depositional factors, the size and orientation of the bone target, and the degree of bone weathering. The radar profiles show that GPR is an effective method for imaging the horizontal and vertical extent of buried animal bone. However, increased bone weathering and increased bone dip were both found to affect GPR reflection signal strength. Finally, the controlled sandbox experiments were also used to investigate the impact of survey design on imaging buried animal bone, in particular the effects of GPR antenna orientation relative to the survey line (broad-side mode versus end-fire mode) and polarization effects of the buried bone targets. The results reveal that animal bone does exhibit polarization effects; however, these results are greatly affected by the irregular shape and size of the bone, which ultimately limits the usefulness of polarization data for determining the orientation of buried bone targets. In regard to antenna orientation, end-fire mode showed little difference in amplitude response compared to the more commonly used broad-side mode and in fact sometimes outperformed it. Future GPR investigations should consider utilizing multiple antenna orientations during data collection.
NASA Technical Reports Server (NTRS)
Brand, R. R.; Barker, J. L.
1983-01-01
A multistage sampling procedure using image processing, geographical information systems, and analytical photogrammetry is presented which can be used to guide the collection of representative, high-resolution spectra and discrete reflectance targets for future satellite sensors. The procedure is general and can be adapted to characterize areas as small as minor watersheds and as large as multistate regions. Beginning with a user-determined study area, successive reductions in size and spectral variation are performed using image analysis techniques on data from the Multispectral Scanner, orbital and simulated Thematic Mapper, low altitude photography synchronized with the simulator, and associated digital data. An integrated image-based geographical information system supports processing requirements.
Lee, Ada; Park, Juhee; Lim, Minji; Sunkara, Vijaya; Kim, Shine Young; Kim, Gwang Ha; Kim, Mi-Hyun; Cho, Yoon-Kyoung
2014-11-18
Circulating tumor cells (CTCs) have gained increasing attention owing to their roles in cancer recurrence and progression. Due to the rarity of CTCs in the bloodstream, an enrichment process is essential for effective target cell characterization. However, in a typical pressure-driven microfluidic system, the enrichment process generally requires complicated equipment and long processing times. Furthermore, the commonly used immunoaffinity-based positive selection method is limited, as its recovery rate relies on EpCAM expression of target CTCs, which shows heterogeneity among cell types. Here, we propose a centrifugal-force-based size-selective CTC isolation platform that can isolate and enumerate CTCs from whole blood within 30 s with high purity. The device was validated using the MCF-7 breast cancer cell line spiked in phosphate-buffered saline and whole blood, and an average capture efficiency of 61% was achieved, which is typical for size-based filtration. The capture efficiency for whole blood samples varied from 44% to 84% under various flow conditions and dilution factors. Under the optimized operating conditions, a few hundred white blood cells per 1 mL of whole blood were captured, representing a 20-fold decrease compared to those obtained using a commercialized size-based CTC isolation device. In clinical validation, normalized CTC counts varied from 10 to 60 per 7.5 mL of blood from gastric and lung cancer patients, yielding a detection rate of 50% and 38%, respectively. Overall, our CTC isolation device enables rapid and label-free isolation of CTCs with high purity, which should greatly improve downstream molecular analyses of captured CTCs.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
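The core idea, choosing the candidate design whose power is most robust across the plausible effect-size range, can be sketched as below for simple fixed designs; the two-sample z-test power formula and the worst-case criterion are our illustrative stand-ins for the paper's full comparison of adaptive designs.

```python
import numpy as np
from scipy.stats import norm

def power(n_per_arm, delta, alpha=0.025):
    """Approximate power of a one-sided two-sample z-test with standardized
    effect size delta and n subjects per arm (unit variance assumed)."""
    return norm.cdf(delta * np.sqrt(n_per_arm / 2.0) - norm.ppf(1 - alpha))

def most_robust_design(sample_sizes, effect_range):
    """Pick the sample size with the best worst-case power over the
    plausible effect-size range (a stand-in optimality criterion)."""
    return max(sample_sizes, key=lambda n: min(power(n, d) for d in effect_range))

effects = np.linspace(0.2, 0.5, 7)              # plausible standardized effects
print(most_robust_design([100, 200, 400], effects))
```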
Rezos, Mary M; Schultz, John J; Murdock, Ronald A; Smith, Stephen A
2010-02-25
Incorporating geophysical technologies into forensic investigations has become a growing practice. Oftentimes, forensic professionals rely on basic metal detectors to assist their efforts during metallic weapons searches. This has created a need for controlled research in the area of weapons searches, specifically to formulate guidelines for geophysical methods that may be appropriate for locating weapons that have been discarded or buried by criminals attempting to conceal their involvement in a crime. Controlled research allows not only for testing of geophysical equipment, but also for updating search methodologies. This research project was designed to demonstrate the utility of an all-metal detector for locating a buried metallic weapon through detecting and identifying specific types of buried metal targets. Controlled testing of 32 buried targets which represented a variety of sizes and metallic compositions included 16 decommissioned street-level firearms, 6 pieces of assorted scrap metals, and 10 blunt or bladed weapons. While all forensic targets included in the project were detected with the basic all-metal detector, the size of the weapon and surface area were the two variables that affected maximum depth of detection, particularly with the firearm sample. For example, when using a High setting the largest firearms were detected at a maximum depth of 55 cm, but the majority of the remaining targets were only detected at a maximum depth of 40 cm or less. Overall, the all-metal detector proved to be a very good general purpose metal detector best suited for detecting metallic items at shallow depths. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
Characterization of uranium carbide target materials to produce neutron-rich radioactive beams
NASA Astrophysics Data System (ADS)
Tusseau-Nenez, Sandrine; Roussière, Brigitte; Barré-Boscher, Nicole; Gottberg, Alexander; Corradetti, Stefano; Andrighetto, Alberto; Cheikh Mhamed, Maher; Essabaa, Saïd; Franberg-Delahaye, Hanna; Grinyer, Joanna; Joanny, Loïc; Lau, Christophe; Le Lannic, Joseph; Raynaud, Marc; Saïd, Abdelhakim; Stora, Thierry; Tougait, Olivier
2016-03-01
In the framework of an R&D program aiming to develop uranium carbide (UCx) targets for radioactive nuclear beams, the Institut de Physique Nucléaire d'Orsay (IPNO) has developed an experimental setup to characterize the release of various fission fragments from UCx samples at high temperature. The results obtained in a previous study demonstrated the feasibility of the method and began to correlate the structural properties of the samples with their behavior in terms of nuclear-reaction-product release. In the present study, seven UCx samples have been systematically characterized in order to better understand the correlation between their physicochemical characteristics and release properties. Two very different samples, the first composed of dense UC and the second of highly porous UCx made with multi-wall carbon nanotubes, were provided by the ActILab (ENSAR) collaboration. The others were synthesized at IPNO. The systems for irradiation and heating necessary for the release studies have been improved with respect to those used in previous studies. The results show that the open porosity is hardly the limiting factor for fission product release. The homogeneity of the microstructure and the pore size distribution contribute significantly to the increase of the release. The use of carbon nanotubes in place of traditional micrometric graphite particles appears to be promising, even if the homogeneity of the microstructure can still be enhanced.
Cuadros-Inostroza, Alvaro; Caldana, Camila; Redestig, Henning; Kusano, Miyako; Lisec, Jan; Peña-Cortés, Hugo; Willmitzer, Lothar; Hannah, Matthew A
2009-12-16
Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data.
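The retention-index mapping underlying this kind of peak matching can be sketched in a few lines. A hedged illustration only: this is not TargetSearch's actual algorithm, and the marker values below are invented stand-ins for an n-alkane ladder:

    import numpy as np

    # Hypothetical retention-time markers (seconds) and their assigned
    # retention indices; values are illustrative only
    marker_rt = np.array([300.0, 420.0, 560.0, 720.0, 900.0])
    marker_ri = np.array([1000.0, 1200.0, 1400.0, 1600.0, 1800.0])

    def rt_to_ri(rt):
        # Map observed retention times to retention indices by
        # piecewise-linear interpolation between the markers
        return np.interp(rt, marker_rt, marker_ri)

    print(rt_to_ri(np.array([455.0, 610.0])))  # indices for library matching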
Thundat, Thomas G.; Oden, Patrick I.; Datskos, Panagiotis G.
2000-01-01
A non-contact infrared thermometer measures target temperatures remotely without requiring knowledge of the ratio of the target size to the target-to-thermometer distance. A collection means collects and focuses target IR radiation on an IR detector. The detector measures the thermal energy of the target over a spectrum using micromechanical sensors. A processor means calculates the collected thermal energy in at least two different spectral regions using a first algorithm in program form, and further calculates the ratio of the thermal energy in the at least two different spectral regions to obtain the target temperature independent of the target size, the distance to the target, and the emissivity, using a second algorithm in program form.
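The size- and emissivity-independence follows because, for a graybody, emissivity and viewing geometry cancel in the ratio of in-band radiances, leaving a function of temperature alone. A minimal numerical sketch of that inversion, with illustrative wavelengths and not the patented algorithms:

    import numpy as np
    from scipy.optimize import brentq

    H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

    def planck(lam, T):
        # Spectral radiance at wavelength lam (m) and temperature T (K)
        return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K * T))

    def temperature_from_ratio(ratio, lam1, lam2, lo=200.0, hi=3000.0):
        # Invert the two-band radiance ratio for T; for a graybody the
        # emissivity, target size and distance all cancel in the ratio
        return brentq(lambda T: planck(lam1, T) / planck(lam2, T) - ratio, lo, hi)

    r = planck(8e-6, 310.0) / planck(12e-6, 310.0)   # simulate a 310 K target
    print(temperature_from_ratio(r, 8e-6, 12e-6))    # recovers ~310.0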
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, even practically meaningless effects could be detected. This implies that researchers who cannot obtain large effect sizes tend to use larger samples in order to obtain significant results.
Shape recognition of microbial cells by colloidal cell imprints
NASA Astrophysics Data System (ADS)
Borovička, Josef; Stoyanov, Simeon D.; Paunov, Vesselin N.
2013-08-01
We have engineered a class of colloids which can recognize the shape and size of targeted microbial cells and selectively bind to their surfaces. These imprinted colloid particles, which we called "colloid antibodies", were fabricated by partial fragmentation of silica shells obtained by templating the targeted microbial cells. We successfully demonstrated shape and size recognition between such colloidal imprints and matching microbial cells. A high percentage of binding events between colloidal imprints and size-matched target particles was achieved. We demonstrated selective binding of colloidal imprints to target microbial cells in a binary mixture of cells of different shapes and sizes, which also resulted in high binding selectivity. We explored the role of the electrostatic interactions between the target cells and their colloid imprints by pre-coating both of them with polyelectrolytes. Selective binding occurred predominantly in the case of opposite surface charges of the colloid cell imprint and the targeted cells. The mechanism of the recognition is based on the amplification of the surface adhesion in the case of shape and size match, due to the increased contact area between the target cell and the colloidal imprint. We also tested the selective binding for colloid imprints of particles of fixed shape and varying sizes. The concept of cell recognition by colloid imprints could be used for the development of colloid antibodies for shape-selective binding of microbes. Such colloid antibodies could be additionally functionalized with surface groups to enhance their binding efficiency to cells of specific shape and deliver a drug payload directly to their surface or allow them to be manipulated using external fields. They could benefit the pharmaceutical industry in developing selective antimicrobial therapies and formulations.
Sample Size Estimation: The Easy Way
ERIC Educational Resources Information Center
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education
ERIC Educational Resources Information Center
Slavin, Robert; Smith, Dewi
2009-01-01
Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…
Cherezov, Vadim; Hanson, Michael A.; Griffith, Mark T.; Hilgart, Mark C.; Sanishvili, Ruslan; Nagarajan, Venugopalan; Stepanov, Sergey; Fischetti, Robert F.; Kuhn, Peter; Stevens, Raymond C.
2009-01-01
Crystallization of human membrane proteins in lipidic cubic phase often results in very small but highly ordered crystals. The advent of the sub-10 µm minibeam at the APS GM/CA CAT has enabled the collection of high-quality diffraction data from such microcrystals. Herein we describe the challenges and solutions related to growing, manipulating and collecting data from optically invisible microcrystals embedded in an opaque, frozen in meso material. Of critical importance is the use of the intense and small synchrotron beam to raster through and locate the crystal sample in an efficient and reliable manner. The resulting diffraction patterns show a significant reduction in background, with strong intensity and improved diffraction resolution compared with larger beam sizes. Three high-resolution structures of human G protein-coupled receptors serve as evidence of the utility of these techniques, which will likely be useful for future structural determination efforts. We anticipate that further innovations in the technologies applied to microcrystallography will enable the solving of structures of ever more challenging targets. PMID:19535414
Factors affecting computer mouse use for young children: implications for AAC.
Costigan, F Aileen; Light, Janice C; Newell, Karl M
2012-06-01
More than 12% of preschoolers receiving special education services have complex communication needs, including increasing numbers of children who do not have significant motor impairments (e.g., children with autism spectrum disorders, Down syndrome, etc.). In order to meet their diverse communication needs (e.g., face-to-face, written, Internet, telecommunication), these children may use mainstream technologies accessed via the mouse, yet little is known about factors that affect the mouse performance of young children. This study used a mixed factorial design to investigate the effects of age, target size, and angle of approach on accuracy and time required for accurate target selection with a mouse for 20 3-year-old and 20 4-year-old children. The 4-year-olds were generally more accurate and faster than the 3-year-olds. Target size and angle mediated differences in performance within age groups. The 3-year-olds were more accurate and faster in selecting the medium and large targets relative to the small target, were faster in selecting the large relative to the medium target, and were faster in selecting targets along the vertical relative to the diagonal angle. The 4-year-olds were faster in selecting the medium and large targets relative to the small target. Implications for improving access to AAC include the preliminary suggestion of age-related threshold target sizes that support sufficient accuracy, the possibility of efficiency benefits when target size is increased up to an age-related threshold, and identification of the potential utility of the vertical angle as a context for training navigational input device use.
Testing a single regression coefficient in high dimensional linear models
Lan, Wei; Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling
2017-01-01
In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively. PMID:28663668
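The flavor of the screening-then-OLS recipe can be sketched as follows. This is a hedged illustration of the idea only; the paper's exact selection rule, the choice of k, and the heteroscedasticity-robust theory are not reproduced:

    import numpy as np

    def cps_t_stat(X, y, j, k=5):
        # Screen the k predictors most correlated with column j (the target
        # covariate), then fit OLS of y on the target plus those controls
        # and return the t-statistic of the target's coefficient
        n, p = X.shape
        target = X[:, j]
        corr = np.array([abs(np.corrcoef(target, X[:, m])[0, 1]) if m != j
                         else -1.0 for m in range(p)])
        controls = np.argsort(corr)[-k:]
        Z = np.column_stack([np.ones(n), target, X[:, controls]])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        sigma2 = resid @ resid / (n - Z.shape[1])
        se = np.sqrt(sigma2 * np.linalg.inv(Z.T @ Z)[1, 1])
        return beta[1] / se

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 1000))      # p >> n
    y = 0.8 * X[:, 3] + rng.standard_normal(200)
    print(cps_t_stat(X, y, j=3))              # large |t| flags a real effect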
Ballari, Rajashekhar V; Martin, Asha
2013-12-01
DNA quality is an important parameter for the detection and quantification of genetically modified organisms (GMOs) using the polymerase chain reaction (PCR). Food processing leads to degradation of DNA, which may impair GMO detection and quantification. This study evaluated the effect of various processing treatments such as heating, baking, microwaving, autoclaving and ultraviolet (UV) irradiation on the relative transgenic content of MON 810 maize using pRSETMON-02, a dual-target plasmid, as a model system. Amongst all the processing treatments examined, autoclaving and UV irradiation resulted in the least recovery of the transgenic (CaMV 35S promoter) and taxon-specific (zein) target DNA sequences. Although a profound impact on DNA degradation was seen during processing, DNA could still be reliably quantified by real-time PCR. The measured mean DNA copy number ratios of the processed samples were in agreement with the expected values. Our study confirms the premise that the final analytical value assigned to a particular sample is independent of the degree of DNA degradation, since transgenic and taxon-specific target sequences of approximately similar lengths degrade in parallel. The results of our study demonstrate that food processing does not alter the relative quantification of the transgenic content, provided the quantitative assays target shorter amplicons and the difference in amplicon size between the transgenic and taxon-specific genes is minimal. Copyright © 2013 Elsevier Ltd. All rights reserved.
Phylogenetic effective sample size.
Bartoszek, Krzysztof
2016-10-21
In this paper I address the question of how large a phylogenetic sample is. I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an existing concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or the effective number of species. Lastly, I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades, and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
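For intuition, the mean effective sample size that the paper uses as a baseline can be computed directly from the tips' correlation matrix: the variance of the sample mean of n correlated observations equals that of n_e independent ones. A small sketch with an equicorrelated toy matrix standing in for a real phylogeny:

    import numpy as np

    def mean_ess(R):
        # Mean effective sample size for observations with correlation
        # matrix R: n_e = 1' R^{-1} 1
        ones = np.ones(R.shape[0])
        return ones @ np.linalg.solve(R, ones)

    # Five tips sharing pairwise correlation 0.5 behave like ~1.67
    # independent observations
    R = np.full((5, 5), 0.5) + 0.5 * np.eye(5)
    print(mean_ess(R))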
Singh, Amandeep; Vihinen, Jorma; Frankberg, Erkka; Hyvärinen, Leo; Honkanen, Mari; Levänen, Erkki
2016-12-01
This paper aims to introduce small-angle X-ray scattering (SAXS) as a promising technique for measuring the size and size distribution of TiO2 nanoparticles. In this manuscript, pulsed laser ablation in liquids (PLAL) is demonstrated as a quick and simple technique for synthesizing TiO2 nanoparticles from titanium targets directly into deionized water as a suspension. Spherical TiO2 nanoparticles with diameters in the range 4-35 nm were observed with transmission electron microscopy (TEM). X-ray diffraction (XRD) showed highly crystalline nanoparticles comprising the two main photoactive phases of TiO2, anatase and rutile; minor amounts of brookite were also present. Traditional methods for nanoparticle size and size distribution analysis, such as electron-microscopy-based methods, are time-consuming. In this study, we have proposed and validated SAXS as a promising method for characterizing laser-ablated TiO2 nanoparticles for their size and size distribution by comparing SAXS- and TEM-measured nanoparticle sizes and size distributions. The SAXS- and TEM-measured size distributions closely followed each other for each sample, and the size distributions in both showed maxima at the same nanoparticle size. The SAXS-measured nanoparticle diameters were slightly larger than the respective diameters measured by TEM, because SAXS measures an agglomerate consisting of several particles as one large particle, which slightly increases the mean diameter. TEM- and SAXS-measured mean diameters, when plotted together, showed a similar trend in size as the laser power was changed, which, together with the closely matching size distributions, validates the application of SAXS for size distribution measurement of the synthesized TiO2 nanoparticles.
The endothelial sample size analysis in corneal specular microscopy clinical examinations.
Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci
2012-05-01
To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software, using a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of counted endothelial cells, called samples. The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparison with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. The endothelial sample size per examination needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
In Situ Balloon-Borne Ice Particle Imaging in High-Latitude Cirrus
NASA Astrophysics Data System (ADS)
Kuhn, Thomas; Heymsfield, Andrew J.
2016-09-01
Cirrus clouds reflect incoming solar radiation, creating a cooling effect. At the same time, these clouds absorb the infrared radiation from the Earth, creating a greenhouse effect. The net effect, crucial for radiative transfer, depends on the cirrus microphysical properties, such as particle size distributions and particle shapes. Knowledge of these cloud properties is also needed for calibrating and validating passive and active remote sensors. Ice particles of sizes below 100 µm are inherently difficult to measure with aircraft-mounted probes due to issues with resolution, sizing, and size-dependent sampling volume. Furthermore, artefacts are produced by shattering of particles on the leading surfaces of the aircraft probes when particles several hundred microns or larger are present. Here, we report on a series of balloon-borne in situ measurements that were carried out at a high-latitude location, Kiruna in northern Sweden (68°N, 21°E). The method used here avoids the issues experienced with aircraft probes. Furthermore, with a balloon-borne instrument, data are collected as vertical profiles, which are more useful for calibrating or evaluating remote sensing measurements than data collected along horizontal traverses. Particles are collected on an oil-coated film at a sampling speed given directly by the ascent rate of the balloon, 4 m s⁻¹. The collecting film is advanced uniformly inside the instrument so that an unused section of the film is always exposed to ice particles, which are measured by imaging shortly after sampling. The high optical resolution of about 4 µm together with a pixel resolution of 1.65 µm allows particle detection at sizes of 10 µm and larger. For particles that are 20 µm (12 pixels) in size or larger, the shape can be recognized. The sampling volume, 130 cm³ s⁻¹, is well defined and independent of particle size. With the encountered number concentrations of between 4 and 400 L⁻¹, sampling times of about 90 s down to 4 s were required to determine the particle size distributions of cloud layers. Depending on how ice particles vary through the cloud, several layers per cloud with relatively uniform properties have been analysed. Preliminary results of the balloon campaign, targeting upper-tropospheric cold cirrus clouds, are presented here. Ice particles in these clouds were predominantly very small, with a median size of around 50 µm and about 80% of all particles below 100 µm in size. The properties of the particle size distributions at temperatures between -36 and -67 °C have been studied, as well as particle areas, extinction coefficients, and shapes (area ratios). Gamma and log-normal distribution functions could be fitted to all measured particle size distributions, achieving very good correlation with coefficients R of up to 0.95. Each distribution features one distinct mode. With decreasing temperature, the mode diameter decreases exponentially, whereas the total number concentration increases by two orders of magnitude over the same range. The high concentrations at cold temperatures also produced larger extinction coefficients, determined directly from the cross-sectional areas of single ice particles, than at warmer temperatures. The mass of particles has been estimated from area and size. Ice water content (IWC) and effective diameters are then determined from the data. The IWC varied only between 1 × 10⁻³ and 5 × 10⁻³ g m⁻³ at temperatures below -40 °C and did not show a clear temperature trend.
These measurements are part of an ongoing study.
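Distribution fits like the gamma fits reported above are routine to reproduce. A hedged sketch with synthetic diameters standing in for the balloon data:

    import numpy as np
    from scipy import stats

    # Synthetic particle diameters (um) standing in for one cloud layer
    diameters = stats.lognorm.rvs(s=0.6, scale=50.0, size=500, random_state=0)

    # Fit a gamma distribution with the location pinned at zero
    shape, loc, scale = stats.gamma.fit(diameters, floc=0)
    print(shape, scale)  # parameters of the fitted size distribution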
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample sizes required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation); a larger ICC typically required a larger sample size. The simulation results also illustrate the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are also provided for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method; the product method is recommended in practice because of its lower computational load compared with the bootstrapping method. An R package has been developed for the product method of sample size determination in longitudinal mediation study design.
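For reference, the Sobel statistic that the simulations compare against is a one-line computation. A minimal sketch with invented path estimates:

    import numpy as np

    def sobel_z(a, se_a, b, se_b):
        # Sobel z for the mediated effect a*b with the first-order
        # normal-theory standard error
        return (a * b) / np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)

    # a: X -> M slope; b: M -> Y slope adjusted for X (made-up numbers)
    print(sobel_z(0.40, 0.10, 0.35, 0.12))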
The Mission Accessible Near-Earth Object Survey (MANOS): Project Status
NASA Astrophysics Data System (ADS)
Moskovitz, Nicholas; Thirouin, Audrey; Mommert, Michael; Thomas, Cristina A.; Skiff, Brian; Polishook, David; Burt, Brian; Trilling, David E.; DeMeo, Francesca E.; Binzel, Richard P.; Christensen, Eric J.; Willman, Mark; Hinkle, Mary
2017-10-01
The Mission Accessible Near-Earth Object Survey (MANOS) is a physical characterization survey of sub-km, low delta-v, newly discovered near-Earth objects (NEOs). MANOS aims to collect astrometry, lightcurve photometry, and reflectance spectra for a representative sample of these important target of opportunity objects in a rarely observed size range. We employ a diverse set of large aperture (2-8 meter) telescopes and observing modes (queue, remote, classical) to overcome the challenge of observing faint NEOs moving at high non-sidereal rates with short observing windows. We target approximately 10% of newly discovered NEOs every month for follow-up characterization.The first generation MANOS ran from late 2013 to early 2017, using telescopes at Lowell Observatory, NOAO, and the University of Hawaii. This resulted in the collection of data for over 500 targets. These data are continuing to provide new insights into the NEO population as a whole as well as for individual objects of interest. Science highlights include identification of the four fastest rotating minor planets found to date with rotation periods under 20 seconds, constraints on the distribution of NEO morphologies as quantified by de-biased estimates for lightcurve-derived axis ratios, and the compositional distribution of NEOs at sizes under 100 meters.The second generation MANOS will begin in late 2017 and will employ much of the same strategies while continuing to build a comprehensive dataset of NEO physical properties. This will grow the MANOS sample to ~1000 objects and provide the means to better address key questions related to understanding the physical properties of NEOs, their viability as exploration mission targets, and their relationship to Main Belt asteroids and meteorites. This continuation of MANOS will include an increased focus on spectroscopic observations at near-IR wavelengths using a new instrument called NIHTS (the Near-Infrared High-Throughput Spectrograph) at Lowell Observatory’s 4.3m Discovery Channel Telescope.We will present key results from the first generation survey and current status and plans for the second generation survey. MANOS is supported by the NASA SSO/NEOO program.
Optimal Inspection of Imports to Prevent Invasive Pest Introduction.
Chen, Cuicui; Epanchin-Niell, Rebecca S; Haight, Robert G
2018-03-01
The United States imports more than 1 billion live plants annually, an important and growing pathway for introduction of damaging nonnative invertebrates and pathogens. Inspection of imports is one safeguard for reducing pest introductions, but capacity constraints limit inspection effort. We develop an optimal sampling strategy to minimize the costs of pest introductions from trade by posing inspection as an acceptance sampling problem that incorporates key features of the decision context, including (i) simultaneous inspection of many heterogeneous lots, (ii) a lot-specific sampling effort, (iii) a budget constraint that limits total inspection effort, (iv) inspection error, and (v) an objective of minimizing cost from accepted defective units. We derive a formula for the expected number of accepted infested units (expected slippage) given lot size, sample size, infestation rate, and detection rate, and we formulate and analyze the inspector's optimization problem of allocating a sampling budget among incoming lots to minimize the cost of slippage. We conduct an empirical analysis of live plant inspection, including estimation of plant infestation rates from historical data, and find that inspections optimally target the largest lots with the highest plant infestation rates, leaving some lots unsampled. We also consider that USDA-APHIS, which administers inspections, may want to continue inspecting all lots at a baseline level; we find that allocating any additional capacity, beyond a comprehensive baseline inspection, to the largest lots with the highest infestation rates allows inspectors to meet the dual goals of minimizing the costs of slippage and maintaining baseline sampling without substantial compromise. © 2017 Society for Risk Analysis.
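The expected-slippage quantity at the heart of the optimization can be illustrated under a simple binomial model. This sketch is not the authors' formula (their derivation handles lot heterogeneity and the budget coupling); it assumes each unit is independently infested, each sampled infested unit is detected with a fixed probability, and a lot is rejected on any detection:

    def expected_slippage(N, n, phi, d):
        # Expected accepted infested units for one lot: P(accept) times the
        # expected infested units among the N-n unsampled units plus the
        # sampled-but-undetected infested units
        p_accept = (1 - phi * d) ** n
        unsampled = (N - n) * phi
        sampled_missed = n * phi * (1 - d) / (1 - phi * d)
        return p_accept * (unsampled + sampled_missed)

    # Bigger samples on high-infestation lots cut slippage fastest
    print(expected_slippage(N=1000, n=30, phi=0.05, d=0.9))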
Public Opinion Polls, Chicken Soup and Sample Size
ERIC Educational Resources Information Center
Nguyen, Phung
2005-01-01
Cooking and tasting chicken soup in three pots of very different sizes serves to demonstrate that it is the absolute sample size that matters most in determining the accuracy of a poll's findings, not the relative sample size, i.e., the size of the sample in relation to its population.
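The chicken-soup point is exactly what the finite-population correction shows: once the population dwarfs the sample, the margin of error depends only on the absolute sample size. A short sketch:

    import math

    def margin_of_error(n, N=None, p=0.5, z=1.96):
        # 95% margin of error for a proportion; the optional
        # finite-population correction barely matters once N >> n
        moe = z * math.sqrt(p * (1 - p) / n)
        if N is not None:
            moe *= math.sqrt((N - n) / (N - 1))
        return moe

    for N in (10_000, 1_000_000, 100_000_000):  # same ladle, different pots
        print(N, round(margin_of_error(1000, N), 4))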
A Bayesian predictive two-stage design for phase II clinical trials.
Sambucini, Valeria
2008-04-15
In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined by specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value, assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of obtaining a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The performance of the design is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied as the design parameters vary.
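The posterior-threshold ingredient shared by the STD and its predictive version is easy to state. A minimal sketch assuming a conjugate Beta prior; the prior parameters and counts are illustrative, and the paper's predictive machinery is not reproduced:

    from scipy.stats import beta

    def post_prob_exceeds(target, responses, n, a=1.0, b=1.0):
        # Posterior P(true response rate > target) with a Beta(a, b) prior
        # and binomial data: the posterior is Beta(a + r, b + n - r)
        return beta.sf(target, a + responses, b + n - responses)

    # E.g. 9 responses in 20 patients against a 30% target rate
    print(post_prob_exceeds(0.30, 9, 20))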
Sgaier, Sema K; Eletskaya, Maria; Engl, Elisabeth; Mugurungi, Owen; Tambatamba, Bushimbwa; Ncube, Gertrude; Xaba, Sinokuthemba; Nanga, Alice; Gogolina, Svetlana; Odawo, Patrick; Gumede-Moyo, Sehlulekile; Kretschmer, Steve
2017-09-13
Public health programs are starting to recognize the need to move beyond a one-size-fits-all approach in demand generation, and instead tailor interventions to the heterogeneity underlying human decision making. Currently, however, there is a lack of methods to enable such targeting. We describe a novel hybrid behavioral-psychographic segmentation approach to segment stakeholders on potential barriers to a target behavior. We then apply the method in a case study of demand generation for voluntary medical male circumcision (VMMC) among 15-29 year-old males in Zambia and Zimbabwe. Canonical correlations and hierarchical clustering techniques were applied on representative samples of men in each country who were differentiated by their underlying reasons for their propensity to get circumcised. We characterized six distinct segments of men in Zimbabwe, and seven segments in Zambia, according to their needs, perceptions, attitudes and behaviors towards VMMC, thus highlighting distinct reasons for a failure to engage in the desired behavior. PMID:28901285
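The clustering step of such a segmentation is standard machinery. A hedged sketch on synthetic barrier scores (Ward linkage, cut to six segments as found in Zimbabwe); the canonical-correlation preprocessing the authors describe is omitted:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Illustrative respondent-by-barrier score matrix (standardized)
    rng = np.random.default_rng(0)
    scores = rng.standard_normal((300, 8))

    Z = linkage(scores, method="ward")
    segments = fcluster(Z, t=6, criterion="maxclust")
    print(np.bincount(segments)[1:])  # segment sizes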
NASA Astrophysics Data System (ADS)
Blancquaert, Yoann; Dezauzier, Christophe; Depre, Jerome; Miqyass, Mohamed; Beltman, Jan
2013-04-01
Continued tightening of the overlay control budget in semiconductor lithography drives the need for improved metrology capabilities. Aggressive improvements are needed in overlay metrology speed, accuracy and precision. This paper deals with the on-product metrology results of a scatterometry-based platform showing excellent production results on resolution, precision, and tool matching for overlay. We will demonstrate point-to-point matching between tool generations as well as between target sizes and types. For the advanced process nodes, a great deal of information (higher-order process corrections, reticle fingerprint, wafer-edge effects) is needed to quantify process overlay. For that purpose various overlay sampling schemes are evaluated: ultra-dense, dense and production type. We will show DBO results from multiple target types and shapes for on-product overlay control for current and future nodes, down to at least the 14 nm node. As overlay requirements drive metrology needs, we will evaluate whether the new metrology platform meets the overlay requirements.
NASA Astrophysics Data System (ADS)
Drmosh, Qasem Ahmed Qasem
Pulsed laser ablation was applied to synthesize ZnO, ZnO2 and SnO2 nanostructures from metallic targets in different liquids. For this purpose, a laser emitting pulsed UV radiation generated by the third harmonic of Nd:YAG (λ = 355 nm) was used. For the synthesis of ZnO nanoparticles (NPs), a high-purity metallic Zn plate was fixed at the bottom of a glass cell filled with deionized water and irradiated at different laser energies (80, 100 and 120 mJ per pulse). The average sizes and lattice parameters of the ZnO produced by this method were estimated by X-ray diffraction (XRD). ZnO nanoparticles were also produced by ablation of a zinc target in deionized water mixed with two types of surfactants: cetyltrimethyl ammonium bromide (CTAB) and octaethylene glycol monododecyl (OGM). The results showed that the average grain size decreased from 38 nm in deionized water to 27 nm in CTAB and 19 nm in OGM. The PL emission in CTAB and OGM showed two peaks: a sharp UV emission at 380 nm and a broad visible peak ranging from 450 nm to 600 nm. Zinc peroxide (ZnO2) nanoparticles with grain sizes below 5 nm were also synthesized, for the first time, by pulsed laser ablation of a solid zinc target in 3% hydrogen peroxide (H2O2) in aqueous solution in the presence of different surfactants. The effect of the surfactants on the optical properties and structure of ZnO2 was studied with different spectroscopic techniques. The presence of the cubic phase of zinc peroxide in all samples was confirmed by XRD, and the grain sizes were 4.7 nm, 3.7 nm, 3.3 nm and 2.8 nm in pure H2O2 and in H2O2 mixed with SDS, CTAB and OGM, respectively. For optical characterization, FTIR transmittance spectra of ZnO2 nanoparticles prepared with and without surfactants showed the characteristic ZnO2 absorption peaks at 435-445 cm⁻¹. The FTIR spectra also revealed that the adsorbed surfactant on zinc peroxide disappeared in the case of CTAB and OGM but remained in the case of SDS. Both FTIR and UV-Vis spectra showed a red shift in the presence of SDS and a blue shift in the presence of CTAB and OGM. The effect of post-annealing temperature on dry ZnO2 nanoparticles prepared by PLA of a solid zinc target in 3% H2O2 was studied by varying the annealing temperature from 100 to 600 °C for 8 hours at 1 atmosphere. XRD showed a phase transition from ZnO2 to ZnO at 200 °C. Based on the XRD data, both the average grain size and the lattice parameters of ZnO increased on post-annealing of ZnO2 above 200 °C. In contrast, the band gap of the ZnO nanoparticles decreased as the annealing temperature increased. The average sizes were 5, 6, 9, 15 and 19 nm at 200, 300, 400, 500 and 600 °C, respectively. The PL emission spectra of ZnO showed strong UV emission peaks in all samples. In addition, the UV emission peaks shifted to longer wavelengths (red shift) as the annealing temperature increased from 200 to 600 °C. From the above findings, we conclude that the grain size, lattice parameters, PL and band gap were size dependent, as predicted by theoretical studies. (Abstract shortened by UMI.)
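XRD-derived average grain sizes such as these are conventionally estimated from peak broadening via the Scherrer equation; the thesis does not spell out its exact procedure, so the sketch below is the textbook version with illustrative numbers (Cu Kα wavelength assumed):

    import numpy as np

    def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
        # Scherrer estimate of crystallite size (nm) from one XRD peak:
        # D = K * lambda / (beta * cos(theta)), beta in radians
        theta = np.radians(two_theta_deg / 2)
        beta = np.radians(fwhm_deg)
        return K * wavelength_nm / (beta * np.cos(theta))

    # Illustrative numbers roughly matching a ZnO (101) reflection
    print(scherrer_size(36.3, 0.45))  # ~19 nm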
Sobel, Kenith V; Puri, Amrita M; Faulkenberry, Thomas J; Dague, Taylor D
2017-03-01
The size congruity effect refers to the interaction between numerical magnitude and physical digit size in a symbolic comparison task. Though this effect is well established in the typical 2-item scenario, the mechanisms at the root of the interference remain unclear. Two competing explanations have emerged in the literature: an early interaction model and a late interaction model. In the present study, we used visual conjunction search to test competing predictions from these 2 models. Participants searched for targets that were defined by a conjunction of physical and numerical size. Some distractors shared the target's physical size, and the remaining distractors shared the target's numerical size. We held the total number of search items fixed and manipulated the ratio of the 2 distractor set sizes. The results from 3 experiments converge on the conclusion that numerical magnitude is not a guiding feature for visual search, and that physical and numerical magnitude are processed independently, which supports a late interaction model of the size congruity effect. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Iwazawa, J; Ohue, S; Hashimoto, N; Mitani, T
2014-02-01
To compare the accuracy of computer software analysis using three different target-definition protocols to detect tumour feeder vessels for transarterial chemoembolization of hepatocellular carcinoma. C-arm computed tomography (CT) data were analysed for 81 tumours from 57 patients who had undergone chemoembolization using software-assisted detection of tumour feeders. Small, medium, and large-sized targets were manually defined for each tumour. The tumour feeder was verified when the target tumour was enhanced on selective C-arm CT of the investigated vessel during chemoembolization. The sensitivity, specificity, and accuracy of the three protocols were evaluated and compared. One hundred and eight feeder vessels supplying 81 lesions were detected. The sensitivity of the small, medium, and large target protocols was 79.8%, 91.7%, and 96.3%, respectively; specificity was 95%, 88%, and 50%, respectively; and accuracy was 87.5%, 89.9%, and 74%, respectively. The sensitivity was significantly higher for the medium (p = 0.003) and large (p < 0.001) target protocols than for the small target protocol. The specificity and accuracy were higher for the small (p < 0.001 and p < 0.001, respectively) and medium (p < 0.001 and p < 0.001, respectively) target protocols than for the large target protocol. The overall accuracy of software-assisted automated feeder analysis in transarterial chemoembolization for hepatocellular carcinoma is affected by the target definition size. A large target definition increases sensitivity and decreases specificity in detecting tumour feeders. A target size equivalent to the tumour size most accurately predicts tumour feeders. Copyright © 2013 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Bottom-up and top-down attentional contributions to the size congruity effect.
Sobel, Kenith V; Puri, Amrita M; Faulkenberry, Thomas J
2016-07-01
The size congruity effect refers to the interaction between the numerical and physical (i.e., font) sizes of digits in a numerical (or physical) magnitude selection task. Although various accounts of the size congruity effect have attributed this interaction to either an early representational stage or a late decision stage, only Risko, Maloney, and Fugelsang (Attention, Perception, & Psychophysics, 75, 1137-1147, 2013) have asserted a central role for attention. In the present study, we used a visual search paradigm to further study the role of attention in the size congruity effect. In Experiments 1 and 2, we showed that manipulating top-down attention (via the task instructions) had a significant impact on the size congruity effect. The interaction between numerical and physical size was larger for numerical size comparison (Exp. 1) than for physical size comparison (Exp. 2). In the remaining experiments, we boosted the feature salience by using a unique target color (Exp. 3) or by increasing the display density by using three-digit numerals (Exps. 4 and 5). As expected, a color singleton target abolished the size congruity effect. Searching for three-digit targets based on numerical size (Exp. 4) resulted in a large size congruity effect, but search based on physical size (Exp. 5) abolished the effect. Our results reveal a substantial role for top-down attention in the size congruity effect, which we interpreted as support for a shared-decision account.
Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie
2013-08-01
The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
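The UCL-of-SD rules recommended above follow from the chi-square sampling distribution of the sample variance, (n-1)s²/σ² ~ χ²(n-1). A minimal sketch:

    import numpy as np
    from scipy.stats import chi2

    def sd_ucl(sample_sd, n, level=0.80):
        # One-sided upper confidence limit for the population SD:
        # sigma <= s * sqrt((n-1) / chi2_quantile(1 - level, n - 1))
        return sample_sd * np.sqrt((n - 1) / chi2.ppf(1 - level, n - 1))

    # A pilot SD of 40 from n = 20 likely understates sigma;
    # planning with its 80% UCL guards against underpowering
    print(sd_ucl(40.0, 20))  # ~47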
Granule size control and targeting in pulsed spray fluid bed granulation.
Ehlers, Henrik; Liu, Anchang; Räikkönen, Heikki; Hatara, Juha; Antikainen, Osmo; Airaksinen, Sari; Heinämäki, Jyrki; Lou, Honxiang; Yliruusi, Jouko
2009-07-30
The primary aim of the study was to investigate the effects of pulsed liquid feed on granule size. The secondary aim was to increase knowledge of this technique in granule size targeting. Pulsed liquid feed refers to the pump changing between on- and off-positions in sequences, called duty cycles. One duty cycle consists of one on- and one off-period. The study was performed with a laboratory-scale top-spray fluid bed granulator with duty cycle length and atomization pressure as the studied variables. The liquid feed rate, amount and inlet air temperature were constant. The granules were small, indicating that the powder had only undergone ordered mixing, nucleation and early growth. The effect of atomizing pressure on granule size depends on inlet air relative humidity, with premature binder evaporation as the reason. The duty cycle length was of critical importance to the end-product attributes, as it defines the extent of intermittent drying and rewetting. By varying only the duty cycle length, it was possible to control granule nucleation and growth, with a wider granule size target range at increased relative humidity. The present study confirms that pulsed liquid feed in fluid bed granulation is a useful tool in end-product particle size targeting.
Multiple pinhole collimator based X-ray luminescence computed tomography
Zhang, Wei; Zhu, Dianwen; Lun, Michael; Li, Changqing
2016-01-01
X-ray luminescence computed tomography (XLCT) is an emerging hybrid imaging modality, which is able to improve the spatial resolution of optical imaging to hundreds of micrometers for deep targets by using superfine X-ray pencil beams. However, due to the low X-ray photon utilization efficiency of a single pinhole collimator based XLCT, it takes a long time to acquire measurement data. Herein, we propose a multiple pinhole collimator based XLCT, in which multiple X-ray beams are generated to scan a sample at multiple positions simultaneously. Compared with the single pinhole based XLCT, the multiple X-ray beam scanning method requires much less measurement time. Numerical simulations and phantom experiments have been performed to demonstrate the feasibility of the multiple X-ray beam scanning method. In one numerical simulation, we used four X-ray beams to scan a cylindrical object with 6 deeply embedded targets. With measurements from 6 angular projections, all 6 targets were reconstructed successfully. In the phantom experiment, we generated two X-ray pencil beams with a collimator manufactured in-house. Two capillary targets with 0.6 mm edge-to-edge distance embedded in a cylindrical phantom were reconstructed successfully. With this two-beam scanning, we reduced the data acquisition time by 50%. From the reconstructed XLCT images, we found that the Dice similarity of the targets is 85.11% and the distance error between the two targets is less than 3%. We measured the radiation dose during the XLCT scan and found that the dose, 1.475 mSv, is in the range of a typical CT scan. We also measured the changes in the collimated X-ray beam size and intensity at different distances from the collimator, and studied the effects of beam size and intensity on the XLCT reconstruction. PMID:27446686
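The Dice similarity reported for the reconstructions is a standard overlap measure; a generic sketch (not the authors' reconstruction code):

    import numpy as np

    def dice(mask_a, mask_b):
        # Dice similarity of two binary target masks: 2|A&B| / (|A| + |B|)
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    truth = np.zeros((64, 64), bool); truth[20:30, 20:30] = True
    recon = np.zeros((64, 64), bool); recon[21:31, 21:31] = True
    print(dice(truth, recon))  # overlap quality on a toy pair of masks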
Reweighting anthropometric data using a nearest neighbour approach.
Kumar, Kannan Anil; Parkinson, Matthew B
2018-07-01
When designing products and environments, detailed data on body size and shape are seldom available for the specific user population. One way to mitigate this issue is to reweight available data such that they provide an accurate estimate of the target population of interest. This is done by assigning a statistical weight to each individual in the reference data, increasing or decreasing their influence on statistical models of the whole. This paper presents a new approach to reweighting these data. Instead of stratified sampling, the proposed method uses a clustering algorithm to identify relationships between the detailed and reference populations using their height, mass, and body mass index (BMI). The newly weighted data are shown to provide more accurate estimates than traditional approaches. The improved accuracy that accompanies this method provides designers with an alternative to data synthesis techniques as they seek appropriate data to guide their design practice. Practitioner Summary: Design practice is best guided by data on body size and shape that accurately represents the target user population. This research presents an alternative to data synthesis (e.g. regression or proportionality constants) for adapting data from one population for use in modelling another.
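A nearest-neighbour reweighting can be sketched in a few lines. This illustrates the general idea under invented, standardized features; it is not the paper's clustering-based algorithm:

    import numpy as np
    from scipy.spatial import cKDTree

    def nn_weights(reference, target):
        # Weight each reference individual by the number of target
        # individuals whose nearest reference neighbour it is (features:
        # e.g. standardized height, mass, BMI), normalized to mean 1
        tree = cKDTree(reference)
        _, idx = tree.query(target, k=1)
        w = np.bincount(idx, minlength=len(reference)).astype(float)
        return w * len(reference) / w.sum()

    ref = np.random.default_rng(1).standard_normal((500, 3))
    tgt = np.random.default_rng(2).standard_normal((2000, 3)) * [1.1, 1.2, 1.2]
    print(nn_weights(ref, tgt)[:5])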
Photonic Low Cost Micro-Sensor for in-Line Wear Particle Detection in Flowing Lube Oils.
Mabe, Jon; Zubia, Joseba; Gorritxategi, Eneko
2017-03-14
The presence of microscopic particles in suspension in industrial fluids is often an early warning of latent or imminent failures in the equipment or processes where they are being used. This manuscript describes work undertaken to integrate different photonic principles with a micro-mechanical fluidic structure and an embedded processor to develop a fully autonomous wear debris sensor for in-line monitoring of industrial fluids. Lens-less microscopy, stroboscopic illumination, a CMOS imager and embedded machine vision technologies have been merged to develop a sensor solution that is able to detect and quantify the number and size of micrometric particles suspended in a continuous flow of a fluid. A laboratory test-bench has been arranged for setting up the configuration of the optical components targeting a static oil sample, and a sensor prototype has then been developed for migrating the measurement principles to real conditions in terms of operating pressure and flow rate of the oil. Imaging performance is quantified using micro-calibrated samples, as well as by measuring real used lubricated oils. Sampling a large fluid volume with a good 2D spatial resolution, this photonic micro-sensor offers a powerful tool of very low cost and compact size for in-line wear debris monitoring. PMID:28335436
Jiao, Jian; Fan, Yu; Zhang, Yan
2015-10-01
To measure levels of microRNA (miR)-21 and its target gene, programmed cell death 4 (PDCD4), in samples of human cutaneous malignant melanoma and normal non-malignant control skin. Relative levels of miR-21 and PDCD4 mRNA were measured using a quantitative real-time reverse transcription-polymerase chain reaction. Correlations between the levels of the two molecules and the clinicopathological characteristics of malignant melanoma were analysed. A total of 67 cases of human cutaneous malignant melanoma were analysed and compared with 67 samples of normal non-malignant control skin. Compared with the normal skin samples, the relative level of miR-21 was significantly higher and the relative level of PDCD4 mRNA was significantly lower in the melanoma specimens. A significant negative correlation between PDCD4 mRNA and miR-21 was demonstrated in malignant melanoma (r = -0.602). Elevated miR-21 and reduced PDCD4 mRNA levels were both significantly correlated with increased tumour size, a higher Clark classification level and the presence of lymph node metastases in malignant melanoma. These findings suggest that miR-21 and PDCD4 might be potential biomarkers for malignant melanoma and might provide treatment targets in the future. © The Author(s) 2015.
Photovoltaic Enhancement with Ferroelectric HfO2 Embedded in the Structure of Solar Cells
NASA Astrophysics Data System (ADS)
Eskandari, Rahmatollah; Malkinski, Leszek
Enhancing the total efficiency of solar cells focuses on improving one or all of the three main stages of the photovoltaic effect: absorption of the light, generation of the carriers, and finally separation of the carriers. Ferroelectric photovoltaic designs target the last stage with large electric forces from polarized ferroelectric films that can be larger than the band gap of the material and the built-in electric fields in semiconductor bipolar junctions. In this project we have fabricated very thin ferroelectric HfO2 films (~10 nm) doped with silicon using the RF sputtering method. The doped HfO2 films were capped between two TiN layers (~20 nm) and annealed at temperatures of 800 °C and 1000 °C, and the Si content was varied between 6-10 mol% using different sizes of Si chips mounted on the hafnium target. Piezoforce microscopy (PFM) proved clear ferroelectric properties in samples with 6 mol% of Si that were annealed at 800 °C. Ferroelectric samples were poled in opposite directions and embedded in the structure of a cell, and an enhancement in photovoltaic properties was observed in the poled samples vs. the unpoled ones with KPFM and I-V measurements. The current work is funded by the NSF EPSCoR LA-SiGMA project under award #EPS-1003897.
NASA Astrophysics Data System (ADS)
Taheri, H.; Koester, L.; Bigelow, T.; Bond, L. J.
2018-04-01
Industrial applications of additively manufactured components are increasing quickly. Adequate quality control of the parts is necessary to ensure safety when using these materials. Base material properties, surface conditions, as well as the location and size of defects are some of the main targets for nondestructive evaluation of additively manufactured parts, and the problem of adequate characterization is compounded given the challenges of complex part geometry. Numerical modeling can allow the interplay of the various factors to be studied, which can lead to improved measurement design. This paper presents a finite element simulation, verified by experimental results, of ultrasonic waves scattering from flat bottom holes (FBH) in additive manufacturing materials. A focused-beam immersion ultrasound transducer was used for both the measurements and the simulations on the additively manufactured samples. The samples were 17-4 PH stainless steel made by laser sintering in a powder bed.
A descriptive study of sexual homicide in Canada: implications for police investigation.
Beauregard, Eric; Martineau, Melissa
2013-12-01
Few empirical studies have been conducted that examine the phenomenon of sexual homicide, and among these studies, many have been limited by small sample size. Although interesting and informative, these studies may not be representative of the greater phenomenon of sexual murder and may be subject to sampling bias that could have significant effects on results. The current study aims to provide a descriptive analysis of the largest sample of sexual homicide cases across Canada in the past 62 years. In doing so, the study aims to examine offender and victim characteristics, victim targeting and access, and modus operandi. Findings show that cases of sexual homicide and sexual murderers included in the current study differ in many aspects from the portrait of the sexual murderer and his or her crime depicted in previous studies. The authors' results may prove useful to the police officers responsible for the investigation of these crimes.
Lin, Run; Li, Yuancheng; MacDonald, Tobey; Wu, Hui; Provenzale, James; Peng, Xingui; Huang, Jing; Wang, Liya; Wang, Andrew Y; Yang, Jianyong; Mao, Hui
2017-02-01
Detecting circulating tumor cells (CTCs) with high sensitivity and specificity is critical to the management of metastatic cancers. Although immuno-magnetic technology for in vitro detection of CTCs has shown promising potential for clinical applications, the biofouling effect, i.e., non-specific adhesion of biomolecules and non-cancerous cells in complex biological samples to the surface of a device/probe, can reduce the sensitivity and specificity of cell detection. Reported herein is the application of anti-biofouling polyethylene glycol-block-allyl glycidyl ether copolymer (PEG-b-AGE) coated iron oxide nanoparticles (IONPs) to improve the separation of targeted tumor cells from the aqueous phase in an external magnetic field. PEG-b-AGE coated IONPs conjugated with transferrin (Tf) exhibited significant anti-biofouling properties against non-specific protein adsorption and off-target cell uptake, thus substantially enhancing the ability to target and separate transferrin receptor (TfR) over-expressed D556 medulloblastoma cells. Tf conjugated PEG-b-AGE coated IONPs exhibited a high capture rate of targeted tumor cells (D556 medulloblastoma cells) in cell media (58.7±6.4%) when separating 100 targeted tumor cells from 1×10⁵ non-targeted cells, and captured 41 targeted tumor cells out of 100 D556 medulloblastoma cells spiked into 1 mL blood. It is demonstrated that the developed nanoparticles have a higher efficiency in capturing targeted cells than widely used micron-sized particles (i.e., Dynabeads®). Copyright © 2016 Elsevier B.V. All rights reserved.
Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.
Rochon, K; Scoles, G A; Lysyk, T J
2012-03-01
A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles), based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10-m² quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m² indicated that the most precise sample unit was the 10-m² quadrat. Samples taken when abundance < 0.04 ticks per 10 m² were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m², while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the ten abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
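For reference, the fixed-precision sample size implied by Taylor's power law (variance s² = a·m^b) is n = a·m^(b-2)/D², where D is the target ratio of standard error to mean. A Python sketch with illustrative coefficients (not the values fitted in the paper):

    import numpy as np

    def taylor_sample_size(mean_density, a, b, precision=0.10):
        """Quadrats needed for a fixed relative precision D = SE/mean,
        given Taylor's power law variance s^2 = a * m^b."""
        return a * mean_density ** (b - 2.0) / precision ** 2

    # Illustrative coefficients only -- not the values fitted in the paper.
    for m in (0.02, 0.1, 0.5):
        n = taylor_sample_size(m, a=2.0, b=1.3, precision=0.25)
        print(f"mean = {m:4} ticks/10 m^2  ->  n = {np.ceil(n):.0f} quadrats")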
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
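A toy illustration of the two rules, assuming a hypothetical cost function with a fixed start-up cost, a constant per-subject cost, and a quadratic term standing in for recruitment becoming harder as n grows (all numbers invented, not from the paper):

    import numpy as np

    # Hypothetical cost model: fixed start-up cost f, per-subject cost v,
    # and a quadratic term w*n^2 for increasingly difficult recruitment.
    f, v, w = 100_000.0, 400.0, 0.5
    n = np.arange(1, 5001, dtype=float)
    cost = f + v * n + w * n**2

    rule1 = n[np.argmin(cost / n)]           # minimize average cost per subject
    rule2 = n[np.argmin(cost / np.sqrt(n))]  # minimize total cost / sqrt(n)
    print(f"rule 1: n = {rule1:.0f}   rule 2: n = {rule2:.0f}")
    # Analytically, rule 1 gives n = sqrt(f/w), about 447 with these numbers.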
RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.
Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu
2018-05-30
One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests, and the widely distributed read counts and dispersions of different genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as The Cancer Genome Atlas (TCGA), can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphic interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to perform power and sample size estimation for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
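A crude Monte Carlo stand-in for the underlying idea (not the package's algorithm): simulate one gene's counts under a negative binomial model with a given mean and dispersion, and estimate power with a simple test on log counts. All parameter values are invented:

    import numpy as np
    from scipy import stats

    def nb_power_sim(mu, dispersion, fold_change, n_per_group,
                     alpha=0.05, n_sim=2000, seed=1):
        """Crude Monte Carlo power for a single gene under a negative
        binomial model (variance = mu + dispersion * mu^2); a t-test on
        log counts stands in for a dedicated NB test."""
        rng = np.random.default_rng(seed)

        def draw(mean):
            nb_n = 1.0 / dispersion              # NB "size" parameter
            p = nb_n / (nb_n + mean)
            return rng.negative_binomial(nb_n, p, size=(n_sim, n_per_group))

        a = np.log2(draw(mu) + 1.0)
        b = np.log2(draw(mu * fold_change) + 1.0)
        _, pvals = stats.ttest_ind(a, b, axis=1)
        return (pvals < alpha).mean()    # fraction of simulations rejecting

    print(nb_power_sim(mu=100, dispersion=0.2, fold_change=2.0, n_per_group=5))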
Attentional priorities and access to short-term memory: parietal interactions.
Gillebert, Céline R; Dyrholm, Mads; Vangkilde, Signe; Kyllingsbæk, Søren; Peeters, Ronald; Vandenberghe, Rik
2012-09-01
The intraparietal sulcus (IPS) has been implicated in selective attention as well as visual short-term memory (VSTM). To contrast mechanisms of target selection, distracter filtering, and access to VSTM, we combined behavioral testing, computational modeling and functional magnetic resonance imaging. Sixteen healthy subjects participated in a change detection task in which we manipulated both target and distracter set sizes. We directly compared the IPS response as a function of the number of targets and distracters in the display and in VSTM. When distracters were not present, the posterior and middle segments of IPS showed the predicted asymptotic activity increase with an increasing target set size. When distracters were added to a single target, activity also increased as predicted. However, the addition of distracters to multiple targets suppressed both middle and posterior IPS activities, thereby displaying a significant interaction between the two factors. The interaction between target and distracter set size in IPS could not be accounted for by a simple explanation in terms of number of items accessing VSTM. Instead, it led us to a model where items accessing VSTM receive differential weights depending on their behavioral relevance, and secondly, a suppressive effect originates during the selection phase when multiple targets and multiple distracters are simultaneously present. The reverse interaction between target and distracter set size was significant in the right temporoparietal junction (TPJ), where activity was highest for a single target compared to any other condition. Our study reconciles the role of middle IPS in attentional selection and biased competition with its role in VSTM access. Copyright © 2012 Elsevier Inc. All rights reserved.
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved for the inner cell proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method by Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all the other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
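For reference, the asymptotic unconditional McNemar sample size depends only on the two discordant cell probabilities p10 and p01. A Python sketch of the standard formula (the worked numbers are illustrative, not from the paper):

    from scipy.stats import norm

    def mcnemar_sample_size(p10, p01, alpha=0.05, power=0.80):
        """Asymptotic unconditional McNemar sample size (number of pairs),
        given the two discordant cell probabilities of the 2x2 table."""
        pd = p10 + p01                      # total discordant probability
        diff = p10 - p01
        za = norm.ppf(1 - alpha / 2)
        zb = norm.ppf(power)
        n = (za * pd**0.5 + zb * (pd - diff**2) ** 0.5) ** 2 / diff**2
        return int(n) + 1

    print(mcnemar_sample_size(0.20, 0.10))   # about 234 pairs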
Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.
McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M
2015-03-01
Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.
The contents of visual working memory reduce uncertainty during visual search.
Cosman, Joshua D; Vecera, Shaun P
2011-05-01
Information held in visual working memory (VWM) influences the allocation of attention during visual search, with targets matching the contents of VWM receiving processing benefits over those that do not. Such an effect could arise from multiple mechanisms: First, it is possible that the contents of working memory enhance the perceptual representation of the target. Alternatively, it is possible that when a target is presented among distractor items, the contents of working memory operate postperceptually to reduce uncertainty about the location of the target. In both cases, a match between the contents of VWM and the target should lead to facilitated processing. However, each effect makes distinct predictions regarding set-size manipulations; whereas perceptual enhancement accounts predict processing benefits regardless of set size, uncertainty reduction accounts predict benefits only with set sizes larger than 1, when there is uncertainty regarding the target location. In the present study, in which briefly presented, masked targets were presented in isolation, there was a negligible effect of the information held in VWM on target discrimination. However, in displays containing multiple masked items, information held in VWM strongly affected target discrimination. These results argue that working memory representations act at a postperceptual level to reduce uncertainty during visual search.
Electrophysiological evidence for size invariance in masked picture repetition priming
Eddy, Marianna D.; Holcomb, Phillip J.
2009-01-01
This experiment examined invariance in object representations by measuring event-related potentials (ERPs) to pictures in a masked repetition priming paradigm. Pairs of pictures were presented where the prime was either the same size or half the size of the target object, and the target was either presented in a normal orientation or was a normal-sized mirror reflection of the prime object. Previous masked repetition priming studies have found a cascade of priming effects sensitive to perceptual (N190/P190) and semantic (N400) properties of the stimulus. This experiment found that both early (N190/P190) and later effects (N400) were invariant to size, whereas only the N190/P190 effect was invariant to mirror reflection. The combination of a small prime and a mirror-reflected target led to no significant priming effects. Taken together, the results of this set of experiments suggest that object recognition, more specifically, activating an object representation, occurs in a hierarchical fashion where overlapping perceptual information between the prime and target is necessary, although not always sufficient, to activate a higher-level semantic representation. PMID:19560248
Bauer, Gerald; Neouze, Marie-Alexandra; Limbeck, Andreas
2013-01-15
A novel sample pre-treatment method for multi-trace-element enrichment from environmental waters prior to optical emission spectrometry analysis with inductively coupled plasma (ICP-OES) is proposed, based on dispersed particle extraction (DPE). The method is based on the use of silica nanoparticles functionalized with strong cation exchange ligands. After separation from the investigated sample solution, the nanoparticles used for the extraction are directly introduced into the ICP for measurement of the adsorbed target analytes. A prerequisite for the successful application of the developed slurry approach is the use of sorbent particles with a mean size of 500 nm instead of commercially available μm-sized beads. The proposed method offers the known advantages of common bead-injection (BI) techniques, and further circumvents the elution step required in conventional solid phase extraction procedures. With the use of a 14.4 mL sample and the addition of ammonium acetate buffer and particle slurry, limits of detection (LODs) from 0.03 μg L⁻¹ for Be to 0.48 μg L⁻¹ for Fe, relative standard deviations ranging from 1.7% for Fe to 5.5% for Cr, and an average enrichment factor of 10.4 could be achieved. By implementing this method, the possibility to access sorbent materials with irreversible bonding mechanisms for sample pre-treatment is established; thus, improvements in the selectivity of sample pre-treatment procedures can be achieved. The presented procedure was tested for accuracy with NIST standard reference material 1643e (fresh water) and was applied to drinking water samples from the vicinity of Vienna. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Prasad, S.; Bruce, L. M.
2007-04-01
There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach is to take multiple, independent observations of a phenomenon and perform a feature level or a decision level fusion for ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data when only a small number of training pixels are available. Conventional hyperspectral image based ATR techniques project the high dimensional reflectance signature onto a lower dimensional subspace using techniques such as Principal Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA and stepwise LDA. While some of these techniques attempt to solve the curse of dimensionality, or small sample size problem, these are not necessarily optimal projections. In this paper, we present a divide and conquer approach to address the small sample size problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information within each subspace is maximized, and the statistical dependence between subspaces is minimized. We then treat each subspace as a separate source in a multi-source multi-classifier setup and test various decision fusion schemes to determine their efficacy. Unlike previous approaches which use correlation between variables for band grouping, we study the efficacy of higher order statistical information (using average mutual information) for a bottom up band grouping. We also propose a confidence measure based decision fusion technique, where the weights associated with various classifiers are based on their confidence in recognizing the training data. To this end, training accuracies of all classifiers are used for weight assignment in the fusion process of test pixels. The proposed methods are tested using hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target recognition accuracies.
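A simplified sketch of the fusion scheme described here, with equal-width contiguous band groups standing in for the mutual-information grouping, and scikit-learn's LDA as the per-group classifier; data and parameters are invented:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def fuse_by_confidence(X_train, y_train, X_test, n_groups=4):
        """Split contiguous bands into groups, train one classifier per group,
        and fuse posteriors weighted by each classifier's training accuracy."""
        groups = np.array_split(np.arange(X_train.shape[1]), n_groups)
        fused = 0.0
        for g in groups:
            clf = LinearDiscriminantAnalysis().fit(X_train[:, g], y_train)
            weight = clf.score(X_train[:, g], y_train)  # confidence measure
            fused = fused + weight * clf.predict_proba(X_test[:, g])
        return clf.classes_[fused.argmax(axis=1)]

    # Toy data: 60 training pixels, 120 bands, 3 classes.
    rng = np.random.default_rng(2)
    y_tr = rng.integers(0, 3, 60)
    X_tr = rng.normal(0, 1, (60, 120)) + y_tr[:, None] * 0.3
    y_te = rng.integers(0, 3, 40)
    X_te = rng.normal(0, 1, (40, 120)) + y_te[:, None] * 0.3
    print("accuracy:", (fuse_by_confidence(X_tr, y_tr, X_te) == y_te).mean())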
Medvedovici, Andrei; Udrescu, Stefan; Albu, Florin; Tache, Florentin; David, Victor
2011-09-01
Liquid-liquid extraction of target compounds from biological matrices followed by the injection of a large volume from the organic layer into a chromatographic column operated under reversed-phase (RP) conditions would successfully combine the selectivity and the straightforward character of the procedure in order to enhance sensitivity, compared with the usual approach involving solvent evaporation and residue re-dissolution. Large-volume injection of samples in diluents that are not miscible with the mobile phase was recently introduced into chromatographic practice. The risk of random errors produced during the manipulation of samples is also substantially reduced. A bioanalytical method designed for the bioequivalence of fenspiride-containing pharmaceutical formulations was based on a sample preparation procedure involving extraction of the target analyte and the internal standard (trimetazidine) from alkalinized plasma samples into 1-octanol. A volume of 75 µl from the octanol layer was directly injected onto a Zorbax SB C18 Rapid Resolution column (50 mm length × 4.6 mm internal diameter × 1.8 µm particle size), with the RP separation being carried out under gradient elution conditions. Detection was performed through positive ESI and MS/MS. Aspects related to method development and validation are discussed. The bioanalytical method was successfully applied to assess the bioequivalence of a modified-release pharmaceutical formulation containing 80 mg fenspiride hydrochloride during two different studies carried out as single-dose administration under fasting and fed conditions (four arms), and multiple-dose administration, respectively. The quality attributes assigned to the bioanalytical method, as resulting from its application to the bioequivalence studies, are highlighted and fully demonstrate that sample preparation based on large-volume injection of immiscible diluents has an increased potential for application in bioanalysis.
THE zCOSMOS-SINFONI PROJECT. I. SAMPLE SELECTION AND NATURAL-SEEING OBSERVATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mancini, C.; Renzini, A.; Foerster Schreiber, N. M.
2011-12-10
The zCOSMOS-SINFONI project is aimed at studying the physical and kinematical properties of a sample of massive z ≈ 1.4-2.5 star-forming galaxies, through SINFONI near-infrared integral field spectroscopy (IFS), combined with the multiwavelength information from the zCOSMOS (COSMOS) survey. The project is based on one hour of natural-seeing observations per target, and adaptive optics (AO) follow-up for a major part of the sample, which includes 30 galaxies selected from the zCOSMOS/VIMOS spectroscopic survey. This first paper presents the sample selection, and the global physical characterization of the target galaxies from multicolor photometry, i.e., star formation rate (SFR), stellar mass, age, etc. The Hα integrated properties, such as flux, velocity dispersion, and size, are derived from the natural-seeing observations, while the follow-up AO observations will be presented in the next paper of this series. Our sample appears to be well representative of star-forming galaxies at z ≈ 2, covering a wide range in mass and SFR. The Hα integrated properties of the 25 Hα-detected galaxies are similar to those of other IFS samples at the same redshifts. Good agreement is found among the SFRs derived from Hα luminosity and other diagnostic methods, provided the extinction affecting the Hα luminosity is about twice that affecting the continuum. A preliminary kinematic analysis, based on the maximum observed velocity difference across the source and on the integrated velocity dispersion, indicates that the sample splits nearly 50-50 into rotation-dominated and velocity-dispersion-dominated galaxies, in good agreement with previous surveys.
Spatial coding of object typical size: evidence for a SNARC-like effect.
Sellaro, Roberta; Treccani, Barbara; Job, Remo; Cubelli, Roberto
2015-11-01
The present study aimed to assess whether the representation of the typical size of objects can interact with response position codes in two-choice bimanual tasks, and give rise to a SNARC-like effect (faster responses when the representation of the typical size of the object to which the target stimulus refers corresponds to response side). Participants performed either a magnitude comparison task (in which they were required to judge whether the target was smaller or larger than a reference stimulus; Experiment 1) or a semantic decision task (in which they had to classify the target as belonging to either the category of living or non-living entities; Experiment 2). Target stimuli were pictures or written words referring to either typically large and small animals or inanimate objects. In both tasks, participants responded by pressing a left- or right-side button. Results showed that, regardless of the to-be-performed task (magnitude comparison or semantic decision) and stimulus format (picture or word), left responses were faster when the target represented typically small-sized entities, whereas right responses were faster for typically large-sized entities. These results provide evidence that the information about the typical size of objects is activated even if it is not requested by the task, and are consistent with the idea that objects' typical size is automatically spatially coded, as has been proposed to occur for number magnitudes. In this representation, small objects would be on the left and large objects would be on the right. Alternative interpretations of these results are also discussed.
Allocentric information is used for memory-guided reaching in depth: A virtual reality study.
Klinghammer, Mathias; Schütz, Immo; Blohm, Gunnar; Fiehler, Katja
2016-12-01
Previous research has demonstrated that humans use allocentric information when reaching to remembered visual targets, but most of the studies are limited to 2D space. Here, we study allocentric coding of memorized reach targets in 3D virtual reality. In particular, we investigated the use of allocentric information for memory-guided reaching in depth and the role of binocular and monocular (object size) depth cues for coding object locations in 3D space. To this end, we presented a scene with objects on a table which were located at different distances from the observer and served as reach targets or allocentric cues. After free visual exploration of this scene and a short delay the scene reappeared, but with one object missing (=reach target). In addition, the remaining objects were shifted horizontally or in depth. When objects were shifted in depth, we also independently manipulated object size by either magnifying or reducing their size. After the scene vanished, participants reached to the remembered target location on the blank table. Reaching endpoints deviated systematically in the direction of object shifts, similar to our previous results from 2D presentations. This deviation was stronger for object shifts in depth than in the horizontal plane and independent of observer-target-distance. Reaching endpoints systematically varied with changes in object size. Our results suggest that allocentric information is used for coding targets for memory-guided reaching in depth. Thereby, retinal disparity and vergence as well as object size provide important binocular and monocular depth cues. Copyright © 2016 Elsevier Ltd. All rights reserved.
Biopsy of Liver Target Lesions under Contrast-Enhanced Ultrasound Guidance - A Multi-Center Study.
Francica, Giampiero; Meloni, Maria Franca; de Sio, Ilario; Terracciano, Fulvia; Caturelli, Eugenio; Riccardi, Laura; Roselli, Paola; Iadevaia, Maddalena Diana; Scaglione, Mariano; Lenna, Giovanni; Chiang, Jason; Pompili, Maurizio
2017-12-12
Purpose To retrospectively characterize the prevalence and impact of contrast-enhanced ultrasound (CEUS) as a guidance technique for the biopsy of liver target lesions (LTLs) at six interventional ultrasound centers. Materials and Methods The six participating centers retrospectively selected all patients in whom biopsy needles were positioned in LTLs during CEUS. The prevalence of CEUS-guided biopsies at each center between 2005 and 2016, contrast agent consumption, procedure indications, diagnostic yield and complications were assessed. Informed consent was obtained for all patients. Results CEUS-guided biopsy of LTLs was carried out in 103 patients (68 M/35 F, median age: 69 yrs) with 103 liver target lesions (median size: 20 mm) using cutting needles (18-20 gauge) in 94 cases (91.2%). CEUS-guided biopsy represented 2.6% (range: 0.8-7.7%) of the 3818 biopsies on LTLs carried out at the participating centers. Indications for CEUS-guided biopsy were: a target lesion not visible on non-enhanced US (27.2%), improvement of conspicuity of the target (33%), and choice of a non-necrotic area inside the target (39.8%). 26 patients (25.2%) had a previously non-diagnostic cyto-histological exam. The diagnostic accuracy of the technique was 99%. No major complications followed infusion of contrast agent or biopsy performance. Conclusion The indications for CEUS-guided biopsy for LTLs are limited, but CEUS can be useful in challenging clinical scenarios, e.g., poorly visualized or invisible lesions or sampling of non-necrotic areas in the target lesions. There is also a potential advantage in using CEUS to guide repeat biopsies after unsuccessful sampling performed using the standard ultrasound technique. © Georg Thieme Verlag KG Stuttgart · New York.
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
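A toy numerical version of this trade-off, using a prior on the treatment effect (the sponsor's Bayesian view), a frequentist z-test for approval (the regulator's view), and gain accruing only to the N - 2n patients outside the trial; all numbers are illustrative and this is not the paper's derivation:

    import numpy as np
    from scipy.stats import norm

    def optimal_n(N, alpha=0.025, cost=0.1, seed=3):
        """Grid-search the per-arm size n maximizing expected utility:
        per-patient gain equals the (prior-drawn) true effect, earned by
        the N - 2n patients outside the trial if the one-sided z-test
        rejects; each trial subject costs `cost`."""
        rng = np.random.default_rng(seed)
        delta = rng.normal(0.2, 0.2, size=5000)       # prior on the effect
        n = np.unique(np.linspace(2, N // 2, 400, dtype=int))
        z = np.sqrt(n[:, None] / 2.0) * delta[None, :] - norm.ppf(1 - alpha)
        gain = (delta[None, :] * norm.cdf(z)).mean(axis=1)  # E[effect * power]
        utility = (N - 2 * n) * gain - cost * 2 * n
        return n[utility.argmax()]

    for N in (1_000, 10_000, 100_000):
        print(N, optimal_n(N))   # optimal n grows with N, roughly like sqrt(N)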
Price promotions for food and beverage products in a nationwide sample of food stores.
Powell, Lisa M; Kumanyika, Shiriki K; Isgor, Zeynep; Rimkus, Leah; Zenk, Shannon N; Chaloupka, Frank J
2016-05-01
Food and beverage price promotions may be potential targets for public health initiatives but have not been well documented. We assessed prevalence and patterns of price promotions for food and beverage products in a nationwide sample of food stores by store type, product package size, and product healthfulness. We also assessed associations of price promotions with community characteristics and product prices. In-store data collected in 2010-2012 from 8959 food stores in 468 communities spanning 46 U.S. states were used. Differences in the prevalence of price promotions were tested across stores types, product varieties, and product package sizes. Multivariable regression analyses examined associations of presence of price promotions with community racial/ethnic and socioeconomic characteristics and with product prices. The prevalence of price promotions across all 44 products sampled was, on average, 13.4% in supermarkets (ranging from 9.1% for fresh fruits and vegetables to 18.2% for sugar-sweetened beverages), 4.5% in grocery stores (ranging from 2.5% for milk to 6.6% for breads and cereals), and 2.6% in limited service stores (ranging from 1.2% for fresh fruits and vegetables to 4.1% for breads and cereals). No differences were observed by community characteristics. Less-healthy versus more-healthy product varieties and larger versus smaller product package sizes generally had a higher prevalence of price promotion, particularly in supermarkets. On average, in supermarkets, price promotions were associated with 15.2% lower prices. The observed patterns of price promotions warrant more attention in public health food environment research and intervention. Copyright © 2016 Elsevier Inc. All rights reserved.
He, Hongying; Cai, Chunyan; Charnsangavej, Chusilp; Theriault, Richard L; Green, Marjorie; Quraishi, Mohammad A; Yang, Wei T
2015-11-01
To evaluate change in size vs computed tomography (CT) density of hepatic metastases in breast cancer patients before and after cytotoxic chemotherapy or targeted therapy. A database search in a single institution identified 48 breast cancer patients who had hepatic metastases treated with either cytotoxic chemotherapy alone or targeted therapy alone, and who had contrast-enhanced CT (CECT) scans of the abdomen at baseline and within 4 months of initiation of therapy in the past 10 years. Two radiologists retrospectively evaluated the CT scans and identified up to 2 index lesions in each patient. The size (centimeters) of each lesion was measured according to Response Evaluation Criteria in Solid Tumors (RECIST) criteria, and CT density (Hounsfield units) was measured by drawing a region of interest around the margin of the entire lesion. The percent change in sum of lesion size and mean CT density on pre- and post-treatment scans was computed for each patient; results were compared within each treatment group. Thirty-nine patients with 68 lesions received cytotoxic chemotherapy only; 9 patients with 15 lesions received targeted therapy only. The mean percent changes in sum of lesion size and mean CT density were statistically significant within the cytotoxic chemotherapy group before and after treatment, but not significant in the targeted therapy group. The patients in the targeted therapy group tended to have better 2-year survival. The patients who survived at 2 years tended to show a greater decrease in tumour size in the cytotoxic chemotherapy group. Cytotoxic chemotherapy produced a significant mean percent decrease in tumour size and mean CT density of hepatic metastases from breast cancer before and after treatment, whereas targeted therapy did not. Nonetheless, there is a trend that the patients in the targeted therapy group had a better 2-year survival rate. This suggests that RECIST is potentially inadequate for evaluating tumour response in breast cancer liver metastases treated with targeted therapy alone, calling for an alternative marker for response evaluation in this subset of patients. Copyright © 2015 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.
Md Yusof, Md Yuzaiful; Vital, Edward M J; Emery, Paul
2013-08-01
B cells play a central role in the pathogenesis of systemic lupus erythematosus and anti-neutrophil cytoplasmic antibody-associated vasculitis. There are various strategies for targeting B cells including depletion, inhibition of survival factors, activation and inhibition of co-stimulatory molecules. Controlled trials in systemic lupus erythematosus have shown positive results for belimumab, promising results for epratuzumab and negative results for rituximab. The failure of rituximab in controlled trials has been attributed to trial design, sample size and outcome measures rather than true inefficacy. In anti-neutrophil cytoplasmic antibody-associated vasculitis, rituximab is effective for remission induction and in relapsing disease. However, the optimal long-term re-treatment strategy remains to be determined. Over the next 5 years, evidence will be available regarding the clinical efficacy of these novel therapies, biomarkers and their long-term safety.
Transgender HIV prevention: implementation and evaluation of a workshop.
Bockting, W O; Rosser, B R; Scheltema, K
1999-04-01
Virtually no HIV prevention education has specifically targeted the transgender community. To fill this void, a transgender HIV prevention workshop was developed, implemented and evaluated. A 4 h workshop, grounded in the Health Belief Model and the Eroticizing Safer Sex approach, combined lectures, videos, a panel, discussion, roleplay and exercises. Evaluation using a pre-, post- and follow-up test design showed an increase in knowledge and an initial increase in positive attitudes that diminished over time. Due to the small sample size (N = 59) and limited frequency of risk behavior, a significant decrease in unsafe sexual or needle practices could not be demonstrated. However, findings suggested an increase in safer sexual behaviors such as (mutual) masturbation. Peer support improved significantly. Future prevention education should make special efforts to target the more difficult-to-reach, high-risk subgroups of the transgender population.
An abundance of rare functional variants in 202 drug target genes sequenced in 14,002 people
Nelson, Matthew R.; Wegmann, Daniel; Ehm, Margaret G.; Kessner, Darren; St. Jean, Pamela; Verzilli, Claudio; Shen, Judong; Tang, Zhengzheng; Bacanu, Silviu-Alin; Fraser, Dana; Warren, Liling; Aponte, Jennifer; Zawistowski, Matthew; Liu, Xiao; Zhang, Hao; Zhang, Yong; Li, Jun; Li, Yun; Li, Li; Woollard, Peter; Topp, Simon; Hall, Matthew D.; Nangle, Keith; Wang, Jun; Abecasis, Gonçalo; Cardon, Lon R.; Zöllner, Sebastian; Whittaker, John C.; Chissoe, Stephanie L.; Novembre, John; Mooser, Vincent
2015-01-01
Rare genetic variants contribute to complex disease risk; however, the abundance of rare variants in human populations remains unknown. We explored this spectrum of variation by sequencing 202 genes encoding drug targets in 14,002 individuals. We find rare variants are abundant (one every 17 bases) and geographically localized, such that even with large sample sizes, rare variant catalogs will be largely incomplete. We used the observed patterns of variation to estimate population growth parameters, the proportion of variants in a given frequency class that are putatively deleterious, and mutation rates for each gene. Overall we conclude that, due to rapid population growth and weak purifying selection, human populations harbor an abundance of rare variants, many of which are deleterious and have relevance to understanding disease risk. PMID:22604722
Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis
Adnan, Tassha Hilda
2016-01-01
Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining the sample sizes that are sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians; hence, sample size calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of sensitivity and specificity tests using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. The approaches to using the tables are also discussed. PMID:27891446
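One common precision-based version of such a calculation (a sketch in the spirit of such tables, not necessarily the formula behind the PASS output): the number of diseased subjects needed for a confidence interval of given half-width around the expected sensitivity, inflated by prevalence to a total sample size.

    from math import ceil
    from scipy.stats import norm

    def n_for_sensitivity(se, precision, prevalence, alpha=0.05):
        """Total subjects needed so a (1-alpha) CI for sensitivity has
        half-width <= precision, given disease prevalence."""
        z = norm.ppf(1 - alpha / 2)
        n_diseased = z**2 * se * (1 - se) / precision**2
        return ceil(n_diseased / prevalence)

    def n_for_specificity(sp, precision, prevalence, alpha=0.05):
        """Same idea for specificity, driven by the non-diseased fraction."""
        z = norm.ppf(1 - alpha / 2)
        n_healthy = z**2 * sp * (1 - sp) / precision**2
        return ceil(n_healthy / (1 - prevalence))

    print(n_for_sensitivity(0.90, 0.05, 0.10))  # 1383 subjects in total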
Bridging the gap: a review of dose investigations in paediatric investigation plans
Hampson, Lisa V; Herold, Ralf; Posch, Martin; Saperia, Julia; Whitehead, Anne
2014-01-01
Aims In the EU, development of new medicines for children should follow a prospectively agreed paediatric investigation plan (PIP). Finding the right dose for children is crucial but challenging due to the variability of pharmacokinetics across age groups and the limited sample sizes available. We examined strategies adopted in PIPs to support paediatric dosing recommendations to identify common assumptions underlying dose investigations and the attempts planned to verify them in children. Methods We extracted data from 73 PIP opinions recently adopted by the Paediatric Committee of the European Medicines Agency. These opinions represented 79 medicinal development programmes and comprised a total of 97 dose investigation studies. We identified the design of these dose investigation studies, recorded the analyses planned and determined the criteria used to define target doses. Results Most dose investigation studies are clinical trials (83 of 97) that evaluate a single dosing rule. Sample sizes used to investigate dose are highly variable across programmes, with smaller numbers used in younger children (< 2 years). Many studies (40 of 97) do not pre-specify a target dose criterion. Of those that do, most (33 of 57 studies) guide decisions using pharmacokinetic data alone. Conclusions Common assumptions underlying dose investigation strategies include dose proportionality and similar exposure-response relationships in adults and children. Few development programmes pre-specify steps to verify assumptions in children. There is scope for the use of Bayesian methods as a framework for synthesizing existing information to quantify prior uncertainty about assumptions. This process can inform the design of optimal drug development strategies. PMID:24720849
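As one concrete example of the kind of assumption discussed here, paediatric doses are often derived by allometric scaling of clearance with body weight (exponent 0.75). A minimal sketch with illustrative numbers, not taken from any specific PIP:

    def paediatric_dose(adult_dose_mg, weight_kg, adult_weight_kg=70.0):
        """Allometric scaling sketch: clearance ~ weight^0.75, so a dose
        targeting the same steady-state exposure scales the same way."""
        return adult_dose_mg * (weight_kg / adult_weight_kg) ** 0.75

    for w in (10, 20, 40):
        print(f"{w} kg -> {paediatric_dose(100, w):.0f} mg")  # 23, 39, 66 mg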
Vilaplana, Francisco; Martínez-Sanz, Marta; Ribes-Greus, Amparo; Karlsson, Sigbritt
2010-01-15
The emission of low molecular weight compounds from recycled high-impact polystyrene (HIPS) has been investigated using headspace solid-phase microextraction (HS-SPME) and gas chromatography-mass spectrometry (GC-MS). Four released target analytes (styrene, benzaldehyde, acetophenone, and 2-phenylpropanal) were selected for the optimisation of the HS-SPME sampling procedure, by analysing operating parameters such as the type of SPME fibre (polarity and operating mechanism), particle size, and extraction temperature and time. Twenty-six different compounds were identified to be released at different temperatures from recycled HIPS, including residues of polymerisation, oxidized derivatives of styrene, and additives. The type of SPME fibre employed in the sampling procedure affected the detection of emitted components. An adsorptive fibre such as carboxen/polydimethylsiloxane (CAR/PDMS) offered good selectivity for both non-polar and polar volatile compounds at lower temperatures; higher temperatures resulted in interferences from less-volatile released compounds. An absorptive fibre such as polydimethylsiloxane (PDMS) is suitable for the detection of less-volatile non-polar molecules at higher temperatures. The nature and relative amount of the emitted compounds increased with higher exposure temperature and smaller polymeric particle size. HS-SPME proves to be a suitable technique for screening the emission of semi-volatile organic compounds (SVOCs) from polymeric materials; reliable quantification of the content of target analytes in recycled HIPS is, however, difficult due to the complex mass-transfer processes involved, matrix effects, and the difficulties in equilibrating the analytical system. Copyright © 2009 Elsevier B.V. All rights reserved.
Determining the sources of fine-grained sediment using the Sediment Source Assessment Tool (Sed_SAT)
Gorman Sanisaca, Lillian E.; Gellis, Allen C.; Lorenz, David L.
2017-07-27
A sound understanding of the sources contributing to instream sediment flux in a watershed is important when developing total maximum daily load (TMDL) management strategies designed to reduce suspended sediment in streams. Sediment fingerprinting and sediment budget approaches are two techniques that, when used jointly, can qualify and quantify the major sources of sediment in a given watershed. The sediment fingerprinting approach uses trace element concentrations from samples in known potential source areas to determine a clear signature of each potential source. A mixing model is then used to determine the relative source contributions to the target suspended sediment samples. The computational steps required to apportion sediment for each target sample are quite involved and time intensive, a problem the Sediment Source Assessment Tool (Sed_SAT) addresses. Sed_SAT is a user-friendly statistical model that guides the user through the necessary steps to quantify the relative contributions of sediment sources in a given watershed. The model is written using the statistical software R (R Core Team, 2016b) and utilizes Microsoft Access® as a user interface, but requires no prior knowledge of R or Microsoft Access® to run the model successfully. Sed_SAT identifies outliers, corrects for differences in size and organic content in the source samples relative to the target samples, evaluates the conservative behavior of tracers used in fingerprinting by applying a “Bracket Test,” identifies tracers with the highest discriminatory power, and provides robust error analysis through a Monte Carlo simulation following the mixing model. Quantifying sediment source contributions using the sediment fingerprinting approach provides local, State, and Federal land management agencies with important information needed to implement effective strategies to reduce sediment. Sed_SAT is designed to assist these agencies in applying the sediment fingerprinting approach to quantify sediment sources in the sediment TMDL framework.
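The core of such a mixing model can be written as a constrained least-squares problem: find non-negative source proportions summing to one that best reproduce the target sample's tracer concentrations. A Python sketch with invented tracer data (Sed_SAT itself is written in R and adds outlier screening, size/organic corrections, and Monte Carlo error analysis on top of this step):

    import numpy as np
    from scipy.optimize import minimize

    def unmix(source_means, target):
        """Estimate source proportions p (p >= 0, sum p = 1) minimizing the
        relative squared error between the target tracer concentrations and
        the mixture of source means."""
        n = source_means.shape[0]

        def loss(p):
            mix = p @ source_means
            return (((target - mix) / target) ** 2).sum()

        res = minimize(loss, np.full(n, 1.0 / n),
                       bounds=[(0, 1)] * n,
                       constraints={"type": "eq", "fun": lambda p: p.sum() - 1})
        return res.x

    # Toy example: 3 sources x 4 tracers; the target is a known 50/30/20 blend.
    sources = np.array([[12.0, 3.0, 40.0, 1.5],
                        [ 5.0, 9.0, 22.0, 0.8],
                        [20.0, 1.0, 65.0, 3.0]])
    target = 0.5 * sources[0] + 0.3 * sources[1] + 0.2 * sources[2]
    print(unmix(sources, target).round(3))   # -> approx [0.5, 0.3, 0.2]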
NASA Astrophysics Data System (ADS)
Madsen, M. B.; Drube, L.; Falkenberg, T. V.; Haspang, M. P.; Ellehoj, M.; Leer, K.; Olsen, L. D.; Goetz, W.; Hviid, S. F.; Gunnlaugsson, H. P.; Hecht, M. H.; Parrat, D.; Lemmon, M. T.; Morris, R. V.; Pike, T.; Sykulska, H.; Vijendran, S.; Britt, D.; Staufer, U.; Marshall, J.; Smith, P. H.
2008-12-01
Phoenix carries as part of its scientific payload a series of magnetic properties experiments designed to utilize onboard instruments for the investigation of airborne dust, air-fall samples stirred by the retro-rockets of the lander, and sampled surface and sub-surface material from the northern plains of Mars. One of the aims of these experiments on Phoenix is to investigate any possible differences between the airborne dust and soils found on the northern plains and similar samples in the equatorial region of Mars. The magnetic properties experiments are designed to control the pattern of dust attracted to or accumulated on the surfaces, enabling interpretation of these patterns in terms of certain magnetic properties of the dust forming the patterns. The Surface Stereo Imager (SSI) provides multi-spectral information about dust accumulated on three iSweep targets on the lander instrument deck. The iSweeps utilize built-in permanent magnets and 6 different background colors for the dust, compared to only 1 for the MER sweep magnet. Simultaneously, these iSweep targets are used as in-situ radiometric calibration targets for the SSI. The visible/near-infrared spectra acquired so far are similar to typical Martian dust and soil spectra. Because of the multiple background colors of the iSweeps, the effect of the translucence of thin dust layers can be estimated. High resolution images (4 micrometers/px) acquired by the Optical Microscope (OM) showed subtle differences between different soil samples in particle size distribution, color and morphology. Most samples contain large (typically 50 micrometer), subrounded particles that are substantially magnetic. The colors of these particles range from red and brown to (almost) black. Based on results from the Mars Exploration Rovers, these dark particles are believed to be enriched in magnetite. Occasionally, very bright, whitish particles were also found on the magnet substrates, likely held by cohesion forces to the magnet surface and/or to other (magnetic) particles.
Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M
2018-06-01
Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.
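For context, the standard normal-approximation sample size for a two-arm comparison of means shows how sensitive the answer is to the assumed effect. A sketch with invented numbers (not values from the reviewed trials):

    from math import ceil
    from scipy.stats import norm

    def n_per_group(delta, sd, alpha=0.05, power=0.80):
        """Per-arm sample size for a two-sample z-test on means,
        two-sided alpha, superiority design."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(2 * (z * sd / delta) ** 2)

    # An effect assumed 25% larger than what is later observed leaves the
    # planned trial noticeably underpowered:
    print(n_per_group(delta=10, sd=20))   # assumed effect  -> 63 per arm
    print(n_per_group(delta=8,  sd=20))   # observed effect -> 99 per arm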
The immune synapse clears and excludes molecules above a size threshold
Cartwright, Adam N. R.; Griggs, Jeremy; Davis, Daniel M.
2014-01-01
Natural killer cells assess target cell health via interactions at the immune synapse (IS) that facilitate signal integration and directed secretion. Here we test whether the IS also functions as a gasket. Quantitative fluorescence microscopy of nanometer-scale dextrans within synapses formed by various effector-target cell conjugates reveals that molecules are excluded in a size-dependent manner at activating synapses. Dextrans sized ≤4 nm move in and out of the IS, but access is significantly reduced (by >50%) for dextrans sized 10–13 nm, and dextrans ≥32 nm are almost entirely excluded. Depolymerization of F-actin abrogated exclusion. Unexpectedly, larger-sized dextrans are cleared as the IS assembles in a zipper-like manner. Monoclonal antibodies are also excluded from the IS, but smaller single-domain antibodies are able to penetrate. Therefore, the IS can clear and exclude molecules above a size threshold, and drugs designed to target synaptic cytokines or cytotoxic proteins must fit these dimensions. PMID:25407222
Hole-y Debris Disks, Batman! Where are the planets?
NASA Astrophysics Data System (ADS)
Bailey, V.; Meshkat, T.; Hinz, P.; Kenworthy, M.; Su, K. Y. L.
2014-03-01
Giant planets at wide separations are rare and direct imaging surveys are resource-intensive, so a cheaper marker for the presence of giant planets is desirable. One intriguing possibility is to use the effect of planets on their host stars' debris disks. Theoretical studies indicate giant planets can gravitationally carve sharp boundaries and gaps in their disks; this has been seen for HR 8799, β Pic, and tentatively for HD 95086 (Su et al. 2009, Lagrange et al. 2010, Moor et al. 2013). If more broadly demonstrated, this link could help guide target selection for next-generation direct imaging surveys. Using Spitzer MIPS/IRS spectral energy distributions (SEDs), we identify several dozen systems with two-component and/or large inner cavity disks (aka Hole-y Debris Disks). With LBT/LBTI, VLT/NaCo, Gemini-S/NICI, MMT/Clio and Magellan/Clio, we survey a subset of these SED-selected targets (~20). In contrast to previous disk-selected planet surveys (e.g., Janson et al. 2013, Wahhaj et al. 2013), we image primarily in the thermal IR (L'-band), where planet-to-star contrast is more favorable and background contaminants are less numerous. Thus far, two of our survey targets host planet-mass companions, both of which were discovered in L'-band after they were unrecognized or undetectable in H-band. For each system in our sample set, we will investigate whether the known companions and/or companions below our detection threshold could be responsible for the disk architecture. Ultimately, we will increase our effective sample size by incorporating detection limits from surveys that have independently targeted some of our systems of interest. In this way we will refine the conditions under which disk SED-based target selection is likely to be useful and valid.
NASA Astrophysics Data System (ADS)
Bartnik, Andrzej; Fiedorowicz, Henryk; Jarocki, Roman; Kostecki, Jerzy; Rakowski, Rafał; Szczurek, Mirosław
2005-09-01
Organic polymers (PMMA, PTFE, PET, and PI) are considered important materials in microengineering, especially for biological and medical applications. Micromachining of such materials is possible with different techniques that involve electromagnetic radiation or charged-particle beams. One possibility for high-aspect-ratio micromachining of PTFE is direct photo-etching using synchrotron radiation; X-ray and ultraviolet radiation from other sources can also be applied for micromachining of materials by direct photo-etching. In this paper we present the results of an investigation of a wide-band soft X-ray source and its application to direct photo-etching of organic polymers. X-ray radiation in the wavelength range from about 3 nm to 20 nm was produced by irradiating a double-stream gas puff target with laser pulses of 0.8 J energy and about 3 ns duration. The spectra, plasma size and absolute energies of the soft X-ray pulses for different gas puff targets were measured. The photo-etching of polymers irradiated with this soft X-ray radiation was analyzed and investigated. Samples of organic polymers were placed inside the vacuum chamber of the X-ray source, close to the gas puff target, at a distance of about 2 cm from the plasmas created by the focused laser pulses. A fine metal grid placed in front of the samples was used as a mask to form structures by X-ray ablation. The results of the photo-etching process for several minutes of exposure at a 10 Hz repetition rate are presented. High ablation efficiency was obtained with a gas puff target containing xenon surrounded by helium.
Advanced pushbroom hyperspectral LWIR imagers
NASA Astrophysics Data System (ADS)
Holma, Hannu; Hyvärinen, Timo; Lehtomaa, Jarmo; Karjalainen, Harri; Jaskari, Risto
2009-05-01
Performance studies and instrument designs for hyperspectral pushbroom imagers in the thermal wavelength region are introduced. The studies involve imaging systems based on both MCT and microbolometer detectors. All the systems employ a pushbroom imaging spectrograph with a transmission grating and on-axis optics. The aim of the work was to design high-performance instruments with good image quality and compact size for various application requirements. A big challenge in realizing these goals without considerable cooling of the whole instrument is to control the instrument radiation from all the surfaces of the instrument itself. This challenge is even bigger in hyperspectral instruments, where the optical power from the target is spread spectrally over tens of pixels, but the instrument radiation is not dispersed. Without any suppression, the instrument radiation can overwhelm the radiation from the target by 1000 times. In the first imager design, the BMC technique (background monitoring on-chip), background suppression and temperature stabilization have been combined with a cryo-cooled MCT detector. The performance of a very compact hyperspectral imager with 84 spectral bands and 384 spatial samples has been studied, and an NESR of 18 mW/(m² sr μm) at 10 μm wavelength for a 300 K target has been achieved. This leads to an SNR of 580. These results are based on a simulation model. The second version of the imager, with an uncooled microbolometer detector and optics at ambient temperature, aims at imaging targets at higher temperatures or with illumination. Heater rods with ellipsoidal reflectors can be used to illuminate the swath line of the hyperspectral imager on a target or sample, like drill core in mineralogical analysis. Performance characteristics for the microbolometer version have been experimentally verified.
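The quoted SNR can be sanity-checked from the NESR figure alone: dividing the Planck spectral radiance of a 300 K blackbody at 10 μm by an NESR of 18 mW/(m² sr μm) gives a number of the same order as the reported 580. A minimal check using standard physical constants, with no instrument details assumed:

```python
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

def planck_radiance(lam, T):
    """Spectral radiance B_lambda in W/(m^2 sr um) at wavelength lam (m), temperature T (K)."""
    B = (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))  # W/(m^3 sr)
    return B * 1e-6  # convert from per meter to per micrometer

L = planck_radiance(10e-6, 300.0)  # ~9.9 W/(m^2 sr um) for a 300 K target
nesr = 18e-3                       # 18 mW/(m^2 sr um), from the abstract
print(L / nesr)                    # SNR ~ 550, close to the quoted 580
```

The small remaining gap is expected, since the simulated SNR also depends on band-integration details not given in the abstract.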
Effects of task constraints on reaching kinematics by healthy adults.
Wu, Ching-Yi; Lin, Keh-Chung; Lin, Kwan-Hwa; Chang, Chein-Wei; Chen, Chia-Ling
2005-06-01
Understanding the control of movement requires an awareness of how tasks constrain movements. The present study investigated the effects of two types of task constraints, spatial accuracy (effector size) and target location, on reaching kinematics. Fifteen right-handed, healthy young adults (7 men, 8 women) whose mean age was 23.6 yr. (SD = 3.9 yr.) performed the ringing task under six conditions, formed by crossing effector size (larger vs smaller) and target location (left, right, or a central position). Significant main effects of effector size and target location were found for peak velocity and movement time. There was a significant interaction for the percentage of time to peak velocity. The findings suggest that task constraints may modulate movement performance in specific ways. Effects of effector size might be a consequence of feedforward and feedback control, and location effects might be influenced by both biomechanical and neurological factors.
Local processes in preattentive feature detection.
Bacon, W F; Egeth, H E
1991-02-01
Sagi and Julesz (1987) claimed that for a target to be detected preattentively, it must be within some small critical distance of a nontarget. We examined the independent effects of separation and display size, which were confounded in the Sagi and Julesz experiments. Experiments 1 and 2 revealed that in tasks requiring search for a color-defined target, target-nontarget separation had no effect on reaction time (RT). Display size, however, was inversely related to RT. Experiment 3 ruled out the possibility that the decreasing function of RT with display size was due to arousal caused by higher display luminance. When nontarget grouping was inhibited (Exp. 4), RT no longer decreased with increasing display size. This suggests that nontarget grouping may have been the cause of the improved performance at larger display sizes. Experiments 5 and 6 extended the results to line segments, the stimuli used by Sagi and Julesz.
NASA Astrophysics Data System (ADS)
Stewart, James M. P.; Ansell, Steve; Lindsay, Patricia E.; Jaffray, David A.
2015-12-01
Advances in precision microirradiators for small animal radiation oncology studies have provided the framework for novel translational radiobiological studies. Such systems target radiation fields at the scale required for small animal investigations, typically through a combination of on-board computed tomography image guidance and fixed, interchangeable collimators. Robust targeting accuracy of these radiation fields remains challenging, particularly at the millimetre-scale field sizes achievable by the majority of microirradiators. Consistent and reproducible targeting accuracy is further hindered as collimators are removed and inserted during a typical experimental workflow. This investigation quantified this targeting uncertainty and developed an online method based on a virtual treatment isocenter to actively ensure high-performance targeting accuracy for all radiation field sizes. The results indicated that the two-dimensional field placement uncertainty was as high as 1.16 mm at isocenter, with simulations suggesting this error could be reduced to 0.20 mm using the online correction method. End-to-end targeting analysis of a ball bearing target on radiochromic film sections showed improved targeting accuracy, with the three-dimensional vector targeting error across six different collimators reduced from 0.56 ± 0.05 mm (mean ± SD) to 0.05 ± 0.05 mm for an isotropic imaging voxel size of 0.1 mm.
Brazilian Soybean Yields and Yield Gaps Vary with Farm Size
NASA Astrophysics Data System (ADS)
Jeffries, G. R.; Cohn, A.; Griffin, T. S.; Bragança, A.
2017-12-01
Understanding the farm size-specific characteristics of crop yields and yield gaps may help to improve yields by enabling better targeting of technical assistance and agricultural development programs. Linking remote sensing-based yield estimates with property boundaries provides a novel view of the relationship between farm size and yield structure (yield magnitude, gaps, and stability over time). A growing literature documents variations in yield gaps, but largely ignores the role of farm size as a factor shaping yield structure. Research on the inverse farm size-productivity relationship (IR) theory - that small farms are more productive than large ones all else equal - has documented that yield magnitude may vary by farm size, but has not considered other yield structure characteristics. We examined farm size - yield structure relationships for soybeans in Brazil for years 2001-2015. Using out-of-sample soybean yield predictions from a statistical model, we documented 1) gaps between the 95th percentile of attained yields and mean yields within counties and individual fields, and 2) yield stability defined as the standard deviation of time-detrended yields at given locations. We found a direct relationship between soy yields and farm size at the national level, while the strength and the sign of the relationship varied by region. Soybean yield gaps were found to be inversely related to farm size metrics, even when yields were only compared to farms of similar size. The relationship between farm size and yield stability was nonlinear, with mid-sized farms having the most stable yields. The work suggests that farm size is an important factor in understanding yield structure and that opportunities for improving soy yields in Brazil are greatest among smaller farms.
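As a concrete reading of the two yield-structure metrics defined above, the sketch below computes a yield gap (95th-percentile attained yield minus the mean) and yield stability (standard deviation of linearly detrended yields) for a synthetic 2001-2015 series; the data and units are illustrative only, not the study's remote-sensing estimates:

```python
import numpy as np

def yield_gap_and_stability(yields, years):
    """Yield gap: 95th-percentile attained yield minus mean yield.
    Stability: standard deviation of yields after removing a linear time trend."""
    yields = np.asarray(yields, float)
    gap = np.percentile(yields, 95) - yields.mean()
    trend = np.polyval(np.polyfit(years, yields, 1), years)
    stability = (yields - trend).std(ddof=1)
    return gap, stability

# Synthetic soy yield series (t/ha) with a gentle upward trend plus noise
years = np.arange(2001, 2016)
rng = np.random.default_rng(1)
yields = 2.6 + 0.03 * (years - 2001) + rng.normal(0, 0.2, years.size)
print(yield_gap_and_stability(yields, years))
```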
miR-11 regulates pupal size of Drosophila melanogaster via directly targeting Ras85D.
Li, Yao; Li, Shengjie; Jin, Ping; Chen, Liming; Ma, Fei
2017-01-01
MicroRNAs play diverse roles in various physiological processes during Drosophila development. In the present study, we report that miR-11 regulates pupal size during Drosophila metamorphosis via targeting Ras85D, with the following evidence: pupal size was increased in the miR-11 deletion mutant; restoration of miR-11 in the miR-11 deletion mutant rescued the increased pupal size phenotype; ectopic expression of miR-11 in brain insulin-producing cells (IPCs) and in the whole body showed consistent alteration of pupal size; Dilps and Ras85D expression were negatively regulated by miR-11 in vivo; miR-11 targets Ras85D through direct binding to the Ras85D 3'-untranslated region in vitro; and removal of one copy of Ras85D in the miR-11 deletion mutant rescued the increased pupal size phenotype. Thus, our current work provides a novel mechanism of pupal size determination by microRNAs during Drosophila melanogaster metamorphosis. Copyright © 2017 the American Physiological Society.
Interactions between target location and reward size modulate the rate of microsaccades in monkeys
Tokiyama, Stefanie; Lisberger, Stephen G.
2015-01-01
We have studied how rewards modulate the occurrence of microsaccades by manipulating the size of an expected reward and the location of the cue that sets the expectations for future reward. We found an interaction between the size of the reward and the location of the cue. When monkeys fixated on a cue that signaled the size of future reward, the frequency of microsaccades was higher if the monkey expected a large vs. a small reward. When the cue was presented at a site in the visual field that was remote from the position of fixation, reward size had the opposite effect: the frequency of microsaccades was lower when the monkey was expecting a large reward. The strength of pursuit initiation also was affected by reward size and by the presence of microsaccades just before the onset of target motion. The gain of pursuit initiation increased with reward size and decreased when microsaccades occurred just before or after the onset of target motion. The effect of the reward size on pursuit initiation was much larger than any indirect effects reward might cause through modulation of the rate of microsaccades. We found only a weak relationship between microsaccade direction and the location of the exogenous cue relative to fixation position, even in experiments where the location of the cue indicated the direction of target motion. Our results indicate that the expectation of reward is a powerful modulator of the occurrence of microsaccades, perhaps through attentional mechanisms. PMID:26311180
Grote, Ann B.; Bailey, Michael M.; Zydlewski, Joseph D.; Hightower, Joseph E.
2014-01-01
We investigated the fish community approaching the Veazie Dam on the Penobscot River, Maine, prior to implementation of a major dam removal and river restoration project. Multibeam sonar (dual-frequency identification sonar, DIDSON) surveys were conducted continuously at the fishway entrance from May to July in 2011. A 5% subsample of DIDSON data contained 43 793 fish targets, the majority of which were of Excellent (15.7%) or Good (73.01%) observation quality. Excellent-quality DIDSON targets (n = 6876) were apportioned by species using a Bayesian mixture model based on four known fork-length distributions: river herring (alewife, Alosa pseudoharengus, and blueback herring, Alosa aestivalis), American shad (Alosa sapidissima), and two size classes (one-sea-winter and multi-sea-winter) of Atlantic salmon (Salmo salar). In total, 76.2% of targets were assigned to the American shad distribution; Atlantic salmon accounted for 15.64%, and river herring 8.16%, of observed targets. Shad-sized (99.0%) and salmon-sized (99.3%) targets approached the fishway almost exclusively during the day, whereas river herring-sized targets were observed both during the day (51.1%) and at night (48.9%). This approach demonstrates how multibeam sonar imaging can be used to evaluate community composition and species-specific movement patterns in systems where there is little overlap in the length distributions of target species.
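The assignment step of such a mixture model reduces to Bayes' rule over the known fork-length distributions. A sketch with hypothetical means, SDs and a flat prior (the study's actual parameters are not given in the abstract):

```python
from scipy.stats import norm

# Hypothetical fork-length distributions (cm); illustrative values only.
components = {
    "river herring": (27.0, 2.0),
    "American shad": (50.0, 5.0),
    "salmon 1SW":    (60.0, 4.0),
    "salmon MSW":    (75.0, 6.0),
}
priors = {k: 0.25 for k in components}  # flat prior over the four classes

def posterior(length_cm):
    """Posterior probability of each species/size class given an observed target length."""
    like = {k: priors[k] * norm.pdf(length_cm, mu, sd) for k, (mu, sd) in components.items()}
    z = sum(like.values())
    return {k: v / z for k, v in like.items()}

print(posterior(52.0))  # a mid-sized target is most plausibly shad
```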
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pawellek, Nicole; Krivov, Alexander V.; Marshall, Jonathan P.
The radii of debris disks and the sizes of their dust grains are important tracers of the planetesimal formation mechanisms and physical processes operating in these systems. Here we use a representative sample of 34 debris disks resolved in various Herschel Space Observatory (Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA) programs to constrain the disk radii and the size distribution of their dust. While we modeled disks with both warm and cold components, and identified warm inner disks around about two-thirds of the stars, we focus our analysis only on the cold outer disks, i.e., Kuiper-belt analogs. We derive the disk radii from the resolved images and find a large dispersion for host stars of any spectral class, but no significant trend with the stellar luminosity. This argues against ice lines as a dominant player in setting the debris disk sizes, since the ice line location varies with the luminosity of the central star. Fixing the disk radii to those inferred from the resolved images, we model the spectral energy distribution to determine the dust temperature and the grain size distribution for each target. While the dust temperature systematically increases toward earlier spectral types, the ratio of the dust temperature to the blackbody temperature at the disk radius decreases with the stellar luminosity. This is explained by a clear trend of typical sizes increasing toward more luminous stars. The typical grain sizes are compared to the radiation pressure blowout limit s_blow, which is proportional to the stellar luminosity-to-mass ratio and thus also increases toward earlier spectral classes. The grain sizes in the disks of G- to A-stars are inferred to be several times s_blow at all stellar luminosities, in agreement with collisional models of debris disks. The sizes, measured in units of s_blow, appear to decrease with the luminosity, which may be suggestive of the disk's stirring level increasing toward earlier-type stars. The dust opacity index β ranges between zero and two, and the size distribution index q varies between three and five for all the disks in the sample.
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that the use of samples of size not larger than ten is not uncommon in biomedical research, and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of the studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard-error-of-the-mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
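The simulation design described above is straightforward to reproduce in outline: draw two samples of size n, apply the two-sample t-test at p = 5%, and tally rejections with and without a true effect. A minimal sketch with an arbitrary effect of 2 SD (the paper's exact population models and effect intensities are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def error_rates(n, effect, sigma=1.0, alpha=0.05, trials=10_000):
    """Estimate Type I and Type II error rates of the two-sample t-test
    for sample size n per group and a given true effect (difference of means)."""
    type1 = type2 = 0
    for _ in range(trials):
        a = rng.normal(0.0, sigma, n)
        b0 = rng.normal(0.0, sigma, n)     # no true effect: rejections are Type I errors
        b1 = rng.normal(effect, sigma, n)  # true effect: non-rejections are Type II errors
        if stats.ttest_ind(a, b0).pvalue < alpha:
            type1 += 1
        if stats.ttest_ind(a, b1).pvalue >= alpha:
            type2 += 1
    return type1 / trials, type2 / trials

for n in (3, 6, 9):
    t1, t2 = error_rates(n, effect=2.0)
    print(f"n={n}: Type I ~ {t1:.3f}, Type II ~ {t2:.3f}")
```

Run over n = 3 to 9, this reproduces the qualitative finding that both error rates shrink rapidly as the sample size approaches 9.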
FIMic: design for ultimate 3D-integral microscopy of in-vivo biological samples
Scrofani, G.; Sola-Pikabea, J.; Llavador, A.; Sanchez-Ortiga, E.; Barreiro, J. C.; Saavedra, G.; Garcia-Sucerquia, J.; Martínez-Corral, M.
2017-01-01
In this work, the Fourier integral microscope (FIMic), an ultimate design of 3D-integral microscopy, is presented. By placing a multiplexing microlens array at the aperture stop of the microscope objective of the host microscope, FIMic shows extended depth of field and enhanced lateral resolution in comparison with regular integral microscopy. As FIMic directly produces a set of orthographic views of the 3D micrometer-sized sample, it is suitable for real-time imaging. Following regular integral-imaging reconstruction algorithms, a 2.75-fold enhanced depth of field and 2-fold better spatial resolution in comparison with conventional integral microscopy is reported. Our claims are supported by theoretical analysis and experimental images of a resolution test target, cotton fibers, and in-vivo 3D imaging of biological specimens. PMID:29359107
The effect of initial pressure on growth of FeNPs in amorphous carbon films
NASA Astrophysics Data System (ADS)
Mashayekhi, Fatemeh; Shafiekhani, Azizollah; Sebt, S. Ali; Darabi, Elham
2018-04-01
Iron nanoparticles in amorphous hydrogenated carbon films (FeNPs@a-C:H) were prepared by RF sputtering and RF-PECVD using acetylene gas and an Fe target. In this paper, the deposition and sputtering processes were carried out under different initial gas pressures. The morphology and surface roughness of the samples were studied by AFM, and TEM images show the exact size of the FeNPs and of the encapsulated FeNPs@a-C:H. The localized surface plasmon resonance (LSPR) peak of the FeNPs was studied using UV-vis absorption spectra. The results show that the intensity and position of the LSPR peak increase with increasing initial pressure. The direct energy gap of the samples, obtained by the Tauc law, decreases with increasing initial pressure.
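For reference, the Tauc analysis mentioned above extracts a direct gap by plotting (αhν)² against photon energy hν and extrapolating the linear region to zero absorption. A sketch on a synthetic absorption edge; the crude selection of the linear region is an assumption, not the authors' procedure:

```python
import numpy as np

def tauc_direct_gap(hv_eV, alpha_cm):
    """Estimate a direct optical gap from a Tauc plot: (alpha*h*nu)^2 vs h*nu.
    Fits a line to the steep (upper) part of the curve and extrapolates to zero."""
    y = (np.asarray(alpha_cm) * np.asarray(hv_eV)) ** 2
    sel = y > 0.5 * y.max()            # crude choice of the linear region
    m, b = np.polyfit(np.asarray(hv_eV)[sel], y[sel], 1)
    return -b / m                      # intercept with the energy axis

# Synthetic absorption edge with a 2.0 eV direct gap
hv = np.linspace(1.5, 3.0, 200)
alpha = np.sqrt(np.clip(hv - 2.0, 0, None)) / hv * 1e5
print(tauc_direct_gap(hv, alpha))      # ~2.0
```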
A long-term target detection approach in infrared image sequence
NASA Astrophysics Data System (ADS)
Li, Hang; Zhang, Qi; Wang, Xin; Hu, Chao
2016-10-01
An automatic target detection method for long-term infrared (IR) image sequences from a moving platform is proposed. Firstly, based on POME (the principle of maximum entropy), target candidates are iteratively segmented. Then the real target is captured via two different selection approaches. At the beginning of the image sequence, the genuine target, which has little texture, is discriminated from other candidates by using a contrast-based confidence measure. On the other hand, when the target becomes larger, we apply an online EM method to estimate and update the distributions of the target's size and position based on the prior detection results, and then recognize the genuine target as the one that satisfies both the size and position constraints. Experimental results demonstrate that the presented method is accurate, robust and efficient.
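Maximum-entropy segmentation in the spirit of the first step can be illustrated with Kapur's classic threshold rule, which picks the gray level maximizing the summed entropies of the background and foreground distributions; this is one standard POME-based segmenter, not necessarily the paper's exact iterative variant:

```python
import numpy as np

def kapur_threshold(image):
    """Maximum-entropy (Kapur) threshold: choose t maximizing the sum of the
    entropies of the below-threshold and above-threshold gray-level distributions."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1
        h = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0])) \
            - np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Synthetic IR-like frame: noisy background plus a bright "target" patch
img = np.random.default_rng(2).normal(60, 10, (64, 64)).clip(0, 255).astype(np.uint8)
img[20:28, 30:38] = 180
print(kapur_threshold(img))
```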
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Y; Giebeler, A; Mascia, A
Purpose: To quantitatively evaluate the dosimetric consequence of spot size variations and validate beam-matching criteria for commissioning a pencil beam model for multiple treatment rooms. Methods: A planning study was first conducted by simulating spot size variations to systematically evaluate the dosimetric impact of spot size variations in selected cases, which was used to establish the in-air spot size tolerance for beam-matching specifications. A beam model in the treatment planning system was created using in-air spot profiles acquired in one treatment room. These spot profiles were also acquired from another treatment room to assess the actual spot size variations between the two treatment rooms. We created twenty-five test plans with targets of different sizes at different depths, and performed dose measurements along the entrance, proximal and distal target regions. The absolute doses at those locations were measured using ionization chambers in both treatment rooms, and were compared against the doses calculated by the beam model. Fifteen additional patient plans were also measured and included in our validation. Results: The beam model is relatively insensitive to spot size variations. With an average of less than 15% measured in-air spot size variation between the two treatment rooms, the average dose difference was −0.15% with a standard deviation of 0.40% for 55 measurement points within the target region; but the differences increased to 1.4% ± 1.1% in the entrance regions, which are more affected by in-air spot size variations. Overall, our single-room-based beam model in the treatment planning system agreed with measurements in both rooms to < 0.5% within the target region. For the fifteen patient cases, the agreement was within 1%. Conclusion: We have demonstrated that dosimetrically equivalent machines can be established when in-air spot size variations are within 15% between the two treatment rooms.
Burnham-Marusich, Amanda R; Plechaty, Anna M; Berninsone, Patricia M
2014-09-01
Currently, there are few methods to detect differences in posttranslational modifications (PTMs) in a specific manner from complex mixtures. Thus, we developed an approach that combines the sensitivity and specificity of click chemistry with the resolution capabilities of 2D-DIGE. In "Click-DIGE", posttranslationally modified proteins are metabolically labeled with azido-substrate analogs, then size- and charge-matched alkyne-Cy3 or alkyne-Cy5 dyes are covalently attached to the azide of the PTM by click chemistry. The fluorescently-tagged protein samples are then multiplexed for 2DE analysis. Whereas standard DIGE labels all proteins, Click-DIGE focuses the analysis of protein differences to a targeted subset of posttranslationally modified proteins within a complex sample (i.e. specific labeling and analysis of azido glycoproteins within a cell lysate). Our data indicate that (i) Click-DIGE specifically labels azido proteins, (ii) the resulting Cy-protein conjugates are spectrally distinct, and (iii) the conjugates are size- and charge-matched at the level of 2DE. We demonstrate the utility of this approach by detecting multiple differentially expressed glycoproteins between a mutant cell line defective in UDP-galactose transport and the parental cell line. We anticipate that the diversity of azido substrates already available will enable Click-DIGE to be compatible with analysis of a wide range of PTMs. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Accurate and Reliable Prediction of the Binding Affinities of Macrocycles to Their Protein Targets.
Yu, Haoyu S; Deng, Yuqing; Wu, Yujie; Sindhikara, Dan; Rask, Amy R; Kimura, Takayuki; Abel, Robert; Wang, Lingle
2017-12-12
Macrocycles have been emerging as a very important drug class in the past few decades, largely due to their expanded chemical diversity benefiting from advances in synthetic methods. Macrocyclization has been recognized as an effective way to restrict the conformational space of acyclic small-molecule inhibitors with the hope of improving potency, selectivity, and metabolic stability. Because of their relatively larger size as compared to typical small-molecule drugs and the complexity of their structures, efficient sampling of the accessible macrocycle conformational space and accurate prediction of their binding affinities to their target protein receptors pose a great challenge of central importance in computational macrocycle drug design. In this article, we present a novel method for relative binding free energy calculations between macrocycles with different ring sizes and between macrocycles and their corresponding acyclic counterparts. We have applied the method to seven pharmaceutically interesting data sets taken from recent drug discovery projects, including 33 macrocyclic ligands covering a diverse chemical space. The predicted binding free energies are in good agreement with experimental data, with an overall root-mean-square error (RMSE) of 0.94 kcal/mol. This is, to our knowledge, the first time that the free energy of the macrocyclization of linear molecules has been directly calculated with rigorous physics-based free energy calculation methods, and we anticipate the outstanding accuracy demonstrated here across a broad range of target classes may have significant implications for macrocycle drug discovery.
M2K Planet Search: Spectroscopic Screening and Transit Photometry
NASA Astrophysics Data System (ADS)
Mann, Andrew; Gaidos, E.; Fischer, D.; Lepine, S.
2010-10-01
The M2K project is a search for planets orbiting nearby early M and late K dwarfs drawn from the SUPERBLINK catalog. M and K dwarfs are highly attractive targets for finding low-mass and habitable planets because (1) close-in planets are more likely to orbit within their habitable zone, (2) planets orbiting them induce a larger Doppler signal and have deeper transits than similar planets around F, G, and early K type stars, (3) planet formation models predict they hold an abundance of super-Earth-sized planets, and (4) they represent the vast majority of the stars close enough for direct imaging techniques. In spite of this, only 10% of late K and early M dwarfs are being monitored by current Doppler surveys. As part of the M2K project we have obtained low-resolution spectra for more than 2000 of our sample of 10,000 M and K dwarfs. We vet our sample by screening these stars for high metallicity and low chromospheric activity. We search for transits on targets showing a high-RMS Doppler signal and on photometry candidates provided by the SuperWASP project. Using "snapshot" photometry, we have been able to achieve sub-millimag photometry on numerous transit targets in the same night. With further follow-up observations we will be able to detect planets smaller than 10 Earth masses.
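Point (2) follows directly from the scaling of the radial-velocity semi-amplitude with stellar mass, K ∝ M_*^(-2/3) at fixed planet mass and period. A back-of-the-envelope sketch (circular orbit, sin i = 1, m_p << M_*; the numbers are illustrative, not M2K targets):

```python
import math

def rv_semi_amplitude(m_planet_earth, m_star_solar, period_days):
    """Circular-orbit RV semi-amplitude K (m/s) for sin(i) = 1 and m_p << M_*:
    K = (2 pi G / P)^(1/3) * m_p / M_*^(2/3)."""
    G = 6.674e-11
    P = period_days * 86400.0
    m_p = m_planet_earth * 5.972e24
    M = m_star_solar * 1.989e30
    return (2 * math.pi * G / P) ** (1 / 3) * m_p / M ** (2 / 3)

# The same 5 M_Earth planet at P = 20 d around a G dwarf vs an M dwarf
print(rv_semi_amplitude(5, 1.0, 20))  # ~1.2 m/s around 1.0 M_sun
print(rv_semi_amplitude(5, 0.4, 20))  # ~2.2 m/s around 0.4 M_sun
```

The same planet thus produces roughly twice the Doppler signal around a 0.4 M_sun M dwarf as around a solar-mass star, and its transit is deeper in proportion to (R_p/R_*)².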
Revised scaling laws for asteroid disruptions
NASA Astrophysics Data System (ADS)
Jutzi, M.
2014-07-01
Models for the evolution of small-body populations (e.g., the asteroid main belt) of the solar system compute the time-dependent size and velocity distributions of the objects as a result of both collisional and dynamical processes. A scaling parameter often used in such numerical models is the critical specific impact energy Q^*_D, which results in the escape of half of the target's mass in a collision. The parameter Q^*_D is called the catastrophic impact energy threshold. We present recent improvements of the Smooth Particle Hydrodynamics (SPH) technique (Benz and Asphaug 1995, Jutzi et al. 2008, Jutzi 2014) for the modeling of the disruption of small bodies. Using the improved models, we then systematically study the effects of various target properties (e.g., strength, porosity, and friction) on the outcome of disruptive collisions (Figure), and we compute the corresponding Q^*_D curves as a function of target size. For a given specific impact energy and impact angle, the outcome of a collision in terms of Q^*_D does not only depend on the properties of the bodies involved, but also on the impact velocity and the size ratio of target/impactor. Leinhardt and Stewart (2012) proposed scaling laws to predict the outcome of collisions with a wide range of impact velocities (m/s to km/s), target sizes and target/impactor mass ratios. These scaling laws are based on a "principal disruption curve" defined for collisions between equal-sized bodies: Q^*_{RD,γ=1} = c^* (4/5) π ρ G R_{C1}^2, where the parameter c^* is a measure of the dissipation of energy within the target, R_{C1} is the radius of a body with the combined mass of target and projectile and a density ρ = 1000 kg/m^3, and γ is the mass ratio. The dissipation parameter c^* is proposed to be 5±2 for bodies with strength and 1.9±0.3 for hydrodynamic bodies (Leinhardt and Stewart 2012). We will present values for c^* based on our SPH simulations using various target properties and impact conditions. We will also discuss the validity of the principal disruption curve (with a single parameter c^*) for a wide range of sizes and impact velocities. Our preliminary results indicate that for a given target, c^* can vary significantly (by a factor of ~10) as the impact velocity changes from subsonic to supersonic.
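The principal disruption curve quoted above is simple to evaluate; the sketch below plugs in the two proposed c^* values for the gravity regime (it deliberately omits the strength regime, which dominates for small targets):

```python
import math

G = 6.674e-11   # gravitational constant, SI
rho = 1000.0    # kg/m^3, as in the principal disruption curve

def q_rd_gamma1(R_c1, c_star):
    """Principal disruption curve for equal-sized bodies (gravity regime):
    Q*_{RD, gamma=1} = c* * (4/5) * pi * rho * G * R_c1^2, in J/kg."""
    return c_star * 0.8 * math.pi * rho * G * R_c1 ** 2

for R_km in (1, 10, 100):
    R = R_km * 1e3
    print(f"R_C1 = {R_km:>3} km: "
          f"strength-bearing c*=5: {q_rd_gamma1(R, 5.0):.3g} J/kg, "
          f"hydrodynamic c*=1.9: {q_rd_gamma1(R, 1.9):.3g} J/kg")
```

Because Q^*_RD scales as c^* at every size, a velocity-dependent c^* of the kind suggested by the preliminary results shifts the whole disruption curve, not just one end of it.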
Contact Instrument Calibration Targets on Mars Rover Curiosity
2012-02-07
Two instruments at the end of the robotic arm on NASA's Mars rover Curiosity will use calibration targets attached to a shoulder joint of the arm. The penny is a size reference, giving the public a familiar object for easily perceiving size on Mars.
Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
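The estimator comparison described above can be prototyped on a synthetic clumped population; the sketch below contrasts the simple expansion estimator with a ratio-to-area estimator under simple random sampling (the population model is invented, not the pronghorn data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "population": 100 sampling units with variable areas and
# clumped animal counts (most units empty, a few holding large groups)
N = 100
area = rng.uniform(5, 15, N)
counts = rng.poisson(0.1 * area) * rng.choice([0, 1, 20], N, p=[0.7, 0.2, 0.1])
total = counts.sum()

def one_survey(frac=1 / 3):
    n = int(N * frac)
    idx = rng.choice(N, n, replace=False)                     # SRS without replacement
    simple = N * counts[idx].mean()                           # simple expansion estimator
    ratio = area.sum() * counts[idx].sum() / area[idx].sum()  # ratio-to-area estimator
    return simple, ratio

est = np.array([one_survey() for _ in range(5000)])
for name, col in zip(("simple", "ratio"), est.T):
    print(f"{name}: bias {col.mean() - total:+.1f}, CV {col.std() / col.mean():.2f}")
```

In this synthetic setup both estimators are nearly unbiased but imprecise, and the area-based ratio estimator gains little, echoing the paper's observation that auxiliary area information helps little for clumped populations.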
Folate-targeted single-wall metal-organic nanotubes used as multifunctional drug carriers
NASA Astrophysics Data System (ADS)
Yang, Linyan; Liu, Min; Huang, Kebin; Ai, Xia; Li, Cun; Ma, Jifei; Jin, Tianming; Liu, Xin
2017-01-01
Doxorubicin (DOX) is a member of the anthracycline class of chemotherapeutic agents that are used for the treatment of many common human cancers. Self-assembled, functionalized metal-organic nanotubes (SWMONTs) could be loaded with the anticancer drug DOX. Via modification of the SWMONTs, DOX/SWMONTs-SiO2, DOX/SWMONTs-SiO2-NH2, and DOX/SWMONTs-SiO2-NH2-FA samples were obtained. SEM characterization indicated that the particle size of the DOX/SWMONTs-SiO2-NH2 samples was smaller than 200 nm. Drug-release experiments implied that DOX is released from the DOX/SWMONTs-SiO2-NH2-FA samples faster in acidic tumor tissue than in normal body fluid (pH 7.4). DOX has strong cytotoxicity, and at a DOX dosage of 20 μg/mL a large number of apoptotic cells could be seen. Cellular uptake experiments were used to study the apoptotic mechanism; for the DOX/SWMONTs-SiO2-NH2-FA samples, strong drug fluorescence was found in the cytoplasm rather than in the nucleus.
Huang, Kuo-Chen; Yeh, Po-Chan
2007-04-01
The present study investigated the effects of numeral size, spacing between targets, and exposure time on the discrimination performance by elderly and younger people using a liquid crystal display screen. Analysis showed size of numerals significantly affected discrimination, which increased with increasing numeral size. Spacing between targets also had a significant effect on discrimination, i.e., the larger the space between numerals, the better their discrimination. When the spacing between numerals increased to 4 or 5 points, however, discrimination did not increase beyond that for 3-point spacing. Although performance increased with increasing exposure time, the difference in discrimination at an exposure time of 0.8 vs 1.0 sec. was not significant. The accuracy by the elderly group was less than that by younger subjects.
Asteroid collisions: Target size effects and resultant velocity distributions
NASA Technical Reports Server (NTRS)
Ryan, Eileen V.
1993-01-01
To study the dynamic fragmentation of rock to simulate asteroid collisions, we use a 2-D, continuum damage numerical hydrocode which models two-body impacts. This hydrocode monitors stress wave propagation and interaction within the target body, and includes a physical model for the formation and growth of cracks in rock. With this algorithm we have successfully reproduced fragment size distributions and mean ejecta speeds from laboratory impact experiments using basalt, and weak and strong mortar as target materials. Using the hydrocode, we have determined that the energy needed to fracture a body has a much stronger dependence on target size than predicted from most scaling theories. In addition, velocity distributions obtained indicate that mean ejecta speeds resulting from large-body collisions do not exceed escape velocities.
Bulf, Hermann; Macchi Cassia, Viola; de Hevia, Maria Dolores
2014-01-01
A number of studies have shown strong relations between numbers and oriented spatial codes. For example, perceiving numbers causes spatial shifts of attention depending upon numbers' magnitude, in a way suggestive of a spatially oriented, mental representation of numbers. Here, we investigated whether this phenomenon extends to non-symbolic numbers, as well as to the processing of the continuous dimensions of size and brightness, exploring whether different quantitative dimensions are equally mapped onto space. After a numerical (symbolic Arabic digits or non-symbolic arrays of dots; Experiment 1) or a non-numerical cue (shapes of different size or brightness level; Experiment 2) was presented, participants' saccadic response to a target that could appear either on the left or the right side of the screen was registered using an automated eye-tracker system. Experiment 1 showed that, both in the case of Arabic digits and dot arrays, right targets were detected faster when preceded by large numbers, and left targets were detected faster when preceded by small numbers. Participants in Experiment 2 were faster at detecting right targets when cued by large-sized shapes and left targets when cued by small-sized shapes, whereas brightness cues did not modulate the detection of peripheral targets. These findings indicate that looking at a symbolic or a non-symbolic number induces attentional shifts to a peripheral region of space that is congruent with the numbers' relative position on a mental number line, and that a similar shift in visual attention is induced by looking at shapes of different size. More specifically, results suggest that, while the dimensions of number and size spontaneously map onto an oriented space, the dimension of brightness seems to be independent at a certain level of magnitude elaboration from the dimensions of spatial extent and number, indicating that not all continuous dimensions are equally mapped onto space.
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier-method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so, balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
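The N = M/P estimator and its sensitivity to a small P can be made concrete with a delta-method standard error; the inputs below (objects distributed, survey proportion, design effect) are hypothetical:

```python
import math

def multiplier_estimate(M, p_hat, n_eff):
    """Multiplier-method size estimate N = M / P with a delta-method 95% CI.
    M: count of unique objects distributed; p_hat: survey proportion reporting
    receipt; n_eff: effective sample size (survey n divided by the design effect)."""
    N = M / p_hat
    se_p = math.sqrt(p_hat * (1 - p_hat) / n_eff)
    se_N = M * se_p / p_hat ** 2          # delta method: |dN/dP| * se(P)
    return N, (N - 1.96 * se_N, N + 1.96 * se_N)

# Hypothetical numbers: 500 objects distributed, 25% of a 400-person RDS
# survey report receipt, assumed design effect of 2.
print(multiplier_estimate(500, 0.25, 400 / 2))
```

The standard error grows rapidly as P shrinks, which is why the authors recommend design choices (longer reference periods, more objects distributed) that push P upward.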
Ejected Particle Size Distributions from Shocked Metal Surfaces
Schauer, M. M.; Buttler, W. T.; Frayer, D. K.; ...
2017-04-12
Here, we present size distributions for particles ejected from features machined onto the surface of shocked Sn targets. The functional form of the size distributions is assumed to be log-normal, and the characteristic parameters of the distribution are extracted from the measured angular distribution of light scattered from a laser beam incident on the ejected particles. We also found strong evidence for a bimodal distribution of particle sizes with smaller particles evolved from features machined into the target surface and larger particles being produced at the edges of these features.
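The two reported findings, log-normal size distributions and bimodality, can be combined into a two-component log-normal mixture, which is simply a Gaussian mixture in log-diameter. A sketch on synthetic data; the modes and widths are illustrative, not the paper's fitted parameters:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Synthetic bimodal ejecta: a fine log-normal mode plus a coarser mode,
# loosely mimicking particles from the features vs their edges.
d = np.concatenate([
    rng.lognormal(np.log(0.5), 0.5, 8000),  # ~0.5 um particles
    rng.lognormal(np.log(5.0), 0.4, 2000),  # ~5 um particles
])

# Fit a 2-component Gaussian mixture to the log-diameters
gmm = GaussianMixture(n_components=2, random_state=0).fit(np.log(d).reshape(-1, 1))
for w, m, v in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight {w:.2f}, median {np.exp(m):.2f} um, log-sd {np.sqrt(v):.2f}")
```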
Ruan, Jia; Ren, Dong-xia; Yang, Dan-ni; Long, Pin-pin; Zhao, Hong-yue; Wang, Yi-qi; Li, Yong-xin
2015-07-01
Our aim was to establish a rapid and sensitive method based on the polymerase chain reaction (PCR) combined with capillary electrophoresis-laser induced fluorescence (CE-LIF) and microchip capillary electrophoresis-laser induced fluorescence (MCE-LIF) for detecting adenoviruses in fecal samples. The DNA of adenovirus in fecal samples was extracted with commercial kits, and the conserved region of the hexon gene was selected as the target gene and amplified by PCR. After labeling with the highly sensitive nucleic acid fluorescent dyes SYBR Gold and SYBR Orange, respectively, the PCR amplification products were separated by CE and MCE under optimized conditions and detected with an LIF detector. PCR amplification products could be detected within 9 min by CE-LIF and 6 min by MCE-LIF under the optimized separation conditions. The sequenced PCR product showed good specificity in comparison with the prototype sequences from NCBI. The intra-day and inter-day relative standard deviations (RSD) of the size (bp) of the target DNA were in the ranges 1.14%-1.34% and 1.27%-2.76%, respectively, for CE-LIF, and 1.18%-1.48% and 2.85%-4.06%, respectively, for MCE-LIF. The detection limits were 2.33 × 10^2 copies/mL for CE-LIF and 2.33 × 10^3 copies/mL for MCE-LIF. The two proposed methods were applied to fecal samples, both showing high accuracy. The proposed PCR-CE-LIF and PCR-MCE-LIF methods can detect adenovirus in fecal samples rapidly, sensitively and specifically.
Ai, Tomohiko; Tabe, Yoko; Takemura, Hiroyuki; Kimura, Konobu; Takahashi, Toshihiro; Yang, Haeun; Tsuchiya, Koji; Konishi, Aya; Uchihashi, Kinya; Horii, Takashi; Ohsaka, Akimichi
2018-01-01
Morphological microscopic examinations of nucleated cells in body fluid (BF) samples are performed to screen for malignancy. However, the morphological differentiation is time-consuming and labor-intensive. This study aimed to develop a new flow-cytometry-based gating analysis mode, the "XN-BF gating algorithm", to detect malignant cells using an automated hematology analyzer, the Sysmex XN-1000. The XN-BF mode was equipped with the WDF white blood cell (WBC) differential channel. We added two algorithms to the WDF channel: Rule 1 detects larger and clumped cell signals compared to the leukocytes, targeting clustered malignant cells; Rule 2 detects middle-sized mononuclear cells containing fewer granules than neutrophils, with a fluorescence signal similar to monocytes, targeting hematological malignant cells and solid tumor cells. BF samples that meet at least one rule were flagged as malignant. To evaluate this novel gating algorithm, 92 various BF samples were collected. Manual microscopic differentiation with May-Grünwald Giemsa staining and WBC counting with a hemocytometer were also performed. The performance of these three methods was evaluated by comparison with the cytological diagnosis. The XN-BF gating algorithm achieved a sensitivity of 63.0% and a specificity of 87.8%, with a 68.0% positive predictive value and an 85.1% negative predictive value in detecting malignant-cell-positive samples. Manual microscopic WBC differentiation and WBC counting demonstrated sensitivities of 70.4% and 66.7%, and specificities of 96.9% and 92.3%, respectively. The XN-BF gating algorithm can be a feasible tool in hematology laboratories for prompt screening of malignant cells in various BF samples.
Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.
You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary
2011-02-01
The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting the total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, showing that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment of the mean cluster size alone or simultaneous adjustment of the mean cluster size and the number of clusters, and is a flexible alternative to, and a useful complement to, existing methods. Comparison indicated that the relative efficiency we define is greater than the relative efficiency in the literature under some conditions; under other conditions our measure might be less than the literature measure, underestimating the relative efficiency.
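For orientation, a widely used approximation from the literature (not the authors' noncentrality-based measure) inflates the equal-cluster-size design effect 1 + (m - 1)ρ to 1 + ((cv² + 1)m_bar - 1)ρ when cluster sizes vary with coefficient of variation cv. A sketch of the resulting sample-size adjustment, with hypothetical inputs:

```python
import math
from scipy.stats import norm

def n_equal_clusters(delta, sd, m, icc, alpha=0.05, power=0.8):
    """Individuals per arm for a cluster-randomized trial with equal cluster
    size m: the individually-randomized n inflated by 1 + (m - 1) * icc."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_ind = 2 * (z * sd / delta) ** 2
    return n_ind * (1 + (m - 1) * icc)

def deff_unequal(m_bar, cv, icc):
    """Common literature approximation of the design effect with variable
    cluster sizes: 1 + ((cv^2 + 1) * m_bar - 1) * icc."""
    return 1 + ((cv ** 2 + 1) * m_bar - 1) * icc

# Hypothetical trial: effect 0.3 SD, mean cluster size 20, ICC 0.05, cv 0.6
n_eq = n_equal_clusters(0.3, 1.0, m=20, icc=0.05)
inflation = deff_unequal(20, cv=0.6, icc=0.05) / (1 + 19 * 0.05)
print(math.ceil(n_eq), math.ceil(n_eq * inflation))
```

With these inputs the variable-cluster-size trial needs roughly 18% more individuals than its equal-cluster-size counterpart, which is the kind of adjustment the noncentrality-based approach makes exact.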
Yeasmin, Lubna; Akter, Shamima; Shahidul Islam, A M; Mizanur Rahman, Md; Akashi, Hidechika; Jesmin, Subrina
2014-07-01
This study aimed to assess whether teaching good cooking practices, food habits and sanitation to ultra-poor rural women in four rural communities of Rangpur district, Bangladesh, an area with a high density of extremely poor households, would improve the overall health of the community. The sample size was 200 respondents combined from the target and control areas. In the target area, twelve in-depth interviews and four focus group discussions were undertaken for knowledge dissemination. Descriptive and mixed-model analyses were performed. The results show that washing hands with soap was 1.35 times more likely in the target group than in the control group (p<0.01). Further, after the intervention, there was a significant improvement in hand-washing behaviour: before cutting vegetables, preparing food, feeding a child and eating, and after defecating and cleaning a baby (p<0.05). Also, the target group was more likely to boil their vegetables moderately and briefly, and was 19% less likely to use maximum heat when cooking vegetables, than the control group (p<0.01). Improved knowledge and skills training of ultra-poor women reduces the loss of nutrients during food preparation and increases hygiene through hand-washing in every-day life.
Ansar, Maria; Serrano, Daniel; Papademetriou, Iason; Bhowmick, Tridib Kumar; Muro, Silvia
2014-01-01
Targeting of drug carriers to cell-surface receptors involved in endocytosis is commonly used for intracellular drug delivery. However, most endocytic receptors mediate uptake via clathrin or caveolar pathways associated with ≤200-nm vesicles, restricting carrier design. We recently showed that endocytosis mediated by intercellular adhesion molecule 1 (ICAM-1), which differs from clathrin- and caveolar-mediated pathways, allows uptake of nano- and micro-carriers in cell culture and in vivo due to recruitment of cellular sphingomyelinases to the plasmalemma. This leads to ceramide generation at carrier binding sites and formation of actin stress-fibers, enabling engulfment and uptake of a wide size-range of carriers. Here we adapted this paradigm to enhance uptake of drug carriers targeted to receptors associated with size-restricted pathways. We coated sphingomyelinase onto model (polystyrene) submicro- and micro-carriers targeted to clathrin-associated mannose-6-phosphate receptor. In endothelial cells, this provided ceramide enrichment at the cell surface and actin stress-fiber formation, modifying the uptake pathway and enhancing carrier endocytosis without affecting targeting, endosomal transport, cell-associated degradation, or cell viability. This improvement depended on the carrier size and enzyme dose, and similar results were observed for other receptors (transferrin receptor) and cell types (epithelial cells). This phenomenon also enhanced tissue accumulation of carriers after intravenous injection in mice. Hence, it is possible to maintain targeting toward a selected receptor while bypassing natural size-restrictions of its associated endocytic route by functionalization of drug carriers with biological elements mimicking the ICAM-1 pathway. This strategy holds considerable promise to enhance flexibility of design of targeted drug delivery systems. PMID:24237309
Do icon arrays help reduce denominator neglect?
Garcia-Retamero, Rocio; Galesic, Mirta; Gigerenzer, Gerd
2010-01-01
Denominator neglect is the focus on the number of times a target event has happened (e.g., the number of treated and nontreated patients who die) without considering the overall number of opportunities for it to happen (e.g., the overall number of treated and nontreated patients). In 2 studies, we addressed the effect of denominator neglect in problems involving treatment risk reduction where the samples of treated and non-treated patients and the relative risk reduction were of different sizes. We also tested whether using icon arrays helps people take these different sample sizes into account. We especially focused on older adults, who are often more disadvantaged when making decisions about their health. Study 1 was conducted on a laboratory sample using a within-subjects design; study 2 was conducted on a nonstudent sample interviewed through the Web using a between-subjects design. The outcome measure was accuracy of understanding risk reduction. Participants often paid too much attention to numerators and insufficient attention to denominators when numerical information about treatment risk reduction was provided. Adding icon arrays to the numerical information, however, drew participants' attention to the denominators and helped them make more accurate assessments of treatment risk reduction. Icon arrays were equally helpful to younger and older adults. Building on previous research showing that problems with understanding numerical information often reside not in the mind but in the representation of the problem, the results show that icon arrays are an effective method of eliminating denominator neglect.
Modelling eye movements in a categorical search task
Zelinsky, Gregory J.; Adeli, Hossein; Peng, Yifan; Samaras, Dimitris
2013-01-01
We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. PMID:24018720
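The core mapping step can be sketched as follows; this is an illustrative toy using scikit-learn's LinearSVC on random features, not TAM's actual features, training data, or collicular blurring.

```python
# Sketch: turn SVM decision values into a pixel-by-pixel target-evidence map.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))       # hypothetical patch features
y_train = rng.integers(0, 2, size=200)     # 1 = target category, 0 = distractor
clf = LinearSVC(max_iter=10_000).fit(X_train, y_train)

X_image = rng.normal(size=(64 * 64, 16))   # one feature vector per image location
d = clf.decision_function(X_image)         # signed distance from the boundary
evidence = 1.0 / (1.0 + np.exp(-d))        # squash distances into (0, 1)
evidence_map = evidence.reshape(64, 64)    # evidence map to guide fixations
```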
NASA Astrophysics Data System (ADS)
Pinilla, P.; Tazzari, M.; Pascucci, I.; Youdin, A. N.; Garufi, A.; Manara, C. F.; Testi, L.; van der Plas, G.; Barenfeld, S. A.; Canovas, H.; Cox, E. G.; Hendler, N. P.; Pérez, L. M.; van der Marel, N.
2018-05-01
We analyze the dust morphology of 29 transition disks (TDs) observed with the Atacama Large Millimeter/submillimeter Array (ALMA) at (sub-)millimeter wavelengths. We perform the analysis in the visibility plane to characterize the total flux, cavity size, and shape of the ring-like structure. First, we found that the M_dust–M_⋆ relation is much flatter for TDs than the observed trends from samples of class II sources in different star-forming regions. This relation demonstrates that cavities open in high (dust) mass disks, independent of the stellar mass. The flatness of this relation contradicts the idea that TDs are a more evolved set of disks. Two potential reasons (not mutually exclusive) may explain this flat relation: the emission is optically thick and/or millimeter-sized particles are trapped in a pressure bump. Second, we discuss our results on the cavity size and ring width in the context of different physical processes for cavity formation. Photoevaporation is an unlikely leading mechanism for the origin of the cavity of any of the targets in the sample. Embedded giant planets or dead zones remain as potential explanations. Although both models predict correlations between the cavity size and the ring shape for different stellar and disk properties, we demonstrate that with the current resolution of the observations, it is difficult to obtain these correlations. Future higher-angular-resolution observations of TDs with ALMA will help discern between different potential origins of cavities in TDs.
Rapid screening of the antimicrobial efficacy of Ag zeolites.
Tosheva, L; Belkhair, S; Gackowski, M; Malic, S; Al-Shanti, N; Verran, J
2017-09-01
A semi-quantitative screening method was used to compare the killing efficacy of Ag zeolites against bacteria and yeast as a function of the zeolite type, crystal size and concentration. The method, which substantially reduced labor, consumables and waste and provided an excellent preliminary screen, was further validated by quantitative plate count experiments. Two pairs of zeolite X and zeolite beta with different sizes (ca. 200 nm and 2 μm for zeolite X and ca. 250 and 500 nm for zeolite beta) were tested against Escherichia coli (E. coli) and Candida albicans (C. albicans) at concentrations in the range 0.05-0.5 mg ml^-1. Reduction of the zeolite crystal size resulted in a decrease in the killing efficacy against both microorganisms. The semi-quantitative tests allowed convenient optimization of the zeolite concentrations to achieve targeted killing times. Zeolite beta samples showed higher activity than zeolite X despite their lower Ag content, which was attributed to the higher concentration of silver released from the zeolite beta samples. Cytotoxicity measurements using peripheral blood mononuclear cells (PBMCs) indicated that Ag zeolite X was more toxic than Ag zeolite beta. However, the trends for the dependence of cytotoxicity on zeolite crystal size at different zeolite concentrations differed between the two zeolites, and no general conclusions about zeolite cytotoxicity could be drawn from these experiments. This result indicates a complex relationship and the necessity of individual cytotoxicity measurements for each antimicrobial application based on the use of zeolites. Copyright © 2017 Elsevier B.V. All rights reserved.
Considering aspects of the 3Rs principles within experimental animal biology.
Sneddon, Lynne U; Halsey, Lewis G; Bury, Nic R
2017-09-01
The 3Rs - Replacement, Reduction and Refinement - are embedded into the legislation and guidelines governing the ethics of animal use in experiments. Here, we consider the advantages of adopting key aspects of the 3Rs into experimental biology, represented mainly by the fields of animal behaviour, neurobiology, physiology, toxicology and biomechanics. Replacing protected animals with less sentient forms or species, cells, tissues or computer modelling approaches has been broadly successful. However, many studies investigate specific models that exhibit a particular adaptation, or a species that is a target for conservation, such that their replacement is inappropriate. Regardless of the species used, refining procedures to ensure the health and well-being of animals prior to and during experiments is crucial for the integrity of the results and legitimacy of the science. Although the concepts of health and welfare are developed for model organisms, relatively little is known regarding non-traditional species that may be more ecologically relevant. Studies should reduce the number of experimental animals by employing the minimum suitable sample size. This is often calculated using power analysis, which is tied to making statistical inferences based on the P-value, yet P-values often leave scientists on shaky ground. We endorse focusing on effect sizes accompanied by confidence intervals as a more appropriate means of interpreting data; in turn, sample size could be calculated based on effect-size precision. Ultimately, the appropriate employment of the 3Rs principles in experimental biology empowers scientists in justifying their research, and results in higher-quality science. © 2017. Published by The Company of Biologists Ltd.
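A minimal sketch of the precision-based alternative endorsed above, assuming the usual large-sample standard error of Cohen's d for two equal groups (the article prescribes no specific formula): grow the per-group n until the 95% confidence interval for d is no wider than a chosen half-width.

```python
# Sketch: per-group sample size for a target precision of Cohen's d.
import math

def n_for_precision(d: float, half_width: float, z: float = 1.96) -> int:
    n = 2  # per group
    while True:
        # Large-sample SE of d for two groups of size n each.
        se = math.sqrt(2.0 / n + d**2 / (4.0 * n))
        if z * se <= half_width:
            return n
        n += 1

print(n_for_precision(d=0.5, half_width=0.2))  # 199 per group under these assumptions
```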
Two sampling methods yield distinct microbial signatures in the nasopharynges of asthmatic children.
Pérez-Losada, Marcos; Crandall, Keith A; Freishtat, Robert J
2016-06-16
The nasopharynx is a reservoir for pathogens associated with respiratory illnesses, such as asthma. Next-generation sequencing (NGS) has been used to characterize the nasopharyngeal microbiome during health and disease. Most studies so far have surveyed the nasopharynx as a whole; however, less is known about spatial variation (biogeography) in nasal microenvironments and how sampling techniques may capture that microbial diversity. We used targeted 16S rRNA MiSeq sequencing and two different sampling strategies [nasal washes (NW) and nasal brushes (NB)] to characterize the nasopharyngeal microbiota in 30 asthmatic children. Nasal brushing is more abrasive than nasal washing and targeted the inner portion of the inferior turbinate, a region expected to differ from other nasal microenvironments; nasal washing is not spatially specific. Our 30 × 2 nasal microbiomes generated 1,474,497 sequences, from which we identified an average of 157 and 186 OTUs per sample in the NW and NB groups, respectively. Microbiotas from NB showed significantly higher alpha-diversity than microbiotas from NW. The two nasal microbiotas were also distinct from each other (PCoA) and significantly differed in their community composition and abundance in at least 9 genera (effect size ≥1%). Nasopharyngeal microenvironments in asthmatic children contain microbiotas with different diversity and structure, and nasal washes and brushes capture that diversity differently. Future microbial studies of the nasopharynx need to be aware of potential spatial variation (biogeography).
Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks
NASA Astrophysics Data System (ADS)
Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.
Dominating sets provide key solutions to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by the graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of the MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating-set selection strategies. We showed that, despite its small set size, the MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for their practical applicability on large-scale real-world systems: 1.) small set size, 2.) minimal network information required for their construction scheme, 3.) fast and easy computational implementation, and 4.) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
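A minimal sketch of the property being exploited, using networkx on a synthetic Barabasi-Albert graph (illustrative only, not the authors' code): by maximality, every node outside an MIS must have a neighbor inside it, so an MIS is automatically a dominating set.

```python
# Sketch: a maximal independent set is a dominating set on a scale-free graph.
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)   # synthetic scale-free network
mis = set(nx.maximal_independent_set(G, seed=42))

# Check domination: every node outside the set has a neighbor inside it.
dominated = all(mis & set(G[v]) for v in G if v not in mis)
print(len(mis), dominated)   # small set size, True
```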
Laboratory Measurements of Single-Particle Polarimetric Spectrum
NASA Astrophysics Data System (ADS)
Gritsevich, M.; Penttila, A.; Maconi, G.; Kassamakov, I.; Helander, P.; Puranen, T.; Salmi, A.; Hæggström, E.; Muinonen, K.
2017-12-01
Measuring the scattering properties of different targets is important for material characterization, remote sensing applications, and for verifying theoretical results. Furthermore, simplifications are usually made when we model targets and compute their scattering properties, e.g., an ideal shape or constant optical parameters throughout the target material. Experimental studies help in understanding the link between the observed properties and computed results. Experimentally derived Mueller matrices of studied particles can be used as input for larger-scale scattering simulations, e.g., radiative transfer computations; this makes it possible to bypass the problem of using an idealized model for single-particle optical properties. While existing approaches offer ensemble- and orientation-averaged particle properties, our aim is to measure individual particles with controlled or known orientation. With the newly developed scatterometer, we aim to offer a novel possibility to measure single, small (down to μm-scale) targets and their polarimetric spectra. This work presents an experimental setup that measures light scattered by a fixed small particle with dimensions ranging between micrometer and millimeter sizes. The goal of our setup is nondestructive characterization of such particles by measuring light of multiple wavelengths scattered in 360° in a horizontal plane by an ultrasonically levitating sample, whilst simultaneously controlling its 3D position and orientation. We describe the principles and design of our instrument and its calibration. We also present example measurements of real samples. This study was conducted under the support from the European Research Council, in the frame of the Advanced Grant project No. 320773 `Scattering and Absorption of Electromagnetic Waves in Particulate Media' (SAEMPL).
Toraih, Eman A.; Ibrahiem, Afaf; Abdeldayem, Hala; Mohamed, Amany O.; Abdel-Daim, Mohamed M.
2017-01-01
Previous reports have suggested a significant association of aberrant miRNA expression with tumor initiation, progression and metastasis in cancer, including gastrointestinal (GI) cancers. The current preliminary study aimed to evaluate the relative expression levels of miR-196a2 and three of its selected apoptosis-related targets: ANXA1, DFFA and PDCD4, in a sample of GI cancer patients. Quantitative real-time PCR for miR-196a2 and its selected mRNA targets, as well as an immunohistochemical assay for annexin A1 protein expression, were performed in 58 tissues from different GI cancer samples. In addition, correlations with the clinicopathological features and an in silico network analysis of the selected molecular markers were analyzed. Stratified analyses by cancer site revealed elevated levels of miR-196a2 and low expression of the selected target genes. Annexin protein expression was positively correlated with its gene expression profile. In colorectal cancer, miR-196a over-expression was negatively correlated with annexin A1 protein expression (r = -0.738, p < 0.001), and both were indicators of unfavorable prognosis in terms of poor differentiation, larger tumor size, and advanced clinical stage. Taken together, aberrant expression of miR-196a2 and the selected apoptosis-related biomarkers might be involved in GI cancer development and progression and could have potential diagnostic and prognostic roles in these types of cancer, particularly colorectal cancer, provided the results are experimentally validated and confirmed in larger multi-center studies. PMID:29091952
Spallation-induced roughness promoting high spatial frequency nanostructure formation on Cr
NASA Astrophysics Data System (ADS)
Abou-Saleh, A.; Karim, E. T.; Maurice, C.; Reynaud, S.; Pigeon, F.; Garrelie, F.; Zhigilei, L. V.; Colombier, J. P.
2018-04-01
Interaction of ultrafast laser pulses with metal surfaces in the spallation regime can result in the formation of anisotropic nanoscale surface morphology commonly referred to as laser-induced periodic surface structures (LIPSS) or ripples. The surface structures generated by single-pulse irradiation of monocrystalline Cr samples are investigated experimentally and computationally for laser fluences that produce high spatial frequency nanostructures in the multi-pulse irradiation regime. Electron microscopy reveals a distinct response of samples with different crystallographic surface orientations, with (100) surfaces exhibiting the formation of a more refined nanostructure after single-pulse irradiation and more pronounced LIPSS after two laser pulses as compared to (110) surfaces. A large-scale molecular dynamics simulation of laser interaction with a (100) Cr target provides detailed information on the processes responsible for spallation of a liquid layer, redistribution of molten material, and rapid resolidification of the target. The nanoscale roughness of the resolidified surface predicted in the simulation features elongated frozen nanospikes, nanorims and nanocavities with dimensions and surface density similar to those in the surface morphology observed for the (100) Cr target with atomic force microscopy. The results of the simulation suggest that the types, sizes and dimensions of the nanoscale surface features are defined by the competition between the evolution of transient liquid structures generated in the spallation process and the rapid resolidification of the surface region of the target. The spallation-induced roughness is likely to play a key role in triggering the generation of high-frequency LIPSS upon irradiation by multiple laser pulses.
Herbold, Craig W.; Pelikan, Claus; Kuzyk, Orest; Hausmann, Bela; Angel, Roey; Berry, David; Loy, Alexander
2015-01-01
High throughput sequencing of phylogenetic and functional gene amplicons provides tremendous insight into the structure and functional potential of complex microbial communities. Here, we introduce a highly adaptable and economical PCR approach to barcoding and pooling libraries of numerous target genes. In this approach, we replace gene- and sequencing-platform-specific fusion primers with general, interchangeable barcoding primers, enabling nearly limitless customized barcode-primer combinations. Compared to barcoding with long fusion primers, our multiple-target-gene approach is more economical because it requires fewer primers overall and is based on short primers with generally lower synthesis and purification costs. To highlight our approach, we pooled over 900 different small-subunit rRNA and functional gene amplicon libraries obtained from various environmental or host-associated microbial community samples into a single, paired-end Illumina MiSeq run. Although the amplicon regions ranged in size from approximately 290 to 720 bp, we found no significant systematic sequencing bias related to amplicon length or gene target. Our results indicate that this flexible multiplexing approach produces large, diverse, and high-quality sets of amplicon sequence data for modern studies in microbial ecology. PMID:26236305
Opsahl, Stephen P.; Crow, Cassi L.
2014-01-01
During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.
HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE
NASA Technical Reports Server (NTRS)
De, Salvo L. J.
1994-01-01
HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value of consumer's risk and fraction nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as MIL-STD-105E, utilize the Binomial or Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes, primarily because of the difficulty of calculating the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric, and the difference in sample sizes can be significant. For example, a lot of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) under a Binomial sampling plan but only 273 under a Hypergeometric sampling plan, a savings of 127 units and a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
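A minimal sketch of the underlying calculation (not the original spreadsheet code): increase n until the probability of drawing zero nonconforming units, computed as a running product so no factorials are needed, falls below the consumer's risk. It reproduces the lot-of-400 example above.

```python
# Sketch: minimum n for a zero-acceptance hypergeometric sampling plan.
def hypergeom_sample_size(lot_size: int, defectives: int, confidence: float) -> int:
    beta = 1.0 - confidence                    # consumer's risk
    for n in range(1, lot_size + 1):
        p0 = 1.0
        for i in range(n):                     # P(all n sampled units conform)
            p0 *= (lot_size - defectives - i) / (lot_size - i)
        if p0 <= beta:
            return n
    return lot_size

# Lot of 400, 1% nonconforming (4 units), 99% confidence -> 273,
# versus 400 under the Binomial approximation.
print(hypergeom_sample_size(400, 4, 0.99))     # 273
```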
Study samples are too small to produce sufficiently precise reliability coefficients.
Charter, Richard A
2003-04-01
In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
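As a hedged illustration of the imprecision at the reported sample sizes, treating the reliability coefficient like a correlation and using the Fisher z interval (an approximation; the article's exact computations may differ):

```python
# Sketch: 95% confidence interval for a coefficient of 0.80 at different N.
import math

def r_confidence_interval(r: float, n: int, z_crit: float = 1.96):
    z = math.atanh(r)                      # Fisher z transform
    hw = z_crit / math.sqrt(n - 3)         # half-width in z space
    return math.tanh(z - hw), math.tanh(z + hw)

print(r_confidence_interval(0.80, 90))     # ~(0.71, 0.86) at the median N of 90
print(r_confidence_interval(0.80, 1000))   # ~(0.78, 0.82) with a much larger sample
```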
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
Label-free isolation of prostate circulating tumor cells using Vortex microfluidic technology.
Renier, Corinne; Pao, Edward; Che, James; Liu, Haiyan E; Lemaire, Clementine A; Matsumoto, Melissa; Triboulet, Melanie; Srivinas, Sandy; Jeffrey, Stefanie S; Rettig, Matthew; Kulkarni, Rajan P; Di Carlo, Dino; Sollier-Christen, Elodie
2017-01-01
There has been increased interest in utilizing non-invasive "liquid biopsies" to identify biomarkers for cancer prognosis and monitoring, and to isolate genetic material that can predict response to targeted therapies. Circulating tumor cells (CTCs) have emerged as such a biomarker, providing both genetic and phenotypic information about tumor evolution, potentially from both primary and metastatic sites. Currently available CTC isolation approaches, including immunoaffinity and size-based filtration, have focused on high capture efficiency but at the cost of lower purity and often long, manual sample preparation, which limits the use of captured CTCs for downstream analyses. Here, we describe the use of the microfluidic Vortex Chip for size-based isolation of CTCs from 22 patients with advanced prostate cancer and, from an enumeration study on 18 of these patients, find that we can capture CTCs with high purity (from 1.74 to 37.59%) and efficiency (from 1.88 to 93.75 CTCs/7.5 mL) in less than 1 h. Interestingly, more atypical large circulating cells were identified in five age-matched healthy donors (46-77 years old; 1.25-2.50 CTCs/7.5 mL) than in five healthy donors <30 years old (21-27 years old; 0.00 CTC/7.5 mL). Using a threshold calculated from the five age-matched healthy donors (3.37 CTCs/mL), we identified CTCs in 80% of the prostate cancer patients. We also found that a fraction of the cells collected (11.5%) did not express epithelial prostate markers (cytokeratin and/or prostate-specific antigen) and that some instead expressed markers of epithelial-mesenchymal transition, i.e., vimentin and N-cadherin. We also show that the purity and DNA yield of isolated cells are amenable to targeted amplification and next-generation sequencing, without whole genome amplification, identifying unique mutations in 10 of 15 samples and 0 of 4 healthy samples.
Kanda, Kojun; Pflug, James M; Sproul, John S; Dasenko, Mark A; Maddison, David R
2015-01-01
In this paper we explore high-throughput Illumina sequencing of nuclear protein-coding, ribosomal, and mitochondrial genes in small, dried insects stored in natural history collections. We sequenced one tenebrionid beetle and 12 carabid beetles ranging in size from 3.7 to 9.7 mm in length that have been stored in various museums for 4 to 84 years. Although we chose a number of old, small specimens for which we expected low sequence recovery, we successfully recovered at least some low-copy nuclear protein-coding genes from all specimens. For example, in one 56-year-old beetle, 4.4 mm in length, our de novo assembly recovered about 63% of approximately 41,900 nucleotides in a target suite of 67 nuclear protein-coding gene fragments, and 70% using a reference-based assembly. Even in the least successfully sequenced carabid specimen, reference-based assembly yielded fragments that were at least 50% of the target length for 34 of 67 nuclear protein-coding gene fragments. Exploration of alternative references for reference-based assembly revealed few signs of bias created by the reference. For all specimens we recovered almost complete copies of ribosomal and mitochondrial genes. We verified the general accuracy of the sequences through comparisons with sequences obtained from PCR and Sanger sequencing, including of conspecific, fresh specimens, and through phylogenetic analysis that tested the placement of sequences in predicted regions. A few possible inaccuracies in the sequences were detected, but these rarely affected the phylogenetic placement of the samples. Although our sample sizes are low, an exploratory regression study suggests that the dominant factor in predicting success at recovering nuclear protein-coding genes is a high number of Illumina reads, with success at PCR of COI and killing by immersion in ethanol being secondary factors; in analyses of only high-read samples, the primary significant explanatory variable was body length, with small beetles being more successfully sequenced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bang, W.; Quevedo, H. J.; Bernstein, A. C.
We measured the average deuterium cluster size within a mixture of deuterium clusters and helium gas by detecting Rayleigh scattering signals. The average cluster size from the gas mixture was comparable to that from a pure deuterium gas when the total backing pressure and temperature of the gas mixture were the same as those of the pure deuterium gas. According to these measurements, the average size of deuterium clusters depends on the total pressure and not the partial pressure of deuterium in the gas mixture. To characterize the cluster source size further, a Faraday cup was used to measure the average kinetic energy of the ions resulting from Coulomb explosion of deuterium clusters upon irradiation by an intense ultrashort pulse. The deuterium ions indeed acquired a similar amount of energy from the mixture target, corroborating our measurements of the average cluster size. As the addition of helium atoms did not reduce the resulting ion kinetic energies, the reported results confirm the utility of using a known cluster source for beam-target-fusion experiments by introducing a secondary target gas.
Inter-molecular β-sheet structure facilitates lung-targeting siRNA delivery
NASA Astrophysics Data System (ADS)
Zhou, Jihan; Li, Dong; Wen, Hao; Zheng, Shuquan; Su, Cuicui; Yi, Fan; Wang, Jue; Liang, Zicai; Tang, Tao; Zhou, Demin; Zhang, Li-He; Liang, Dehai; Du, Quan
2016-03-01
Size-dependent passive targeting based on the characteristics of tissues is a basic mechanism of drug delivery. While nanometer-sized particles are efficiently captured by the liver and spleen, micron-sized particles are most likely entrapped within the lung owing to its unique capillary structure and physiological features. To exploit this property in lung-targeting siRNA delivery, we designed and studied a multi-domain peptide named K-β, which was able to form inter-molecular β-sheet structures. Results showed that K-β peptides and siRNAs formed stable complex particles of 60 nm when mixed together. A critical property of such particles was that, after being intravenously injected into mice, they further associated into loose, micron-sized aggregates and thus were effectively entrapped within the capillaries of the lung, leading to passive accumulation and gene silencing. The large aggregates can dissociate or break down under the shear stress generated by blood flow, alleviating pulmonary embolism. Besides the lung, siRNA enrichment and targeted gene silencing were also observed in the liver. This drug delivery strategy, together with the low toxicity, biodegradability, and programmability of peptide carriers, shows great potential for in vivo applications.
Magneto-Hydrodynamics Based Microfluidics
Qian, Shizhi; Bau, Haim H.
2009-01-01
In microfluidic devices, it is necessary to propel samples and reagents from one part of the device to another, stir fluids, and detect the presence of chemical and biological targets. Given the small size of these devices, the above tasks are far from trivial. Magnetohydrodynamics (MHD) offers an elegant means to control fluid flow in microdevices without a need for mechanical components. In this paper, we review the theory of MHD for low conductivity fluids and describe various applications of MHD such as fluid pumping, flow control in fluidic networks, fluid stirring and mixing, circular liquid chromatography, thermal reactors, and microcoolers. PMID:20046890
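For background, the pumping mechanism the review builds on can be written in the standard magnetohydrodynamic form below (generic relations, not formulas quoted from the paper); in low-conductivity electrolytes the induced term u × B is negligible, so electrodes largely impose the current density J directly:

```latex
\mathbf{J} = \sigma\left(\mathbf{E} + \mathbf{u}\times\mathbf{B}\right),
\qquad
\mathbf{f} = \mathbf{J}\times\mathbf{B}
```

Integrating the Lorentz body force f over the electrode region gives the effective pressure head that drives flow through the channel without moving parts.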
Pulse-Flow Microencapsulation System
NASA Technical Reports Server (NTRS)
Morrison, Dennis R.
2006-01-01
The pulse-flow microencapsulation system (PFMS) is an automated system that continuously produces a stream of liquid-filled microcapsules for delivery of therapeutic agents to target tissues. Prior microencapsulation systems have relied on batch processes that involve transfer of batches between different apparatuses for different stages of production followed by sampling for acquisition of quality-control data, including measurements of size. In contrast, the PFMS is a single, microprocessor-controlled system that performs all processing steps, including acquisition of quality-control data. The quality-control data can be used as real-time feedback to ensure the production of large quantities of uniform microcapsules.
Clan Genomics and the Complex Architecture of Human Disease
Belmont, John W.; Boerwinkle, Eric
2013-01-01
Human diseases are caused by alleles that encompass the full range of variant types, from single-nucleotide changes to copy-number variants, and these variations span a broad frequency spectrum, from the very rare to the common. The picture emerging from analysis of whole-genome sequences, the 1000 Genomes Project pilot studies, and targeted genomic sequencing derived from very large sample sizes reveals an abundance of rare and private variants. One implication of this realization is that recent mutation may have a greater influence on disease susceptibility or protection than is conferred by variations that arose in distant ancestors. PMID:21962505
Proton acceleration by irradiation of isolated spheres with an intense laser pulse
Ostermayr, Tobias M.; Haffa, D.; Hilz, P.; ...
2016-09-26
We report on experiments irradiating isolated plastic spheres with a peak laser intensity of 2-3 × 10^20 W cm^-2. With a laser focal spot size of 10 μm full width half maximum (FWHM), the sphere diameter was varied between 520 nm and 19.3 μm. Maximum proton energies of ~25 MeV are achieved for targets matching the focal spot size of 10 μm in diameter or being slightly smaller. For smaller spheres the kinetic energy distributions of protons become nonmonotonic, indicating a change in the accelerating mechanism from ambipolar expansion towards a regime dominated by effects caused by Coulomb repulsion of ions. The energy conversion efficiency from laser energy to proton kinetic energy is optimized when the target diameter matches the laser focal spot size, with efficiencies reaching the percent level. The change of proton acceleration efficiency with target size can be attributed to the reduced cross-sectional overlap of sub-focus targets with the laser. The reported experimental observations are in line with 3D3V particle-in-cell simulations. In conclusion, these experiments make use of well-defined targets and point out pathways for future applications and experiments.
Perceived area and the luminosity threshold.
Bonato, F; Gilchrist, A L
1999-07-01
Observers made forced-choice opaque/luminous responses to targets of varying luminance and varying size presented (1) on the wall of a laboratory, (2) as a disk within an annulus, and (3) embedded within a Mondrian array presented within a vision tunnel. Lightness matches were also made for nearby opaque surfaces. The results show that the threshold luminance value at which a target begins to appear self-luminous increases with its size, defined as perceived size, not retinal size. More generally, the larger the target, the more an increase in its luminance induces grayness/blackness into the surround and the less it induces luminosity into the target, and vice versa. Corresponding to this luminosity/grayness tradeoff, there appears to be an invariant: Across a wide variety of conditions, a target begins to appear luminous when its luminance is about 1.7 times that of a surface that would appear white in the same illumination. These results show that the luminosity threshold behaves like a surface lightness value--the maximum lightness value, in fact--and is subject to the same laws of anchoring (such as the area rule proposed by Li & Gilchrist, 1999) as surface lightness.
Lagares, Antonio; Agaras, Betina; Bettiol, Marisa P; Gatti, Blanca M; Valverde, Claudio
2015-07-01
Species-specific genetic markers are crucial to develop faithful and sensitive molecular methods for the detection and identification of Pseudomonas aeruginosa (Pa). We have previously set up a PCR-RFLP protocol targeting oprF, the gene encoding the genus-specific outer membrane porin F, whose strong conservation and marked sequence diversity allowed detection and differentiation of environmental isolates (Agaras et al., 2012). Here, we evaluated the ability of the PCR-RFLP assay to genotype clinical isolates previously identified as Pa by conventional microbiological methods within a collection of 62 presumptive Pa isolates from different pediatric clinical samples and different sections of the Hospital de Niños "Sor María Ludovica" from La Plata, Argentina. All isolates but one gave an oprF amplicon consistent with that from reference Pa strains. The sequence of the smaller-sized amplicon revealed that the isolate was in fact a Pseudomonas mendocina strain. The oprF RFLP pattern generated with TaqI or HaeIII nucleases matched those of reference Pa strains for 59 isolates (96%). The other two Pa isolates (4%) revealed a different RFLP pattern based on HaeIII digestion, although oprF sequencing confirmed that the Pa identification was correct. We next tested the effectiveness of the PCR-RFLP assay for detecting pseudomonads directly in clinical samples of pediatric fibrocystic patients, without sample cultivation. The expected amplicon and its cognate RFLP profile were obtained for all samples in which Pa was previously detected by cultivation-dependent methods. Altogether, these results provide the basis for the application of the oprF PCR-RFLP protocol to directly detect and identify Pa and other non-Pa pseudomonads in fibrocystic clinical samples. Copyright © 2015 Elsevier B.V. All rights reserved.
Factors influencing the specific interaction of Neisseria gonorrhoeae with transforming DNA.
Goodman, S D; Scocca, J J
1991-01-01
The specific interaction of transformable Neisseria gonorrhoeae with DNA depends on the recognition of specific 10-residue target sequences. The relative affinity for DNA between 3 and 17 kb in size appears to be linearly related to the frequency of targets on the segment and is unaffected by absolute size. The average frequency of targets in chromosomal DNA of N. gonorrhoeae appears to be approximately one per 1,000 bp. PMID:1909325
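A small sketch of how target frequency on a segment might be tallied (illustrative only; GCCGTCTGAA is the commonly cited 10-bp gonococcal uptake sequence, and the fragment below is random rather than genomic, so it will contain far fewer targets than the roughly one per kilobase reported above):

```python
# Sketch: count 10-bp uptake-sequence targets on both strands of a fragment.
import random

DUS = "GCCGTCTGAA"                # commonly cited gonococcal uptake sequence
COMP = str.maketrans("ACGT", "TGCA")

def count_targets(seq: str) -> int:
    rc = seq.translate(COMP)[::-1]          # reverse complement strand
    return seq.count(DUS) + rc.count(DUS)

fragment = "".join(random.choice("ACGT") for _ in range(10_000))  # toy 10-kb segment
print(count_targets(fragment))    # random DNA: ~0.02 expected hits, vs ~10 for genomic DNA
```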
Strategic Deterrence in the Post-START Era
1992-04-15
[Fragmentary excerpt from a scanned report] The document discusses targeting of electricity and supplies: weapons allegedly could be delivered so accurately that electric power plants could be struck in such a fashion that repair time would be extended. Given not only the locations of power plants but also their relative outputs, an analyst could construct a plot of electric power production capacity versus number of generating targets, with target point values assigned in proportion to plant size. A surviving table fragment reads: all sizes, 17 targets, 58.82 points per target, 1000 points total.
Cratering and penetration experiments in teflon targets at velocities from 1 to 7 km/s
NASA Technical Reports Server (NTRS)
Horz, Friedrich; Cintala, Mark; Bernhard, Ronald P.; Cardenas, Frank; Davidson, William; Haynes, Gerald; See, Thomas H.; Winkler, Jerry; Knight, Jeffrey
1994-01-01
Approximately 20 sq m of protective thermal blankets, largely composed of Teflon, were retrieved from the Long Duration Exposure Facility after the spacecraft spent approximately 5.7 years in space. Examination of these blankets revealed that they contained thousands of hypervelocity impact features ranging from micron-sized craters to penetration holes several millimeters in diameter. We conducted impact experiments to reproduce such features and to understand the relationships between projectile size and the resulting crater or penetration hole diameter over a wide range of impact velocities. Such relationships are needed to derive the size and mass frequency distribution and flux of natural and man-made particles in low-earth orbit. Powder propellant and light-gas guns were used to launch soda-lime glass spheres into pure Teflon targets at velocities ranging from 1 to 7 km/s. Target thickness varied over more than three orders of magnitude, from infinite-halfspace targets to very thin films. Cratering and penetration of massive Teflon targets are dominated by brittle failure and the development of extensive spall zones at the target's front and, if penetrated, the target's rear side. Mass removal by spallation at the back side of Teflon targets may be so severe that the absolute penetration hole diameter can become larger than that of a standard crater. The crater diameter in infinite-halfspace Teflon targets increases, at otherwise constant impact conditions, with encounter velocity as V^0.44. In contrast, the penetration hole size in very thin foils is essentially unaffected by impact velocity. Penetrations at target thicknesses intermediate to these extremes will scale with variable exponents of V. Our experimental matrix is sufficiently systematic and complete, up to 7 km/s, to make reasonable recommendations for velocity scaling of Teflon craters and penetrations. We specifically suggest that cratering behavior and its associated equations apply to all impacts in which the shock-pulse duration of the projectile is sufficiently short that each feature can be assigned a unique projectile size, provided an impact velocity is known or assumed. This calibration seems superior to the traditional ballistic-limit approach.
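The velocity scalings reported above can be restated schematically (a proportionality; the paper's fitted prefactors are not reproduced here):

```latex
D_c \;\propto\; d_p\,V^{0.44} \quad \text{(craters in thick Teflon targets)},
\qquad
D_h \;\approx\; \text{const. in } V \quad \text{(penetration holes in very thin foils)}
```

where D_c is the crater diameter, D_h the penetration-hole diameter, d_p the projectile diameter, and V the encounter velocity; intermediate target thicknesses interpolate with variable exponents of V.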
7 CFR 51.1406 - Sample for grade or size determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...
Nikfarjam, Ali; Shokoohi, Mostafa; Shahesmaeili, Armita; Haghdoost, Ali Akbar; Baneshi, Mohammad Reza; Haji-Maghsoudi, Saiedeh; Rastegari, Azam; Nasehi, Abbas Ali; Memaryan, Nadereh; Tarjoman, Termeh
2016-05-01
For a better understanding of the current situation of drug use in Iran, we utilized the network scale-up approach to estimate the prevalence of illicit drug use in the entire country. We administered a self-administered, street-based questionnaire to 7535 passersby from the general public over 18 years of age, using street-based random-walk quota sampling (based on gender, age and socio-economic status) in 31 provinces of Iran. The sample size in each province was approximately 400, ranging from 200 to 1000. In each province, 75% of the sample was recruited from the capital and the remaining 25% from one of the large cities of that province through stratified sampling. The questionnaire comprised questions on demographic information as well as questions to measure the total network size of participants and their network size in each of seven drug-use groups: opium; shire (a combination of opium residue and pure opium); crystal methamphetamine; heroin/crack (which in the Iranian context is a cocaine-free drug that mostly contains heroin, codeine, morphine and caffeine, with or without other drugs); hashish; methamphetamine/LSD/ecstasy; and injecting drugs. The estimated size of each group was adjusted for transmission and barrier ratios. The most common type of illicit drug used was opium, with a prevalence of 1500 per 100,000 population, followed by shire (660), crystal methamphetamine (590), hashish (470), heroin/crack (350), methamphetamine/LSD/ecstasy (300) and injecting drugs (280). All types of substances were more common among men than women. The use of opium, shire and injecting drugs was more common in individuals over 30, whereas the use of stimulants and hashish was highest among individuals between 18 and 30 years of age. It seems that younger individuals and women are more inclined to use new synthetic drugs such as crystal methamphetamine. Extending preventive programs, especially among youth, and scaling up harm-reduction services should be the main priorities in the prevention and control of substance use in Iran. Because of poor service coverage and high stigma among women, more targeted programs for this affected population are needed. Copyright © 2016 Elsevier B.V. All rights reserved.
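For reference, the basic network scale-up estimator takes the standard form below (generic notation, not the paper's exact model): m_i is the number of members of the hidden group reported by respondent i, c_i is that respondent's personal network size, N is the general population size, and the raw estimate is corrected by the transmission ratio tau and barrier ratio delta mentioned above:

```latex
\hat{S} \;=\; \frac{\sum_i m_i}{\sum_i c_i}\, N \;\cdot\; \frac{1}{\tau\,\delta}
```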
Focant, Jean-François; Eppe, Gauthier; Massart, Anne-Cécile; Scholl, Georges; Pirard, Catherine; De Pauw, Edwin
2006-10-13
We report on the use of a state-of-the-art method for the measurement of selected polychlorinated dibenzo-p-dioxins, polychlorinated dibenzofurans and polychlorinated biphenyls in human serum specimens. The sample preparation procedure is based on manual small-size solid-phase extraction (SPE) followed by automated clean-up and fractionation using multi-sorbent liquid chromatography columns. SPE cartridges and all clean-up columns are disposable. Samples are processed in batches of 20 units, including one blank control (BC) sample and one quality control (QC) sample. The analytical measurement is performed using gas chromatography coupled to isotope dilution high-resolution mass spectrometry. The sample throughput corresponds to one series of 20 samples per day, from sample reception to data quality cross-check and reporting, once the procedure has been started and successive series of samples are being produced. Four analysts are required to ensure proper performance of the procedure. The entire procedure has been validated under International Organization for Standardization (ISO) 17025 criteria and further tested on more than 1500 unknown samples during various epidemiological studies. The method is further discussed in terms of reproducibility, efficiency and long-term stability with respect to the 35 target analytes. Data related to quality control and limit of quantification (LOQ) calculations are also presented and discussed.
Role of step size and max dwell time in anatomy based inverse optimization for prostate implants
Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha
2013-01-01
In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323
Coordination between posture and movement: interaction between postural and accuracy constraints.
Berrigan, Félix; Simoneau, Martin; Martin, Olivier; Teasdale, Normand
2006-04-01
We examined the interaction between the control of posture and an aiming movement. Balance control was varied by having subjects aim at a target from a seated or a standing position. The aiming difficulty was varied using a Fitts'-like paradigm (movement amplitude=30 cm; target widths=0.5, 1.0, 2.5 and 5 cm). For both postural conditions, all targets were within the reaching space in front of the subjects and kept at a fixed relative position with respect to the subjects' body. Hence, for a given target size, the aiming was differentiated only by the postural context (seated vs. upright standing). For both postural conditions, movement time (MT) followed the well-known Fitts' law, that is, it increased with decreasing target size. For the smallest target width, however, the increase in MT was greater when subjects were standing than when they were seated, suggesting that the difficulty of the aiming task could not be determined solely by the target size. When standing, a coordination between the trunk and the arm was observed. Also, as the target size decreased, the center of pressure (CP) displacement increased without any increase in CP speed, suggesting that the subjects were regulating their CP to provide a stable reference to assist the hand movement. When seated, the CP kinematics was scaled with the hand movement kinematics. Increasing the index of difficulty led to a strong correlation between the hand speed and CP displacement and speed. The complex organization between posture and movement was revealed only by examining the specific interactions between speed-accuracy and postural constraints.
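For reference, the difficulty manipulation follows the standard Fitts formulation (one common variant; the authors may have used the Shannon form):

```latex
MT = a + b\,\mathrm{ID}, \qquad \mathrm{ID} = \log_2\!\left(\frac{2A}{W}\right)
```

With A = 30 cm, ID ranges from log2(12) ≈ 3.6 bits at W = 5 cm to log2(120) ≈ 6.9 bits at W = 0.5 cm, so the smallest target roughly doubles the informational demand of the aim.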
Multi-image acquisition-based distance sensor using agile laser spot beam.
Riza, Nabeel A; Amin, M Junaid
2014-09-01
We present a novel laser-based distance measurement technique that uses multiple-image-based spatial processing to enable distance measurements. Compared with the first-generation distance sensor using spatial processing, the modified sensor is no longer hindered by the classic Rayleigh axial resolution limit for the propagating laser beam at its minimum beam waist location. The proposed high-resolution distance sensor design uses an electronically controlled variable focus lens (ECVFL) in combination with an optical imaging device, such as a charge-coupled device (CCD), to project onto the target a series of laser spots whose sizes differ from the minimal spot size achievable at that target distance, and to capture their images. By exploiting the unique relationship between the target-located spot sizes and the varying ECVFL focal length for each target distance, the proposed distance sensor can compute the target distance with a distance measurement resolution better than the axial resolution given by the Rayleigh criterion. Using a 30 mW 633 nm He-Ne laser coupled with an electromagnetically actuated liquid ECVFL, along with a 20 cm focal length bias lens, and using five spot images captured per target position by a CCD-based Nikon camera, a proof-of-concept distance sensor is successfully implemented in the laboratory over target ranges from 10 to 100 cm with a demonstrated sub-cm axial resolution, which is better than the axial Rayleigh resolution limit at these target distances. Applications for the proposed, potentially cost-effective distance sensor are diverse and include industrial inspection and measurement and 3D object shape mapping and imaging.
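The spot-size relationship the sensor exploits rests on standard Gaussian-beam propagation (background optics, not the authors' calibration model):

```latex
w(z) = w_0\sqrt{1 + \left(\frac{z - z_0}{z_R}\right)^2},
\qquad
z_R = \frac{\pi w_0^2}{\lambda}
```

Sweeping the ECVFL focal length shifts the waist location z_0 and waist size w_0 in a known way, so several measured spot sizes w(z) at one unknown target distance z over-determine z and can be fit to better than the single-beam Rayleigh limit.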
Suprathreshold contrast summation over area using drifting gratings.
McDougall, Thomas J; Dickinson, J Edwin; Badcock, David R
2018-04-01
This study investigated contrast summation over area for moving targets applied to a fixed-size contrast pedestal, a technique originally developed by Meese and Summers (2007) to demonstrate strong spatial summation of contrast for static patterns at suprathreshold contrast levels. Target contrast increments (drifting gratings) were applied to either the entire 20% contrast pedestal (a full fixed-size drifting grating), or in the configuration of a checkerboard pattern in which the target increment was applied to every alternate check region. These checked stimuli are known as "Battenberg patterns" and the sizes of the checks were varied (within a fixed overall area), across conditions, to measure summation behavior. Results showed that sensitivity to an increment covering the full pedestal was significantly higher than that for the Battenberg patterns (areal summation). Two observers showed strong summation across all check sizes (0.71°-3.33°), and for two other observers the summation ratio dropped to levels consistent with probability summation once check size reached 2.00°. Therefore, areal summation with moving targets does operate at high contrast, and is subserved by relatively large receptive fields covering a square area extending up to at least 3.33° × 3.33° for some observers. Previous studies in which the spatial structure of the pedestal and target covaried were unable to demonstrate spatial summation, potentially due to increasing amounts of suppression from gain-control mechanisms which increases as pedestal size increases. This study shows that when this is controlled, by keeping the pedestal the same across all conditions, extensive summation can be demonstrated.
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level that ensures type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
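A minimal sketch of the simulation idea described above, with illustrative settings rather than the paper's exact algorithm: estimate the variance from the pooled interim data while ignoring group labels (blinded), re-size the trial with the usual normal-approximation formula, then apply the standard two-sample t-test.

```python
# Sketch: empirical type I error of blinded sample size re-estimation.
import numpy as np
from scipy import stats

def simulate(delta=0.0, sigma=1.0, n_pilot=20, target_delta=0.5,
             alpha=0.05, power=0.9, n_sims=20_000, seed=1):
    rng = np.random.default_rng(seed)
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, sigma, n_pilot)
        b = rng.normal(delta, sigma, n_pilot)
        s2 = np.concatenate([a, b]).var(ddof=1)   # blinded one-sample variance
        n_new = max(n_pilot, int(np.ceil(2 * s2 * z**2 / target_delta**2)))
        a = np.concatenate([a, rng.normal(0.0, sigma, n_new - n_pilot)])
        b = np.concatenate([b, rng.normal(delta, sigma, n_new - n_pilot)])
        if stats.ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / n_sims

print(simulate(delta=0.0))   # empirical type I error under the null
```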
Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz
2017-01-01
Sample size determination is usually taught from theory and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than lectures alone. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only versus a lecture combined with a smartphone application for calculating sample sizes, explored factors affecting the post-test score after training in sample size calculation, and investigated participants' attitudes toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups: 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures using a smartphone application were given in the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum of 10, 95% CI: 2.4 - 2.9) than participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those without plans to conduct research projects (by 0.9 points, 95% CI: 0.5 - 1.4). The majority of participants had a positive attitude towards using the smartphone application for learning sample size calculation.
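As context, the kind of calculation such workshops (and the application) teach is typically a closed-form formula; a minimal sketch for estimating a proportion within a margin of error, with conventional textbook inputs assumed:

```python
from math import ceil
from scipy.stats import norm

def n_for_proportion(p, d, conf=0.95):
    """Sample size to estimate a proportion p to within margin d,
    assuming an effectively infinite population."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return ceil(z ** 2 * p * (1 - p) / d ** 2)

# the familiar textbook case: p = 0.5, 5% margin, 95% confidence -> 385
print(n_for_proportion(0.5, 0.05))
```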
Kim, Hyoungrae; Jang, Cheongyun; Yadav, Dharmendra K; Kim, Mi-Hyun
2017-03-23
The accuracy of any 3D-QSAR, pharmacophore, or 3D-similarity based chemometric target fishing model is highly dependent on a reasonable sample of active conformations. Although a number of diverse conformational sampling algorithms exist that exhaustively generate enough conformers, model building methods rely on an explicit number of common conformers. In this work, we have attempted to develop clustering algorithms that can automatically find a reasonable number of representative conformer ensembles from an asymmetric dissimilarity matrix generated with the OpenEye toolkit. RMSD was the key descriptor (variable): each column of the N × N matrix was treated as one of N variables describing the relationship (network) between the conformer (in a row) and the other N conformers. This approach was used to evaluate the performance of well-known clustering algorithms by comparing their ability to generate representative conformer ensembles, and to test them over different matrix transformation functions with respect to stability. In the network, the representative conformer group could be resampled by four kinds of algorithms with implicit parameters. The directed dissimilarity matrix is the only input to the clustering algorithms. The Dunn index, Davies-Bouldin index, eta-squared values, and omega-squared values were used to evaluate the clustering algorithms with respect to compactness and explanatory power. The evaluation also covered the reduction (abstraction) rate of the data, the correlation between the sizes of the population and the samples, the computational complexity, and the memory usage. Every algorithm could find representative conformers automatically without any user intervention, and they reduced the data to 14-19% of the original values within at most 1.13 s per sample. The clustering methods are simple and practical, as they are fast and do not require any explicit parameters. RCDTC presented the maximum Dunn and omega-squared values among the four algorithms, in addition to a consistent reduction rate between the population size and the sample size. The performance of the clustering algorithms was consistent over different transformation functions. Moreover, the clustering method can also be applied to molecular dynamics sampling simulation results.
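RCDTC itself is not reproduced here; the following is a generic stand-in showing how representative conformers can be picked from a precomputed asymmetric dissimilarity matrix with an implicit, data-driven cutoff (scikit-learn >= 1.2 is assumed for the metric keyword; the median threshold and medoid rule are assumptions of this sketch, not the paper's method):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def representative_conformers(D):
    """Pick representative conformers from an N x N RMSD dissimilarity matrix.

    D may be asymmetric; it is symmetrized before clustering. The cluster
    count is set implicitly by a data-driven distance threshold, so the user
    supplies no explicit parameters."""
    S = 0.5 * (D + D.T)                     # symmetrize the directed matrix
    cutoff = np.median(S[S > 0])            # implicit, data-driven threshold
    labels = AgglomerativeClustering(
        n_clusters=None, metric="precomputed", linkage="average",
        distance_threshold=cutoff).fit_predict(S)
    reps = []                               # take the medoid of each cluster
    for k in np.unique(labels):
        idx = np.where(labels == k)[0]
        reps.append(idx[np.argmin(S[np.ix_(idx, idx)].sum(axis=1))])
    return np.array(reps), labels

# usage with a random matrix standing in for conformer-conformer RMSDs
rng = np.random.default_rng(1)
D = rng.uniform(0.1, 3.0, (50, 50))
np.fill_diagonal(D, 0.0)
reps, labels = representative_conformers(D)
print(f"{len(reps)} representatives kept from 50 conformers")
```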
Pion emission in α-particle interactions with various targets of nuclear emulsion detector
NASA Astrophysics Data System (ADS)
Abdelsalam, A.; Abou-Moussa, Z.; Rashed, N.; Badawy, B. M.; Amer, H. A.; Osman, W.; El-Ashmawy, M. M.; Abdallah, N.
2015-09-01
The behavior of relativistic hadron multiplicity in 4He-nucleus interactions is investigated. The experiment is carried out at 2.1 A and 3.7 A GeV (Dubna energy) to search for the effect of incident energy on interactions inside different emulsion target nuclei. Data are presented in terms of the number of emitted relativistic hadrons in both forward and backward angular zones, and the dependence on target size is presented. For this purpose the events are divided into groups according to interactions with H, CNO, Em, and AgBr target nuclei; the separation of events into these groups is based on Glauber's multiple scattering theory approach. Features suggestive of a decay mechanism appear characteristic of the backward emission of relativistic hadrons. The results strongly support the assumption that relativistic hadrons may already be emitted during the de-excitation of the excited target nucleus, in a manner resembling compound-nucleus disintegration. Consistent with the limiting fragmentation hypothesis beyond 1 A GeV, the target size is the main parameter affecting backward production of relativistic hadrons. The incident energy is the principal factor responsible for forward relativistic hadron production, implying that forward production is a creation process. However, the target size is an effective parameter, as is the projectile size, in the geometrical picture of the nuclear fireball model. The data are analyzed in the framework of the FRITIOF model.
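The paper separates events via a Glauber-model approach; as a rough illustration, conventional emulsion practice assigns target groups from the number of heavily ionizing tracks Nh, sketched below with the standard (assumed) cutoffs:

```python
from collections import Counter

def target_group(n_h):
    """Assign an emulsion event to a target group from the number of heavily
    ionizing tracks Nh (conventional cutoffs; the paper's actual separation
    follows a Glauber multiple-scattering approach)."""
    if n_h <= 1:
        return "H"      # quasi-free hydrogen / most peripheral events
    elif n_h <= 7:
        return "CNO"    # light emulsion nuclei
    else:
        return "AgBr"   # heavy emulsion nuclei

events_nh = [0, 3, 12, 7, 1, 9, 2]   # toy event sample
print(Counter(target_group(n) for n in events_nh))
```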
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
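A simulation-based route to the sample size requirement the abstract refers to, shown for the two-group special case of the Welch test; effect sizes, variances, and the search step are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def welch_power(n, mu1, mu2, sd1, sd2, alpha=0.05, reps=4000, seed=0):
    """Monte Carlo power of Welch's two-sample t-test with equal n per group."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.normal(mu1, sd1, n)
        y = rng.normal(mu2, sd2, n)
        hits += stats.ttest_ind(x, y, equal_var=False).pvalue < alpha
    return hits / reps

def welch_n(target_power=0.8, **kw):
    """Smallest equal per-group n (in steps of 5) reaching the target power."""
    n = 5
    while welch_power(n, **kw) < target_power:
        n += 5
    return n

# heterogeneous variances: sd 1 vs 2, mean difference 0.5
print(welch_n(mu1=0.0, mu2=0.5, sd1=1.0, sd2=2.0))
```

Because the variances are unequal, a conventional equal-variance formula would misstate the requirement; simulating the Welch test directly sidesteps that.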
Sample Size Determination for Regression Models Using Monte Carlo Methods in R
ERIC Educational Resources Information Center
Beaujean, A. Alexander
2014-01-01
A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
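The article works in R; sketched below is the analogous Monte Carlo approach in Python (statsmodels assumed available), estimating the power to detect a hypothetical slope in simple linear regression:

```python
import numpy as np
import statsmodels.api as sm

def regression_power(n, beta, sd_e=1.0, alpha=0.05, reps=2000, seed=0):
    """Monte Carlo power to detect a slope beta in simple linear regression."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        y = beta * x + rng.normal(scale=sd_e, size=n)
        fit = sm.OLS(y, sm.add_constant(x)).fit()
        hits += fit.pvalues[1] < alpha          # p-value of the slope
    return hits / reps

# scan candidate sample sizes for roughly 80% power at beta = 0.3
for n in (50, 100, 150):
    print(n, regression_power(n, beta=0.3))
```

The appeal of the Monte Carlo route is that the data-generating step can be made as realistic as needed (skewed errors, missingness, clustering) where closed-form formulae no longer apply.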
Yang, Shan; Al-Hashimi, Hashim M.
2016-01-01
A growing number of studies employ time-averaged experimental data to determine dynamic ensembles of biomolecules. While it is well known that different ensembles can satisfy experimental data to within error, the extent and nature of these degeneracies, and their impact on the accuracy of ensemble determination, remain poorly understood. Here, we use simulations and a recently introduced metric for assessing ensemble similarity to explore degeneracies in determining ensembles using NMR residual dipolar couplings (RDCs), with specific application to A-form helices in RNA. Various target ensembles were constructed representing different domain-domain orientational distributions that are confined to a topologically restricted (<10%) conformational space. Five independent sets of ensemble-averaged RDCs were then computed for each target ensemble, and a 'sample and select' scheme was used to identify degenerate ensembles that satisfy the RDCs to within experimental uncertainty. We find that ensembles of different sizes that can differ significantly from the target ensemble (by as much as ΣΩ ~ 0.4, where ΣΩ varies between 0 and 1 for maximum and minimum ensemble similarity, respectively) can satisfy the ensemble-averaged RDCs. These deviations increase with the number of unique conformers and the breadth of the target distribution, and result in significant uncertainty in determining conformational entropy (as large as 5 kcal/mol at T = 298 K). Nevertheless, the RDC-degenerate ensembles are biased towards populated regions of the target ensemble and capture other essential features of the distribution, including its shape. Our results identify ensemble size as a major source of uncertainty in determining ensembles and suggest that NMR interactions such as RDCs and spin relaxation, on their own, do not carry the information needed to determine conformational entropy at a useful level of precision. The framework introduced here provides a general approach for exploring degeneracies in ensemble determination for different types of experimental data. PMID:26131693
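A toy version of the 'sample and select' scheme, using a synthetic conformer pool with built-in redundancy so that the RDC degeneracy the authors describe is visible; the pool construction, RDC values, and error level are all synthetic assumptions for illustration:

```python
import itertools
import numpy as np

def rdc_chi2(pool, idx, target, sigma):
    """Reduced chi-square between ensemble-averaged and target RDCs."""
    return np.mean(((pool[list(idx)].mean(axis=0) - target) / sigma) ** 2)

rng = np.random.default_rng(3)
# synthetic pool: 30 conformers from 6 conformational clusters, 25 RDCs each
centers = rng.normal(size=(6, 25))
pool = centers[np.repeat(np.arange(6), 5)] + 0.05 * rng.normal(size=(30, 25))

sigma = 0.1                          # assumed RDC measurement uncertainty
true_idx = (0, 5, 10)                # hidden three-member 'target' ensemble
target = pool[list(true_idx)].mean(axis=0) + rng.normal(0, sigma, 25)

# 'sample and select': keep every size-3 ensemble fitting the data to within
# error (a slightly relaxed chi-square cutoff for this toy example)
degenerate = [idx for idx in itertools.combinations(range(30), 3)
              if rdc_chi2(pool, idx, target, sigma) <= 1.5]
print(f"{len(degenerate)} of 4060 candidate ensembles satisfy the RDCs")
```

Many distinct index sets fit equally well because near-duplicate conformers are interchangeable under averaging, which is the essence of the degeneracy problem the paper quantifies with ΣΩ.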