Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F
2015-01-01
Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification are substantially more resource-consuming than the field expedition itself. In such systems, an increasingly large sample size will eventually yield diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, we seek (1) to determine the minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based community ecology research, and (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, with increased evenness resulting in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases where resource availability is the limiting factor for conducting a project (e.g., a small university, time available to conduct the research), statistically viable results can still be obtained with a smaller investment.
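A minimal sketch (not the authors' code) of the kind of subsampling experiment described above: per-sample counts are drawn at several sizes from a hypothetical community matrix, and the resulting Bray-Curtis dissimilarity structure is compared with the structure obtained from very large counts. All parameters (20 samples, 30 taxa, the abundance gradient) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def bray_curtis(mat):
    """Pairwise Bray-Curtis dissimilarities between rows of a count matrix."""
    n = mat.shape[0]
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = np.abs(mat[i] - mat[j]).sum() / (mat[i] + mat[j]).sum()
    return d

# Hypothetical community: 20 samples x 30 taxa along a compositional gradient
true_props = rng.dirichlet(np.linspace(0.5, 2.0, 30), size=20)
reference = bray_curtis(np.array([rng.multinomial(5000, p) for p in true_props]))

# How well do small per-sample counts reproduce the full-count dissimilarity structure?
iu = np.triu_indices(20, k=1)
for n_ind in (20, 58, 200, 1000):
    sub = np.array([rng.multinomial(n_ind, p) for p in true_props])
    r = np.corrcoef(reference[iu], bray_curtis(sub)[iu])[0, 1]  # matrix correlation
    print(f"{n_ind:5d} individuals/sample -> correlation with full counts: {r:.3f}")
```

The correlation rises steeply at first and then flattens, which is the diminishing-returns behaviour the abstract describes.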
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is also essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides, by simulation, the sample size required to achieve 80% power under various sizes of the mediation effect, within-subject correlations, and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution-of-the-product method, and the bootstrap method. Among the three methods of testing the mediation effect, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution-of-the-product method performed similarly and were more powerful than Sobel's method, as reflected by their relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation); a larger ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most commonly encountered in practice are also provided for convenient use. Extensive simulation studies showed that the distribution-of-the-product and bootstrap methods have superior performance to Sobel's method; the product method is recommended for use in practice because it requires less computation time than bootstrapping. An R package has been developed that implements the product method for sample size determination in longitudinal mediation study designs.
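The power simulation below is a simplified, single-level sketch of the Sobel test described above (the paper's model is multilevel); effect sizes, error variances, and sample sizes are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

def ols_slope(x, y):
    """Simple linear regression slope and its standard error."""
    xc, yc = x - x.mean(), y - y.mean()
    slope = (xc * yc).sum() / (xc * xc).sum()
    resid = yc - slope * xc
    se = np.sqrt((resid ** 2).sum() / (len(x) - 2) / (xc * xc).sum())
    return slope, se

def sobel_power(n, a=0.3, b=0.3, n_sim=2000, z_crit=1.959964):
    """Empirical power of Sobel's z = ab / sqrt(a^2*se_b^2 + b^2*se_a^2)
    for a single-level X -> M -> Y mediation model."""
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        a_hat, se_a = ols_slope(x, m)
        b_hat, se_b = ols_slope(m, y)  # b path from y ~ m (no direct effect simulated)
        z = (a_hat * b_hat) / np.sqrt(a_hat**2 * se_b**2 + b_hat**2 * se_a**2)
        hits += abs(z) > z_crit
    return hits / n_sim

for n in (50, 100, 200, 400):
    print(n, sobel_power(n))
```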
2017-01-01
A large body of evidence supports the effectiveness of larger health warnings on cigarette packages. However, there is limited research examining attitudes toward such warning labels, which has potential implications for implementation of larger warning labels. The purpose of the current study was to examine attitudes toward larger warning sizes on cigarette packages and examine variables associated with more favorable attitudes. In a nationally representative survey of U.S. adults (N = 5,014), participants were randomized to different warning size conditions, assessing attitude toward “a health warning that covered (25, 50, 75) % of a cigarette pack.” SAS logistic regression survey procedures were used to account for the complex survey design and sampling weights. Across experimental groups, nearly three-quarters (72%) of adults had attitudes supportive of larger warning labels on cigarette packs. Among the full sample and smokers only (N = 1,511), most adults had favorable attitudes toward labels that covered 25% (78.2% and 75.2%, respectively), 50% (70% and 58.4%, respectively), and 75% (67.9% and 61%, respectively) of a cigarette pack. Young adults, females, racial/ethnic minorities, and non-smokers were more likely to have favorable attitudes toward larger warning sizes. Among smokers only, females and those with higher quit intentions held more favorable attitudes toward larger warning sizes. Widespread support exists for larger warning labels on cigarette packages among U.S. adults, including among smokers. Our findings support the implementation of larger health warnings on cigarette packs in the U.S. as required by the 2009 Tobacco Control Act. PMID:28253257
Meta-analysis of genome-wide association from genomic prediction models
USDA-ARS's Scientific Manuscript database
A limitation of many genome-wide association studies (GWA) in animal breeding is that there are many loci with small effect sizes; thus, larger sample sizes (N) are required to guarantee suitable power of detection. To increase sample size, results from different GWA can be combined in a meta-analys...
Almutairy, Meznah; Torng, Eric
2018-01-01
Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
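The two sampling schemes compared above can be illustrated in a few lines of code. This is a generic sketch, not the authors' implementation: fixed sampling keeps every w-th k-mer start position, while a basic minimizer scheme keeps the lexicographically smallest k-mer in every window of w consecutive k-mers.

```python
def fixed_sampling(seq, k, w):
    """Positions of every w-th k-mer (fixed sampling)."""
    return list(range(0, len(seq) - k + 1, w))

def minimizer_sampling(seq, k, w):
    """Positions of window minimizers: the lexicographically smallest k-mer
    in each window of w consecutive k-mers (a basic minimizer scheme)."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    picked = set()
    for start in range(len(kmers) - w + 1):
        picked.add(min(range(start, start + w), key=lambda i: kmers[i]))
    return sorted(picked)

seq = "ACGTACGTGGTACCGTTAGCATCGGATCCGTATTGCAC"
print(len(fixed_sampling(seq, k=8, w=4)), len(minimizer_sampling(seq, k=8, w=4)))
# With the same window, the minimizer scheme typically retains roughly twice as many
# positions as fixed sampling, consistent with the index-size difference reported above.
```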
The development of rapid assessment methods has become a priority for many organizations that want to report on the condition of wetlands at larger scales requiring many sampling sites. To have faith in these rapid methods, however, requires that they be verified with more compr...
Corstjens, Paul L A M; Nyakundi, Ruth K; de Dood, Claudia J; Kariuki, Thomas M; Ochola, Elizabeth A; Karanja, Diana M S; Mwinzi, Pauline N M; van Dam, Govert J
2015-04-22
Accurate determination of Schistosoma infection rates in low endemic regions to examine progress towards interruption of transmission and elimination requires highly sensitive diagnostic tools. An existing lateral flow (LF) based test, demonstrating ongoing infections through detection of worm circulating anodic antigen (CAA), was improved for sensitivity through implementation of a protocol allowing increased sample input. Urine is the preferred sample as collection is non-invasive and sample volume is generally not a restriction. Centrifugal filtration devices provided a method to concentrate supernatant of urine samples extracted with trichloroacetic acid (TCA). For field trials a practical sample volume of 2 mL urine allowed detection of CAA down to 0.3 pg/mL. The method was evaluated on a set of urine samples (n = 113) from an S. mansoni endemic region (Kisumu, Kenya) and compared to stool microscopy (Kato Katz, KK). In this analysis true positivity was defined as a sample with either a positive KK or UCAA test. Implementation of the concentration method increased clinical sensitivity (Sn) from 44 to 98% when moving from the standard 10 μL (UCAA10 assay) to 2000 μL (UCAA2000 assay) urine sample input. Sn for KK ranged from 23-35% for a duplicate KK (single stool, two slides) to 52% for a six-fold KK (three consecutive-day stools, two slides). The UCAA2000 assay indicated 47 positive samples with CAA concentration above 0.3 pg/mL. The six-fold KK detected 25 egg positives; 1 sample with 2 eggs detected in the 6-fold KK was not identified with the UCAA2000 assay. Larger sample input increased Sn of the UCAA assay to a level indicating 'true' infection. Only a single 2 mL urine sample is needed, but analysing larger sample volumes could still increase test accuracy. The UCAA2000 test is an appropriate candidate for accurate identification of all infected individuals in low-endemic regions. Assay materials do not require refrigeration and collected urine samples may be stored and transported to central test laboratories without the need to be frozen.
Reproducibility of preclinical animal research improves with heterogeneity of study samples
Vogt, Lucile; Sena, Emily S.; Würbel, Hanno
2018-01-01
Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495
Khoo, T-L; Xiros, N; Guan, F; Orellana, D; Holst, J; Joshua, D E; Rasko, J E J
2013-08-01
The CELL-DYN Emerald is a compact bench-top hematology analyzer that can be used for a three-part white cell differential analysis. To determine its utility for analysis of human and mouse samples, we evaluated this machine against the larger CELL-DYN Sapphire and Sysmex XT2000iV hematology analyzers. 120 human (normal and abnormal) and 30 mouse (normal and abnormal) samples were analyzed on both the CELL-DYN Emerald and CELL-DYN Sapphire or Sysmex XT2000iV analyzers. For mouse samples, the CELL-DYN Emerald analyzer required manual recalibration based on the histogram populations. Analysis of the CELL-DYN Emerald showed excellent precision, within accepted ranges (white cell count CV% = 2.09%; hemoglobin CV% = 1.68%; platelets CV% = 4.13%). Linearity was excellent (R² ≥ 0.99), carryover was minimal (<1%), and overall interinstrument agreement was acceptable for both human and mouse samples. Comparison between the CELL-DYN Emerald and Sapphire analyzers for human samples or Sysmex XT2000iV analyzer for mouse samples showed excellent correlation for all parameters. The CELL-DYN Emerald was generally comparable to the larger reference analyzer for both human and mouse samples. It would be suitable for use in satellite research laboratories or as a backup system in larger laboratories. © 2012 John Wiley & Sons Ltd.
Precision of channel catfish catch estimates using hoop nets in larger Oklahoma reservoirs
Stewart, David R.; Long, James M.
2012-01-01
Hoop nets are rapidly becoming the preferred gear type used to sample channel catfish Ictalurus punctatus, and many managers have reported that hoop nets effectively sample channel catfish in small impoundments (<200 ha). However, the utility and precision of this approach in larger impoundments have not been tested. We sought to determine how the number of tandem hoop net series affected the catch of channel catfish and the time involved in using 16 tandem hoop net series in larger impoundments (>200 ha). Hoop net series were fished once, set for 3 d; we then used Monte Carlo bootstrapping techniques to estimate the number of net series required to achieve two levels of precision (relative standard errors [RSEs] of 15 and 25) at two levels of confidence (80% and 95%). Sixteen hoop net series were effective at obtaining an RSE of 25 with 80% and 95% confidence in all but one reservoir. Achieving an RSE of 15 was often not possible with 16 series and required 18-96 hoop net series, depending on the desired level of confidence. We estimated that an hour was needed, on average, to deploy and retrieve three hoop net series, which meant that 16 hoop net series per reservoir could be set within one day and retrieved within one day. The estimated number of net series needed to achieve an RSE of 25 or 15 was positively associated with the coefficient of variation (CV) of the sample but not with reservoir surface area or relative abundance. Our results suggest that hoop nets are capable of providing reasonably precise estimates of channel catfish relative abundance and that the relationship with the CV of the sample reported herein can be used to determine the sampling effort for a desired level of precision.
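A rough sketch of the bootstrap calculation described above, using invented catch counts rather than the study's data: for each candidate number of net series, catches are resampled with replacement and the smallest number whose relative standard error meets the target with the stated confidence is reported.

```python
import numpy as np

rng = np.random.default_rng(2)

def series_needed(catches, target_rse=25, confidence=0.80, n_boot=2000, k_max=150):
    """Smallest number of net series whose bootstrap RSE (100*SE/mean) meets the
    target at the stated confidence level."""
    catches = np.asarray(catches, dtype=float)
    for k in range(3, k_max + 1):
        ok = 0
        for _ in range(n_boot):
            s = rng.choice(catches, size=k, replace=True)
            if s.mean() == 0:
                continue
            rse = 100 * s.std(ddof=1) / np.sqrt(k) / s.mean()
            ok += rse <= target_rse
        if ok / n_boot >= confidence:
            return k
    return None

# Hypothetical catches from 16 tandem hoop-net series in one reservoir
catches = [4, 0, 7, 12, 3, 9, 1, 5, 22, 6, 2, 8, 0, 15, 4, 10]
print(series_needed(catches, target_rse=25), series_needed(catches, target_rse=15))
```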
Image analysis of representative food structures: application of the bootstrap method.
Ramírez, Cristian; Germain, Juan C; Aguilera, José M
2009-08-01
Images (for example, photomicrographs) are routinely used as qualitative evidence of the microstructure of foods. In quantitative image analysis it is important to estimate the area (or volume) to be sampled, the field of view, and the resolution. The bootstrap method is proposed to estimate the size of the sampling area as a function of the coefficient of variation (CV(Bn)) and standard error (SE(Bn)) of the bootstrap taking sub-areas of different sizes. The bootstrap method was applied to simulated and real structures (apple tissue). For simulated structures, 10 computer-generated images were constructed containing 225 black circles (elements) and different coefficients of variation (CV(image)). For apple tissue, 8 images of apple tissue containing cellular cavities with different CV(image) were analyzed. Results confirmed that for simulated and real structures, increasing the size of the sampling area decreased the CV(Bn) and SE(Bn). Furthermore, there was a linear relationship between the CV(image) and CV(Bn). For example, to obtain a CV(Bn) = 0.10 in an image with CV(image) = 0.60, a sampling area of 400 x 400 pixels (11% of whole image) was required, whereas if CV(image) = 1.46, a sampling area of 1000 x 1000 pixels (69% of whole image) became necessary. This suggests that a large dispersion of element sizes in an image requires increasingly larger sampling areas or a larger number of images.
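As a loose illustration of the idea (random square windows rather than the paper's exact sub-area scheme, and a synthetic binary image rather than photomicrographs), the snippet below shows the bootstrap coefficient of variation shrinking as the sampling window grows.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic binary "structure": 1 = element (e.g., a cavity), 0 = background
image = (rng.random((1200, 1200)) < 0.15).astype(float)

def bootstrap_cv(image, window, n_boot=200):
    """CV of the element area fraction over randomly placed square windows."""
    h, w = image.shape
    fracs = np.empty(n_boot)
    for b in range(n_boot):
        r = rng.integers(0, h - window)
        c = rng.integers(0, w - window)
        fracs[b] = image[r:r + window, c:c + window].mean()
    return fracs.std(ddof=1) / fracs.mean()

for window in (100, 200, 400, 800):
    print(f"{window}x{window} px window -> CV = {bootstrap_cv(image, window):.3f}")
```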
Single point estimation of phenytoin dosing: a reappraisal.
Koup, J R; Gibaldi, M; Godolphin, W
1981-11-01
A previously proposed method for estimation of phenytoin dosing requirement using a single serum sample obtained 24 hours after an intravenous loading dose (18 mg/kg) has been re-evaluated. Using more realistic values for the volume of distribution of phenytoin (0.4 to 1.2 L/kg), simulations indicate that the proposed method will fail to consistently predict dosage requirements. Additional simulations indicate that two samples obtained during the 24 hour interval following the iv loading dose could be used to more reliably predict phenytoin dose requirement. Because of the nonlinear relationship which exists between phenytoin dose administration rate (R0) and the mean steady-state serum concentration (Css), small errors in prediction of the required R0 result in much larger errors in Css.
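A small worked example of the nonlinearity mentioned in the last sentence, using the Michaelis-Menten form commonly assumed for phenytoin; the Vmax and Km values are illustrative, not taken from the paper.

```python
# Css = Km * R0 / (Vmax - R0): the steady-state concentration implied by
# a dosing rate R0 under Michaelis-Menten elimination.
Vmax = 500.0  # mg/day (illustrative)
Km = 4.0      # mg/L  (illustrative)

def css(r0):
    return Km * r0 / (Vmax - r0)

for r0 in (300, 330, 360, 390, 420):  # dosing rates, mg/day
    print(f"R0 = {r0} mg/day -> Css = {css(r0):5.1f} mg/L")
# A roughly 10% increase in R0 near Vmax produces a far larger proportional increase
# in Css, which is why small errors in the predicted dose give large errors in Css.
```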
Microfluidic-Based sample chips for radioactive solutions
Tripp, J. L.; Law, J. D.; Smith, T. E.; ...
2015-01-01
Historical nuclear fuel cycle process sampling techniques required sample volumes ranging in the tens of milliliters. The radiation levels experienced by analytical personnel and equipment, in addition to the waste volumes generated from analysis of these samples, have been significant. These sample volumes also impacted accountability inventories of required analytes during process operations. To mitigate radiation dose and other issues associated with the historically larger sample volumes, a microcapillary sample chip was chosen for further investigation. The ability to obtain microliter volume samples coupled with a remote automated means of sample loading, tracking, and transporting to the analytical instrument would greatly improve analytical efficiency while reducing both personnel exposure and radioactive waste volumes. Sample chip testing was completed to determine the accuracy, repeatability, and issues associated with the use of microfluidic sample chips used to supply µL sample volumes of lanthanide analytes dissolved in nitric acid for introduction to an analytical instrument for elemental analysis.
MicroRaman measurements for nuclear fuel reprocessing applications
Casella, Amanda; Lines, Amanda; Nelson, Gilbert; ...
2016-12-01
Treatment and reuse of used nuclear fuel is a key component in closing the nuclear fuel cycle. Solvent extraction reprocessing methods that have been developed contain various steps tailored to the separation of specific radionuclides, which are highly dependent upon solution properties. The instrumentation used to monitor these processes must be robust, require little or no maintenance, and be able to withstand harsh environments such as high radiation fields and aggressive chemical matrices. Our group has been investigating the use of optical spectroscopy for the on-line monitoring of actinides, lanthanides, and acid strength within fuel reprocessing streams. This paper will focus on the development and application of a new MicroRaman probe for on-line, real-time monitoring of U(VI)/nitrate ion/nitric acid in solutions relevant to used nuclear fuel reprocessing. Previous research has successfully demonstrated the applicability on the macroscopic scale, using sample probes requiring larger solution volumes. In an effort to minimize waste and reduce dose to personnel, we have modified this technique to allow measurement at the microfluidic scale using a Raman microprobe. Under the current sampling environment, Raman samples typically require upwards of 10 mL. Using the new sampling system, we can sample volumes of 10 μL or less, a scale reduction of over 1,000-fold in sample size. Finally, this paper will summarize our current work in this area, including comparisons between the macroscopic and microscopic probes for detection limits, optimized channel focusing, and application in a flow cell with varying levels of HNO3 and UO2(NO3)2.
Evaluation of the Biological Sampling Kit (BiSKit) for Large-Area Surface Sampling
Buttner, Mark P.; Cruz, Patricia; Stetzenbach, Linda D.; Klima-Comba, Amy K.; Stevens, Vanessa L.; Emanuel, Peter A.
2004-01-01
Current surface sampling methods for microbial contaminants are designed to sample small areas and utilize culture analysis. The total number of microbes recovered is low because a small area is sampled, making detection of a potential pathogen more difficult. Furthermore, sampling of small areas requires a greater number of samples to be collected, which delays the reporting of results, taxes laboratory resources and staffing, and increases analysis costs. A new biological surface sampling method, the Biological Sampling Kit (BiSKit), designed to sample large areas and to be compatible with testing with a variety of technologies, including PCR and immunoassay, was evaluated and compared to other surface sampling strategies. In experimental room trials, wood laminate and metal surfaces were contaminated by aerosolization of Bacillus atrophaeus spores, a simulant for Bacillus anthracis, into the room, followed by settling of the spores onto the test surfaces. The surfaces were sampled with the BiSKit, a cotton-based swab, and a foam-based swab. Samples were analyzed by culturing, quantitative PCR, and immunological assays. The results showed that the large surface area (1 m²) sampled with the BiSKit resulted in concentrations of B. atrophaeus in samples that were up to 10-fold higher than the concentrations obtained with the other methods tested. A comparison of wet and dry sampling with the BiSKit indicated that dry sampling was more efficient (efficiency, 18.4%) than wet sampling (efficiency, 11.3%). The sensitivities of detection of B. atrophaeus on metal surfaces were 42 ± 5.8 CFU/m² for wet sampling and 100.5 ± 10.2 CFU/m² for dry sampling. These results demonstrate that the use of a sampling device capable of sampling larger areas results in higher sensitivity than that obtained with currently available methods and has the advantage of sampling larger areas, thus requiring collection of fewer samples per site. PMID:15574898
Imaging samples larger than the field of view: the SLS experience
NASA Astrophysics Data System (ADS)
Vogiatzis Oikonomidis, Ioannis; Lovric, Goran; Cremona, Tiziana P.; Arcadu, Filippo; Patera, Alessandra; Schittny, Johannes C.; Stampanoni, Marco
2017-06-01
Volumetric datasets with micrometer spatial and sub-second temporal resolutions are nowadays routinely acquired using synchrotron X-ray tomographic microscopy (SRXTM). Although SRXTM technology allows the examination of multiple samples with short scan times, many specimens are larger than the field-of-view (FOV) provided by the detector. The extension of the FOV in the direction perpendicular to the rotation axis remains non-trivial. We present a method that can efficiently increase the FOV merging volumetric datasets obtained by region-of-interest tomographies in different 3D positions of the sample with a minimal amount of artefacts and with the ability to handle large amounts of data. The method has been successfully applied for the three-dimensional imaging of a small number of mouse lung acini of intact animals, where pixel sizes down to the micrometer range and short exposure times are required.
Effect of Common Cryoprotectants on Critical Warming Rates and Ice Formation in Aqueous Solutions
Hopkins, Jesse B.; Badeau, Ryan; Warkentin, Matthew; Thorne, Robert E.
2012-01-01
Ice formation on warming is of comparable or greater importance to ice formation on cooling in determining survival of cryopreserved samples. Critical warming rates required for ice-free warming of vitrified aqueous solutions of glycerol, dimethyl sulfoxide, ethylene glycol, polyethylene glycol 200 and sucrose have been measured for warming rates of order 10 to 10⁴ K/s. Critical warming rates are typically one to three orders of magnitude larger than critical cooling rates. Warming rates vary strongly with cooling rates, perhaps due to the presence of small ice fractions in nominally vitrified samples. Critical warming and cooling rate data spanning orders of magnitude in rates provide rigorous tests of ice nucleation and growth models and their assumed input parameters. Current models with current best estimates for input parameters provide a reasonable account of critical warming rates for glycerol solutions at high concentrations/low rates, but overestimate both critical warming and cooling rates by orders of magnitude at lower concentrations and larger rates. In vitrification protocols, minimizing concentrations of potentially damaging cryoprotectants while minimizing ice formation will require ultrafast warming rates, as well as fast cooling rates to minimize the required warming rates. PMID:22728046
Empirical Tests of Acceptance Sampling Plans
NASA Technical Reports Server (NTRS)
White, K. Preston, Jr.; Johnson, Kenneth L.
2012-01-01
Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that these provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
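For the attributes (binomial) case mentioned above, the operating characteristics of a single-sampling plan follow directly from the binomial distribution. The plan parameters and quality levels below are hypothetical, chosen only to show how the Type I and Type II risks are evaluated.

```python
from math import comb

def accept_prob(n, c, p):
    """Probability of accepting a lot with fraction defective p under a
    single-sampling attributes plan: inspect n items, accept if <= c are defective."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

n, c = 125, 3                # hypothetical plan
aql, ltpd = 0.01, 0.06       # hypothetical acceptable / rejectable quality levels
print("producer's risk (Type I): ", round(1 - accept_prob(n, c, aql), 3))
print("consumer's risk (Type II):", round(accept_prob(n, c, ltpd), 3))
```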
Flow cytometer measurement of binding assays
Saunders, George C.
1987-01-01
A method of measuring the result of a binding assay that does not require separation of fluorescent smaller particles is disclosed. In a competitive binding assay the smaller fluorescent particles coated with antigen compete with antigen in the sample being analyzed for available binding sites on larger particles. In a sandwich assay, the smaller, fluorescent spheres coated with antibody attach themselves to molecules containing antigen that are attached to larger spheres coated with the same antibody. The separation of unattached, fluorescent smaller particles is made unnecessary by only counting the fluorescent events triggered by the laser of a flow cytometer when the event is caused by a particle with a light scatter measurement within a certain range corresponding to the presence of larger particles.
Pituitary gland volumes in bipolar disorder.
Clark, Ian A; Mackay, Clare E; Goodwin, Guy M
2014-12-01
Bipolar disorder has been associated with increased Hypothalamic-Pituitary-Adrenal axis function. The mechanism is not well understood, but there may be associated increases in pituitary gland volume (PGV), and these small increases may be functionally significant. However, research investigating PGV in bipolar disorder reports mixed results. The aim of the current study was twofold: first, to assess PGV in two novel samples of patients with bipolar disorder and matched healthy controls; second, to perform a meta-analysis comparing PGV across a larger sample of patients and matched controls. Sample 1 consisted of 23 established patients and 32 matched controls. Sample 2 consisted of 39 medication-naïve patients and 42 matched controls. PGV was measured on structural MRI scans. Seven further studies were identified comparing PGV between patients and matched controls (total n: 244 patients, 308 controls). Both novel samples showed a small (approximately 20 mm³, or 4%), but non-significant, increase in PGV in patients. Combining the two novel samples showed a significant association of age and PGV. Meta-analysis showed a trend towards a larger pituitary gland in patients (effect size: 0.23, CI: -0.14 to 0.59). While results suggest a possible small difference in pituitary gland volume between patients and matched controls, larger mega-analyses with sample sizes greater even than those used in the current meta-analysis are still required. There is a small but potentially functionally significant increase in PGV in patients with bipolar disorder compared to controls. Results demonstrate the difficulty of finding potentially important but small effects in functional brain disorders. Copyright © 2014 Elsevier B.V. All rights reserved.
Influences of misprediction costs on solar flare prediction
NASA Astrophysics Data System (ADS)
Huang, Xin; Wang, HuaNing; Dai, XingHua
2012-10-01
The mispredictive costs of flaring and non-flaring samples are different for different applications of solar flare prediction; hence, solar flare prediction is considered a cost-sensitive problem. A cost-sensitive solar flare prediction model is built by modifying the basic decision tree algorithm. The inconsistency rate, with an exhaustive search strategy, is used to determine the optimal combination of magnetic field parameters in an active region. These selected parameters are applied as the inputs of the solar flare prediction model. The performance of the cost-sensitive solar flare prediction model is evaluated for different thresholds of solar flares. We find that as the cost of wrongly predicting flaring samples as non-flaring increases, more flaring samples are correctly predicted and more non-flaring samples are wrongly predicted, and that higher solar flare thresholds require a larger cost for wrongly predicting flaring samples as non-flaring. This can be considered a guideline for choosing an appropriate cost to meet the requirements of different applications.
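The cost-sensitive idea can be illustrated with a generic decision rule (this is not the paper's modified decision-tree algorithm): predict a flare whenever the expected cost of missing one exceeds the expected cost of a false alarm, so raising the false-negative cost lowers the effective probability threshold.

```python
def predict_flare(p_flare, cost_fn=5.0, cost_fp=1.0):
    """Predict 'flare' when the expected cost of a miss exceeds that of a false alarm."""
    return p_flare * cost_fn > (1.0 - p_flare) * cost_fp

for cost_fn in (1.0, 5.0, 20.0):
    threshold = 1.0 / (1.0 + cost_fn)  # probability at which the two expected costs are equal
    print(f"cost_fn = {cost_fn:4.1f} -> decision threshold = {threshold:.3f}, "
          f"predict flare at p=0.15? {predict_flare(0.15, cost_fn)}")
```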
Fazey, Francesca M C; Ryan, Peter G
2016-03-01
Recent estimates suggest that roughly 100 times more plastic litter enters the sea than is found floating at the sea surface, despite the buoyancy and durability of many plastic polymers. Biofouling by marine biota is one possible mechanism responsible for this discrepancy. Microplastics (<5 mm in diameter) are more scarce than larger size classes, which makes sense because fouling is a function of surface area whereas buoyancy is a function of volume; the smaller an object, the greater its relative surface area. We tested whether plastic items with high surface area to volume ratios sank more rapidly by submerging 15 different sizes of polyethylene samples in False Bay, South Africa, for 12 weeks to determine the time required for samples to sink. All samples became sufficiently fouled to sink within the study period, but small samples lost buoyancy much faster than larger ones. There was a direct relationship between sample volume (buoyancy) and the time to attain a 50% probability of sinking, which ranged from 17 to 66 days of exposure. Our results provide the first estimates of the longevity of different sizes of plastic debris at the ocean surface. Further research is required to determine how fouling rates differ on free floating debris in different regions and in different types of marine environments. Such estimates could be used to improve model predictions of the distribution and abundance of floating plastic debris globally. Copyright © 2016 Elsevier Ltd. All rights reserved.
Using sampling theory as the basis for a conceptual data model
Fred C. Martin; Tonya Baggett; Tom Wolfe
2000-01-01
Greater demands on forest resources require that larger amounts of information be readily available to decisionmakers. To provide more information faster, databases must be developed that are more comprehensive and easier to use. Data modeling is a process for building more complete and flexible databases by emphasizing fundamental relationships over existing or...
Computer System Resource Requirements of Novice Programming Students.
ERIC Educational Resources Information Center
Nutt, Gary J.
The characteristics of jobs that constitute the mix for lower division FORTRAN classes in a university were investigated. Samples of these programs were also benchmarked on a larger central site computer and two minicomputer systems. It was concluded that a carefully chosen minicomputer system could offer service at least the equivalent of the…
Mediterranean diet, micronutrients and macronutrients, and MRI measures of cortical thickness.
Staubo, Sara C; Aakre, Jeremiah A; Vemuri, Prashanthi; Syrjanen, Jeremy A; Mielke, Michelle M; Geda, Yonas E; Kremers, Walter K; Machulda, Mary M; Knopman, David S; Petersen, Ronald C; Jack, Clifford R; Roberts, Rosebud O
2017-02-01
The Mediterranean diet (MeDi) is associated with reduced risk of cognitive impairment, but it is unclear whether it is associated with better brain imaging biomarkers. Among 672 cognitively normal participants (mean age, 79.8 years, 52.5% men), we investigated associations of MeDi score and MeDi components with magnetic resonance imaging measures of cortical thickness for the four lobes separately and averaged (average lobar). Higher MeDi score was associated with larger frontal, parietal, occipital, and average lobar cortical thickness. Higher legume and fish intakes were associated with larger cortical thickness: legumes with larger superior parietal, inferior parietal, precuneus, parietal, occipital, lingual, and fish with larger precuneus, superior parietal, posterior cingulate, parietal, and inferior parietal. Higher carbohydrate and sugar intakes were associated with lower entorhinal cortical thickness. In this sample of elderly persons, higher adherence to MeDi was associated with larger cortical thickness. These cross-sectional findings require validation in prospective studies. Copyright © 2016 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Dodd, Lori E; Korn, Edward L; Freidlin, Boris; Gu, Wenjuan; Abrams, Jeffrey S; Bushnell, William D; Canetta, Renzo; Doroshow, James H; Gray, Robert J; Sridhara, Rajeshwari
2013-10-01
Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded-independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and a HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.
Biostatistics Series Module 5: Determining Sample Size
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggests a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
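One of the standard formulas alluded to above, for comparing two means with a two-sided test, is n per group = 2·σ²·(z₁₋α/₂ + z₁₋β)²/Δ². A short sketch with illustrative numbers only:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided comparison of two means
    (normal-approximation formula)."""
    z = NormalDist().inv_cdf
    return ceil(2 * sigma**2 * (z(1 - alpha / 2) + z(power))**2 / delta**2)

# Halving the smallest difference worth detecting roughly quadruples the sample size.
print(n_per_group(sigma=10, delta=5))    # ~63 per group
print(n_per_group(sigma=10, delta=2.5))  # ~252 per group
```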
Non-linear mixing effects on mass-47 CO2 clumped isotope thermometry: Patterns and implications.
Defliese, William F; Lohmann, Kyger C
2015-05-15
Mass-47 CO2 clumped isotope thermometry requires relatively large (~20 mg) samples of carbonate minerals due to detection limits and shot noise in gas source isotope ratio mass spectrometry (IRMS). However, it is unreasonable to assume that natural geologic materials are homogeneous on the scale required for sampling. We show that sample heterogeneities can cause offsets from equilibrium Δ47 values that are controlled solely by end member mixing and are independent of equilibrium temperatures. A numerical model was built to simulate and quantify the effects of end member mixing on Δ47. The model was run in multiple possible configurations to produce a dataset of mixing effects. We verified that the model accurately simulated real phenomena by comparing two artificial laboratory mixtures measured using IRMS to model output. Mixing effects were found to be dependent on end member isotopic composition in δ13C and δ18O values, and independent of end member Δ47 values. Both positive and negative offsets from equilibrium Δ47 can occur, and the sign is dependent on the interaction between end member isotopic compositions. The overall magnitude of mixing offsets is controlled by the amount of variability within a sample; the larger the disparity between end member compositions, the larger the mixing offset. Samples varying by less than 2‰ in both δ13C and δ18O values have mixing offsets below current IRMS detection limits. We recommend the use of isotopic subsampling for δ13C and δ18O values to determine sample heterogeneity, and to evaluate any potential mixing effects in samples suspected of being heterogeneous. Copyright © 2015 John Wiley & Sons, Ltd.
Technique for Performing Dielectric Property Measurements at Microwave Frequencies
NASA Technical Reports Server (NTRS)
Barmatz, Martin B.; Jackson, Henry W.
2010-01-01
A paper discusses the need to perform accurate dielectric property measurements on larger-sized samples, particularly liquids, at microwave frequencies. These types of measurements cannot be obtained using conventional cavity perturbation methods, particularly for liquids or powdered or granulated solids that require a surrounding container. To solve this problem, a model has been developed for the resonant frequency and quality factor of a cylindrical microwave cavity containing concentric cylindrical samples. This model can then be inverted to obtain the real and imaginary dielectric constants of the material of interest. This approach is based on using exact solutions to Maxwell's equations for the resonant properties of a cylindrical microwave cavity and also using the effective electrical conductivity of the cavity walls that is estimated from the measured empty cavity quality factor. This new approach calculates the complex resonant frequency and associated electromagnetic fields for a cylindrical microwave cavity with lossy walls that is loaded with concentric, axially aligned, lossy dielectric cylindrical samples. In this approach, the calculated complex resonant frequency, consisting of real and imaginary parts, is related to the experimentally measured quantities. Because this approach uses Maxwell's equations to determine the perturbed electromagnetic fields in the cavity with the material(s) inserted, one can calculate the expected wall losses using the fields for the loaded cavity rather than just depending on the value of the fields obtained from the empty cavity quality factor. These additional calculations provide a more accurate determination of the complex dielectric constant of the material being studied. The improved approach will be particularly important when working with larger samples or samples with larger dielectric constants that will further perturb the cavity electromagnetic fields. This approach also makes it possible to measure a larger sample of interest, such as a liquid or a powdered or granulated solid, inside a cylindrical container.
Breaking Free of Sample Size Dogma to Perform Innovative Translational Research
Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.
2011-01-01
Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klarer, Paul R.; Binder, Alan B.; Lenard, Roger X.
A preliminary set of requirements for a robotic rover mission to the lunar polar region are described and assessed. Tasks to be performed by the rover include core drill sample acquisition, mineral and volatile soil content assay, and significant wide area traversals. Assessment of the postulated requirements is performed using first order estimates of energy, power, and communications throughput issues. Two potential rover system configurations are considered, a smaller rover envisioned as part of a group of multiple rovers, and a larger single rover envisioned along more traditional planetary surface rover concept lines.
Design of Phase II Non-inferiority Trials.
Jung, Sin-Ho
2017-09-01
With the development of inexpensive treatment regimens and less invasive surgical procedures, we are confronted with non-inferiority study objectives. A non-inferiority phase III trial requires a roughly four times larger sample size than that of a similar standard superiority trial. Because of the large required sample size, we often face feasibility issues in opening a non-inferiority trial. Furthermore, due to the lack of phase II non-inferiority trial design methods, we do not have an opportunity to investigate the efficacy of the experimental therapy through a phase II trial. As a result, we often fail to open a non-inferiority phase III trial, and a large number of non-inferiority clinical questions still remain unanswered. In this paper, we develop designs for non-inferiority randomized phase II trials with feasible sample sizes. We first review a design method for non-inferiority phase III trials. Subsequently, we propose three different designs for non-inferiority phase II trials that can be used under different settings. Each method is demonstrated with examples. Each of the proposed design methods is shown to require a reasonable sample size for non-inferiority phase II trials. The three different non-inferiority phase II trial designs are used under different settings, but require similar sample sizes that are typical for phase II trials.
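The roughly four-fold penalty mentioned above can be seen with the usual normal-approximation formulas for two proportions; the response rates and non-inferiority margin below are illustrative assumptions, not values from the paper.

```python
from math import ceil
from statistics import NormalDist

z = NormalDist().inv_cdf

def n_superiority(p_ctrl, p_exp, alpha=0.05, power=0.80):
    """Per-arm sample size to detect the difference p_exp - p_ctrl (two-sided alpha)."""
    var = p_ctrl * (1 - p_ctrl) + p_exp * (1 - p_exp)
    return ceil((z(1 - alpha / 2) + z(power))**2 * var / (p_exp - p_ctrl)**2)

def n_noninferiority(p, margin, alpha=0.05, power=0.80):
    """Per-arm sample size to rule out being worse than control by more than
    `margin`, assuming equal true rates p (one-sided alpha)."""
    return ceil((z(1 - alpha) + z(power))**2 * 2 * p * (1 - p) / margin**2)

# A 15% absolute improvement for superiority vs. a 7.5% non-inferiority margin:
print(n_superiority(0.50, 0.65))      # ~167 per arm
print(n_noninferiority(0.65, 0.075))  # ~501 per arm, several-fold larger
```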
Maharjan, B; Kelly-Cirino, C D; Weirich, A; Curry, P S; Hoffman, H; Avsar, K; Shrestha, B
2016-12-01
German Nepal TB Project, National Tuberculosis Reference Laboratory, Kathmandu, Nepal. To evaluate whether transporting samples in OMNIgene®•SPUTUM (OM-S) reagent from a peripheral collection site to a central laboratory in Nepal can improve tuberculosis (TB) detection and increase the sensitivity of Xpert® MTB/RIF testing. One hundred sputum samples were split manually. Each portion was assigned to the OM-S group (OM-S added at collection, airline-couriered without cold chain, no other processing required) or the standard-of-care (SOC) group (samples airline-couriered on ice, sodium hydroxide + N-acetyl-L-cysteine processing required at the laboratory). Smear microscopy and Xpert testing were performed. Transport time was 2-13 days. Overall smear results were comparable (respectively 58% and 56% smear-negative results in the OM-S and SOC groups). The rate of smear-positive, Mycobacterium tuberculosis-positive (MTB+) sample detection was identical for both treatment groups, at 95%. More smear-negative MTB+ samples were detected in the OM-S group (17% vs. 13%, P = 0.0655). Sputum samples treated with OM-S can undergo multiday ambient-temperature transport and yield comparable smear and Xpert results to those of SOC samples. Further investigation with larger sample sizes is required to assess whether treating sputum samples with OM-S could increase the sensitivity of Xpert testing in smear-negative samples.
[Theory, method and application of method R on estimation of (co)variance components].
Liu, Wen-Zhong
2004-07-01
Theory, method and application of Method R on estimation of (co)variance components were reviewed so that the method can be used appropriately. Estimation requires R values, which are regressions of predicted random effects calculated from the complete dataset on predicted random effects calculated from random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R can be used in larger datasets. It is necessary to study its theoretical properties and broaden its application range further.
40 CFR 761.353 - Second level of sample selection.
Code of Federal Regulations, 2012 CFR
2012-07-01
... reduction is to limit the amount of time required to manually cut up larger particles of the waste to pass through a 9.5 millimeter (mm) screen. (a) Selecting a portion of the subsample for particle size reduction... table to select one of these quarters. (b) Reduction of the particle size by the use of a 9.5 mm screen...
40 CFR 761.353 - Second level of sample selection.
Code of Federal Regulations, 2014 CFR
2014-07-01
... reduction is to limit the amount of time required to manually cut up larger particles of the waste to pass through a 9.5 millimeter (mm) screen. (a) Selecting a portion of the subsample for particle size reduction... table to select one of these quarters. (b) Reduction of the particle size by the use of a 9.5 mm screen...
40 CFR 761.353 - Second level of sample selection.
Code of Federal Regulations, 2013 CFR
2013-07-01
... reduction is to limit the amount of time required to manually cut up larger particles of the waste to pass through a 9.5 millimeter (mm) screen. (a) Selecting a portion of the subsample for particle size reduction... table to select one of these quarters. (b) Reduction of the particle size by the use of a 9.5 mm screen...
A New Electromagnetic Instrument for Thickness Gauging of Conductive Materials
NASA Technical Reports Server (NTRS)
Fulton, J. P.; Wincheski, B.; Nath, S.; Reilly, J.; Namkung, M.
1994-01-01
Eddy current techniques are widely used to measure the thickness of electrically conducting materials. The approach, however, requires an extensive set of calibration standards and can be quite time consuming to set up and perform. Recently, an electromagnetic sensor was developed which eliminates the need for impedance measurements. The ability to monitor the magnitude of a voltage output independent of the phase enables the use of extremely simple instrumentation. Using this new sensor a portable hand-held instrument was developed. The device makes single point measurements of the thickness of nonferromagnetic conductive materials. The technique utilized by this instrument requires calibration with two samples of known thicknesses that are representative of the upper and lower thickness values to be measured. The accuracy of the instrument depends upon the calibration range, with a larger range giving a larger error. The measured thicknesses are typically within 2-3% of the calibration range (the difference between the thin and thick sample) of their actual values. In this paper the design, operational and performance characteristics of the instrument along with a detailed description of the thickness gauging algorithm used in the device are presented.
Kolmogorov-Smirnov test for spatially correlated data
Olea, R.A.; Pawlowsky-Glahn, V.
2009-01-01
The Kolmogorov-Smirnov test is a convenient method for investigating whether two underlying univariate probability distributions can be regarded as indistinguishable from each other or whether an underlying probability distribution differs from a hypothesized distribution. Application of the test requires that the sample be unbiased and the outcomes be independent and identically distributed, conditions that are violated to varying degrees by spatially continuous attributes, such as topographical elevation. A generalized form of the bootstrap method is used here for the purpose of modeling the distribution of the statistic D of the Kolmogorov-Smirnov test. The innovation is in the resampling, which in the traditional formulation of bootstrap is done by drawing from the empirical sample with replacement presuming independence. The generalization consists of preparing resamplings with the same spatial correlation as the empirical sample. This is accomplished by reading the value of unconditional stochastic realizations at the sampling locations, realizations that are generated by simulated annealing. The new approach was tested by two empirical samples taken from an exhaustive sample closely following a lognormal distribution. One sample was a regular, unbiased sample while the other one was a clustered, preferential sample that had to be preprocessed. Our results show that the p-value for the spatially correlated case is always larger than the p-value of the statistic in the absence of spatial correlation, which is in agreement with the fact that the information content of an uncorrelated sample is larger than the one for a spatially correlated sample of the same size. © Springer-Verlag 2008.
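For orientation, the ordinary ingredients of the test are sketched below: the two-sample D statistic and an i.i.d. bootstrap null. The paper's actual contribution, resampling with the spatial correlation of the data via simulated annealing, is not reproduced here, and the lognormal data are made up.

```python
import numpy as np

rng = np.random.default_rng(3)

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic D: the maximum distance between ECDFs."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

def bootstrap_pvalue(a, b, n_boot=5000):
    """p-value for D under ordinary i.i.d. resampling from the pooled sample
    (ignoring spatial correlation, unlike the generalized bootstrap of the paper)."""
    d_obs = ks_statistic(a, b)
    pooled = np.concatenate([a, b])
    count = sum(ks_statistic(rng.choice(pooled, len(a)), rng.choice(pooled, len(b))) >= d_obs
                for _ in range(n_boot))
    return (count + 1) / (n_boot + 1)

a = rng.lognormal(mean=0.0, sigma=0.5, size=60)
b = rng.lognormal(mean=0.2, sigma=0.5, size=60)
print(bootstrap_pvalue(a, b))
```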
An opportunity cost approach to sample size calculation in cost-effectiveness analysis.
Gafni, A; Walter, S D; Birch, S; Sendi, P
2008-01-01
The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analyses. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (λ) to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of the ICER-based approaches to sample size calculations can be challenged. Alternative methods for determining improvements in efficiency that do not depend upon ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with that of the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements represent a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers. Copyright (c) 2007 John Wiley & Sons, Ltd.
Design and test of porous-tungsten mercury vaporizers
NASA Technical Reports Server (NTRS)
Kerslake, W. R.
1972-01-01
Future use of large size Kaufman thrusters and thruster arrays will impose new design requirements for porous plug type vaporizers. Larger flow rate coupled with smaller pores to prevent liquid intrusion will be desired. The results of testing samples of porous tungsten for flow rate, liquid intrusion pressure level, and mechanical strength are presented. Nitrogen gas was used in addition to mercury flow for approximate calibration. Liquid intrusion pressure levels will require that flight thruster systems with long feed lines have some way (a valve) to restrict dynamic line pressures during launch.
On the influence of crystal size and wavelength on native SAD phasing.
Liebschner, Dorothee; Yamada, Yusuke; Matsugaki, Naohiro; Senda, Miki; Senda, Toshiya
2016-06-01
Native SAD is an emerging phasing technique that uses the anomalous signal of native heavy atoms to obtain crystallographic phases. The method does not require specific sample preparation to add anomalous scatterers, as the light atoms contained in the native sample are used as marker atoms. The most abundant anomalous scatterer used for native SAD, which is present in almost all proteins, is sulfur. However, the absorption edge of sulfur is at low energy (2.472 keV = 5.016 Å), which makes it challenging to carry out native SAD phasing experiments as most synchrotron beamlines are optimized for shorter wavelength ranges where the anomalous signal of sulfur is weak; for longer wavelengths, which produce larger anomalous differences, the absorption of X-rays by the sample, solvent, loop and surrounding medium (e.g. air) increases tremendously. Therefore, a compromise has to be found between measuring strong anomalous signal and minimizing absorption. It was thus hypothesized that shorter wavelengths should be used for large crystals and longer wavelengths for small crystals, but no thorough experimental analyses have been reported to date. To study the influence of crystal size and wavelength, native SAD experiments were carried out at different wavelengths (1.9 and 2.7 Å with a helium cone; 3.0 and 3.3 Å with a helium chamber) using lysozyme and ferredoxin reductase crystals of various sizes. For the tested crystals, the results suggest that larger sample sizes do not have a detrimental effect on native SAD data and that long wavelengths give a clear advantage with small samples compared with short wavelengths. The resolution dependency of substructure determination was analyzed and showed that high-symmetry crystals with small unit cells require higher resolution for the successful placement of heavy atoms.
Mineralogy and petrology of comet 81P/wild 2 nucleus samples
Zolensky, M.E.; Zega, T.J.; Yano, H.; Wirick, S.; Westphal, A.J.; Weisberg, M.K.; Weber, I.; Warren, J.L.; Velbel, M.A.; Tsuchiyama, A.; Tsou, P.; Toppani, A.; Tomioka, N.; Tomeoka, K.; Teslich, N.; Taheri, M.; Susini, J.; Stroud, R.; Stephan, T.; Stadermann, F.J.; Snead, C.J.; Simon, S.B.; Simionovici, A.; See, T.H.; Robert, F.; Rietmeijer, F.J.M.; Rao, W.; Perronnet, M.C.; Papanastassiou, D.A.; Okudaira, K.; Ohsumi, K.; Ohnishi, I.; Nakamura-Messenger, K.; Nakamura, T.; Mostefaoui, S.; Mikouchi, T.; Meibom, A.; Matrajt, G.; Marcus, M.A.; Leroux, H.; Lemelle, L.; Le, L.; Lanzirotti, A.; Langenhorst, F.; Krot, A.N.; Keller, L.P.; Kearsley, A.T.; Joswiak, D.; Jacob, D.; Ishii, H.; Harvey, R.; Hagiya, K.; Grossman, L.; Grossman, J.H.; Graham, G.A.; Gounalle, M.; Gillet, P.; Genge, M.J.; Flynn, G.; Ferroir, T.; Fallon, S.; Ebel, D.S.; Dai, Z.R.; Cordier, P.; Clark, B.; Chi, M.; Butterworth, Anna L.; Brownlee, D.E.; Bridges, J.C.; Brennan, S.; Brearley, A.; Bradley, J.P.; Bleuet, P.; Bland, P.A.; Bastien, R.
2006-01-01
The bulk of the comet 81P/Wild 2 (hereafter Wild 2) samples returned to Earth by the Stardust spacecraft appear to be weakly constructed mixtures of nanometer-scale grains, with occasional much larger (over 1 micrometer) ferromagnesian silicates, Fe-Ni sulfides, Fe-Ni metal, and accessory phases. The very wide range of olivine and low-Ca pyroxene compositions in comet Wild 2 requires a wide range of formation conditions, probably reflecting very different formation locations in the protoplanetary disk. The restricted compositional ranges of Fe-Ni sulfides, the wide range for silicates, and the absence of hydrous phases indicate that comet Wild 2 experienced little or no aqueous alteration. Less abundant Wild 2 materials include a refractory particle, whose presence appears to require radial transport in the early protoplanetary disk.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Raymond H.; Truax, Ryan A.; Lankford, David A.
2016-02-03
Solid-phase iron concentrations and generalized composite surface complexation models were used to evaluate procedures for determining uranium sorption on oxidized aquifer material at a proposed U in situ recovery (ISR) site. At the proposed Dewey Burdock ISR site in South Dakota, USA, oxidized aquifer material occurs downgradient of the U ore zones. Solid-phase Fe concentrations did not explain our batch sorption test results, though total extracted Fe appeared to be positively correlated with overall measured U sorption. Batch sorption test results were used to develop generalized composite surface complexation models that incorporated the full generic sorption potential of each sample, without detailed mineralogic characterization. The resultant models provide U sorption parameters (site densities and equilibrium constants) for reactive transport modeling. The generalized composite surface complexation sorption models were calibrated to batch sorption data from three oxidized core samples using inverse modeling, and gave larger sorption parameters than just U sorption on the measured solid-phase Fe. These larger sorption parameters can significantly influence reactive transport modeling, potentially increasing U attenuation. Because of the limited number of calibration points, inverse modeling required the reduction of estimated parameters by fixing two parameters. The best-fit models used fixed values for equilibrium constants, with the sorption site densities being estimated by the inversion process. While these inverse routines did provide best-fit sorption parameters, local minima and correlated parameters might require further evaluation. Despite our limited number of proxy samples, the procedures presented provide a valuable methodology to consider for sites where metal sorption parameters are required. Furthermore, these sorption parameters can be used in reactive transport modeling to assess downgradient metal attenuation, especially when no other calibration data are available, such as at proposed U ISR sites.
Extreme Quantum Memory Advantage for Rare-Event Sampling
NASA Astrophysics Data System (ADS)
Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.
2018-02-01
We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.
Berger, Cordula; Parson, Walther
2009-06-01
The degradation state of some biological traces recovered from the crime scene requires the amplification of very short fragments to attain a useful mitochondrial (mt)DNA sequence. We have previously introduced two mini-multiplex assays that amplify 10 overlapping control region (CR) fragments in two separate multiplex PCRs, which yielded successful CR consensus sequences even from highly degraded DNA extracts. This procedure requires a total of 20 sequencing reactions per sample, which is laborious and cost intensive. For only moderately degraded samples, which we encounter more frequently with typical mtDNA casework material, we developed two new multiplex assays that use a subset of the mini-amplicon primers but cover larger fragments (midis) and require only 10 sequencing reactions to build a double-stranded CR consensus sequence. We used a preceding mtDNA quantitation step by real-time PCR with two different target fragments (143 and 283 bp) that roughly correspond to the average fragment sizes of the different multiplex approaches to estimate size-dependent mtDNA quantities and to aid the choice of the appropriate PCR multiplexes with respect to quality of the results and required costs.
Optimal tumor sampling for immunostaining of biomarkers in breast carcinoma
2011-01-01
Introduction Biomarkers, such as Estrogen Receptor, are used to determine therapy and prognosis in breast carcinoma. Immunostaining assays of biomarker expression have a high rate of inaccuracy; for example, estimates are as high as 20% for Estrogen Receptor. Biomarkers have been shown to be heterogeneously expressed in breast tumors and this heterogeneity may contribute to the inaccuracy of immunostaining assays. Currently, no evidence-based standards exist for the amount of tumor that must be sampled in order to correct for biomarker heterogeneity. The aim of this study was to determine the optimal number of 20X fields that is necessary to estimate a representative measurement of expression in a whole tissue section for selected biomarkers: ER, HER-2, AKT, ERK, S6K1, GAPDH, Cytokeratin, and MAP-Tau. Methods Two collections of whole tissue sections of breast carcinoma were immunostained for biomarkers. Expression was quantified using the Automated Quantitative Analysis (AQUA) method of quantitative immunofluorescence. Simulated sampling of various numbers of fields (ranging from one to thirty-five) was performed for each marker. The optimal number was selected for each marker via resampling techniques and minimization of prediction error over an independent test set. Results The optimal number of 20X fields varied by biomarker, ranging from three to fourteen fields. More heterogeneous markers, such as MAP-Tau protein, required a larger sample of 20X fields to produce a representative measurement. Conclusions The optimal number of 20X fields that must be sampled to produce a representative measurement of biomarker expression varies by marker, with more heterogeneous markers requiring a larger number. The clinical implication of these findings is that breast biopsies consisting of a small number of fields may be inadequate to represent whole tumor biomarker expression for many markers. Additionally, for biomarkers newly introduced into clinical use, especially if therapeutic response is dictated by level of expression, the optimal size of tissue sample must be determined on a marker-by-marker basis. PMID:21592345
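A minimal sketch of the simulated-sampling step described above: for each candidate number of 20X fields, repeatedly draw that many fields from a whole-section set of per-field scores and track how far the subsample mean strays from the whole-section mean. The per-field scores and error metric here are assumptions for illustration, not the study's AQUA data or its selection rule.

```python
# Minimal sketch of the simulated-sampling idea: for each candidate number of
# 20X fields k, repeatedly draw k fields at random from a whole-section set of
# per-field scores and compute the deviation of the subsample mean from the
# whole-section mean. Field scores below are synthetic, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
field_scores = rng.lognormal(mean=3.0, sigma=0.6, size=60)   # one tumor section
whole_mean = field_scores.mean()

def mean_relative_error(k, n_resamples=2000):
    errs = [abs(rng.choice(field_scores, size=k, replace=False).mean() - whole_mean)
            / whole_mean
            for _ in range(n_resamples)]
    return float(np.mean(errs))

for k in (1, 3, 5, 10, 14, 20):
    print(f"{k:2d} fields: mean relative error {mean_relative_error(k):.3f}")
```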
Triaxial testing of Lopez Fault gouge at 150 MPa mean effective stress
Scott, D.R.; Lockner, D.A.; Byerlee, J.D.; Sammis, C.G.
1994-01-01
Triaxial compression experiments were performed on samples of natural granular fault gouge from the Lopez Fault in Southern California. This material consists primarily of quartz and has a self-similar grain size distribution thought to result from natural cataclasis. The experiments were performed at a constant mean effective stress of 150 MPa, to expose the volumetric strains associated with shear failure. The failure strength is parameterized by the coefficient of internal friction μ, based on the Mohr-Coulomb failure criterion. Samples of remoulded Lopez gouge have internal friction μ = 0.6 ± 0.02. In experiments where the ends of the sample are constrained to remain axially aligned, suppressing strain localisation, the sample compacts before failure and dilates persistently after failure. In experiments where one end of the sample is free to move laterally, the strain localises to a single oblique fault at around the point of failure; some dilation occurs but does not persist. A comparison of these experiments suggests that dilation is confined to the region of shear localisation in a sample. Overconsolidated samples have slightly larger failure strengths than normally consolidated samples, and smaller axial strains are required to cause failure. A large amount of dilation occurs after failure in heavily overconsolidated samples, suggesting that dilation is occurring throughout the sample. Undisturbed samples of Lopez gouge, cored from the outcrop, have internal friction in the range μ = 0.4-0.6; the upper end of this range corresponds to the value established for remoulded Lopez gouge. Some kind of natural heterogeneity within the undisturbed samples is probably responsible for their low, variable strength. In samples of simulated gouge, with a more uniform grain size, active cataclasis during axial loading leads to large amounts of compaction. Larger axial strains are required to cause failure in simulated gouge, but the failure strength is similar to that of natural Lopez gouge. Use of the Mohr-Coulomb failure criterion to interpret the results from this study, and other recent studies on intact rock and granular gouge, leads to values of μ that depend on the loading configuration and the intact or granular state of the sample. Conceptual models are advanced to account for these discrepancies. The consequences for strain-weakening of natural faults are also discussed. © 1994 Birkhäuser Verlag.
Accounting for Incomplete Species Detection in Fish Community Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta
2013-01-01
Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach types. We developed single-species occupancy models to determine covariates influencing patch occupancy and species detection probabilities, whereas community occupancy models estimated species richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed and that the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g., stratifying based on patch size) and determining the effort required (e.g., number of sites versus occasions).
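For readers unfamiliar with the modeling framework, the sketch below fits the core single-season occupancy likelihood with constant occupancy (psi) and detection probability (p), omitting the patch-size, reach, and method covariates the study actually used; the detection histories are simulated.

```python
# Hedged sketch of the basic single-species occupancy likelihood with constant
# occupancy psi and detection probability p (no covariates). Detection
# histories below are simulated, not the study's survey data.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(5)
n_sites, n_occasions, psi_true, p_true = 150, 4, 0.6, 0.3
z = rng.random(n_sites) < psi_true                                # latent occupancy
y = (rng.random((n_sites, n_occasions)) < p_true) & z[:, None]    # detections

def neg_loglike(params):
    psi, p = expit(params)                    # keep both parameters in (0, 1)
    detections = y.sum(axis=1)
    k = n_occasions
    # site likelihood: detected at least once vs. missed-or-truly-absent
    lik = np.where(
        detections > 0,
        psi * p**detections * (1 - p)**(k - detections),
        psi * (1 - p)**k + (1 - psi),
    )
    return -np.sum(np.log(lik))

fit = minimize(neg_loglike, x0=[0.0, 0.0], method="Nelder-Mead")
psi_hat, p_hat = expit(fit.x)
print(f"naive occupancy {np.mean(y.sum(axis=1) > 0):.2f}, "
      f"estimated psi {psi_hat:.2f}, p {p_hat:.2f}")
```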
NASA Technical Reports Server (NTRS)
Noever, David A.
2000-01-01
Resource studies for asteroidal mining evaluation have depended historically on remote sensing analysis for chemical elements. During the November 1998 Leonids meteor shower, a stratospheric balloon and various low-density capture media were used to sample fragments from Comet Tempel-Tuttle debris during a peak Earth crossing. The analysis not only demonstrates how potential sampling strategies may improve the projections for metals or rare elements in astromining, but also benchmarks materials in low-temperature (-60 °F), high-desiccation environments, as seen during atmospheric exposure. The results indicate high aluminum, magnesium, and iron content for various sampled particles recovered, but generalization to the sporadic meteors expected from asteroidal sources will require future improvements in larger sampling volumes before a broad-use strategy for chemical analysis can be described. A repeat of the experimental procedure is planned for the November 1999 Leonids shower, and various improvements for atmospheric sampling will be discussed.
Davies, C
1997-01-01
The study aimed to explore nurses' knowledge of and attitudes towards brain stem death and organ donation. An ex post facto research design was used to determine relationships between variables. A 16-item questionnaire was used to collect data. Statistical analysis revealed one significant result. The limitations of the sample size are acknowledged, and the conclusion suggests that a larger study is required.
Overlay improvement methods with diffraction based overlay and integrated metrology
NASA Astrophysics Data System (ADS)
Nam, Young-Sun; Kim, Sunny; Shin, Ju Hee; Choi, Young Sin; Yun, Sang Ho; Kim, Young Hoon; Shin, Si Woo; Kong, Jeong Heung; Kang, Young Seog; Ha, Hun Hwan
2015-03-01
To meet the new requirement of securing more overlay margin, not only is optical overlay measurement faced with technical limitations in representing cell-pattern behavior, but larger measurement samples are also needed to minimize statistical errors and to better estimate conditions within a lot. For these reasons, diffraction based overlay (DBO) and integrated metrology (IM) are proposed in this paper as new approaches for overlay enhancement.
Nutrition labeling and value size pricing at fast-food restaurants: a consumer perspective.
O'Dougherty, Maureen; Harnack, Lisa J; French, Simone A; Story, Mary; Oakes, J Michael; Jeffery, Robert W
2006-01-01
This pilot study examined nutrition-related attitudes that may affect food choices at fast-food restaurants, including consumer attitudes toward nutrition labeling of fast foods and elimination of value size pricing. A convenience sample of 79 fast-food restaurant patrons aged 16 and above (78.5% white, 55% female, mean age 41.2 [17.1]) selected meals from fast-food restaurant menus that varied as to whether nutrition information was provided and value pricing was included, and then completed a survey and interview on nutrition-related attitudes. Only 57.9% of participants rated nutrition as important when buying fast food. Almost two thirds (62%) supported a law requiring nutrition labeling on restaurant menus. One third (34%) supported a law requiring restaurants to offer lower prices on smaller instead of bigger-sized portions. This convenience sample of fast-food patrons supported nutrition labels on menus. More research is needed with larger samples on whether point-of-purchase nutrition labeling at fast-food restaurants raises the perceived importance of nutrition when eating out.
Page, G P; Amos, C I; Boerwinkle, E
1998-04-01
We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for both linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of all possible pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination distance (θ) between the marker and the trait loci empirically reduced the power for both linkage and exclusion, as a function of approximately (1 - 2θ)^4.
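Two quantities quoted in the abstract can be checked with a few lines of arithmetic: the number of sib pairs contributed by a sibship of size n, n(n-1)/2, and the approximate attenuation of information with recombination distance, proportional to (1 - 2θ)^4.

```python
# Quick arithmetic check of two quantities quoted in the abstract:
# (1) sib pairs available in a sibship of size n, n*(n-1)/2, and
# (2) approximate power attenuation with marker-trait recombination theta,
#     proportional to (1 - 2*theta)**4.
def sib_pairs(n):
    return n * (n - 1) // 2

def attenuation(theta):
    return (1 - 2 * theta) ** 4

print([sib_pairs(n) for n in range(2, 7)])                  # 1, 3, 6, 10, 15 pairs
print([round(attenuation(t), 3) for t in (0.0, 0.05, 0.1, 0.2)])
```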
A Naturalistic Study of Driving Behavior in Older Adults and Preclinical Alzheimer Disease.
Babulal, Ganesh M; Stout, Sarah H; Benzinger, Tammie L S; Ott, Brian R; Carr, David B; Webb, Mollie; Traub, Cindy M; Addison, Aaron; Morris, John C; Warren, David K; Roe, Catherine M
2017-01-01
A clinical consequence of symptomatic Alzheimer's disease (AD) is impaired driving performance. However, decline in driving performance may begin in the preclinical stage of AD. We used a naturalistic driving methodology to examine differences in driving behavior over one year in a small sample of cognitively normal older adults with (n = 10) and without (n = 10) preclinical AD. As expected with a small sample size, there were no statistically significant differences between the two groups, but older adults with preclinical AD drove less often, were less likely to drive at night, and had fewer aggressive behaviors such as hard braking, speeding, and sudden acceleration. The sample size required to power a larger study to determine differences was calculated.
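Since the abstract mentions computing the sample size needed to power a larger study, a standard two-group power calculation is sketched below; the standardized effect sizes are illustrative assumptions, not the study's estimates.

```python
# Hedged sketch of a conventional sample-size calculation for a two-group
# comparison (e.g., a driving-behavior metric in preclinical AD vs. controls).
# The standardized effect sizes are assumptions for illustration only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.3, 0.5, 0.8):                      # small, medium, large effects
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                             alternative="two-sided")
    print(f"effect size {d}: ~{n:.0f} participants per group")
```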
Preliminary Study of the Autism Self-Efficacy Scale for Teachers (ASSET).
Ruble, Lisa A; Toland, Michael D; Birdwhistell, Jessica L; McGrew, John H; Usher, Ellen L
2013-09-01
The purpose of the current study was to evaluate a new measure, the Autism Self-Efficacy Scale for Teachers (ASSET) for its dimensionality, internal consistency, and construct validity derived in a sample of special education teachers ( N = 44) of students with autism. Results indicate that all items reflect one dominant factor, teachers' responses to items were internally consistent within the sample, and compared to a 100-point scale, a 6-point response scale is adequate. ASSET scores were found to be negatively correlated with scores on two subscale measures of teacher stress (i.e., self-doubt/need for support and disruption of the teaching process) but uncorrelated with teacher burnout scores. The ASSET is a promising tool that requires replication with larger samples.
NASA Astrophysics Data System (ADS)
Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert
2015-06-01
Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real-time. However, to use such sensors effectively, vehicles must exit the traffic stream and slow down to match their current capabilities. Hence, agencies need devices with higher vehicle passing speed capabilities to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented. Deployment configurations and settings depend mainly on the experiences of operation engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger data memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition of high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of the vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors.
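The abstract does not state its sampling-design equations, but the scaling of the required sample rate with vehicle speed can be illustrated with a simple pulse-duration argument; the footprint length, sensor width, and samples-per-pulse values below are assumptions, not the study's validated requirements.

```python
# Hedged illustration (not the paper's derivation): if a tire footprint of
# length L_footprint crosses a strip sensor of width L_sensor at speed v, the
# load pulse lasts roughly (L_footprint + L_sensor) / v, and resolving it with
# about k samples implies a minimum sample rate proportional to speed.
def min_sample_rate_hz(speed_m_s, l_footprint_m=0.20, l_sensor_m=0.07,
                       samples_per_pulse=20):
    pulse_duration = (l_footprint_m + l_sensor_m) / speed_m_s
    return samples_per_pulse / pulse_duration

for v_kmh in (30, 60, 100, 130):
    v = v_kmh / 3.6
    print(f"{v_kmh:3d} km/h -> ~{min_sample_rate_hz(v):5.0f} Hz")
```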
NASA Astrophysics Data System (ADS)
Beckmann, Felix
2016-10-01
The Helmholtz-Zentrum Geesthacht, Germany, is operating the user experiments for microtomography at the beamlines P05 and P07 using synchrotron radiation produced in the storage ring PETRA III at DESY, Hamburg, Germany. In recent years, the software pipeline and sample-changing hardware for performing high-throughput experiments were developed. In this talk, the current status of the beamlines will be given. Furthermore, the optimisation and automatisation of scanning techniques will be presented; these are required to scan samples that are larger than the field of view defined by the X-ray beam. The integration into an optimised reconstruction pipeline will be shown.
Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats
Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.
2012-01-01
This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
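A compact way to see the practical consequence of the uncorrelated-flow-field result: if successive transects are effectively independent, the relative random error of the mean discharge falls as one over the square root of the number of transects (equivalently, of the sampling time). The single-transect coefficient of variation below is illustrative.

```python
# Sketch of the uncorrelated-sample variance argument: if successive transect
# measurements of discharge are effectively uncorrelated (sampling interval on
# the order of the sampled field's integral time scale), the variance of the
# mean of N samples is sigma^2 / N, so the relative random error shrinks as
# 1/sqrt(N). The single-transect CV is illustrative, not field data.
import math

def relative_error_of_mean(cv_single_transect, n_transects):
    return cv_single_transect / math.sqrt(n_transects)

for n in (1, 2, 4, 8):
    print(n, "transects ->", round(relative_error_of_mean(0.05, n), 3))
```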
Analysis of Darwin Rainfall Data: Implications on Sampling Strategy
NASA Technical Reports Server (NTRS)
Rafael, Qihang Li; Bras, Rafael L.; Veneziano, Daniele
1996-01-01
Rainfall data collected by radar in the vicinity of Darwin, Australia, have been analyzed in terms of their mean, variance, autocorrelation of area-averaged rain rate, and diurnal variation. It is found that, when compared with the well-studied GATE (Global Atmospheric Research Program Atlantic Tropical Experiment) data, Darwin rainfall has a larger coefficient of variation (CV), a faster reduction of CV with increasing area size, weaker temporal correlation, and a strong diurnal cycle and intermittence. The coefficient of variation for Darwin rainfall has a larger magnitude and exhibits larger spatial variability over the sea portion than over the land portion within the area of radar coverage. Stationary and nonstationary models have been used to study the sampling errors associated with space-based rainfall measurement. The nonstationary model shows that the sampling error is sensitive to the starting sampling time for some sampling frequencies, due to the diurnal cycle of rain, but not for others. Sampling experiments using data also show such sensitivity. When the errors are averaged over starting time, the results of the experiments and the stationary and nonstationary models match each other very closely. In the small areas for which data are available for both Darwin and GATE, the sampling error is expected to be larger for Darwin due to its larger CV.
Auffan, Mélanie; Rose, Jérôme; Bottero, Jean-Yves; Lowry, Gregory V; Jolivet, Jean-Pierre; Wiesner, Mark R
2009-10-01
The regulation of engineered nanoparticles requires a widely agreed definition of such particles. Nanoparticles are routinely defined as particles with sizes between about 1 and 100 nm that show properties that are not found in bulk samples of the same material. Here we argue that evidence for novel size-dependent properties alone, rather than particle size, should be the primary criterion in any definition of nanoparticles when making decisions about their regulation for environmental, health and safety reasons. We review the size-dependent properties of a variety of inorganic nanoparticles and find that particles larger than about 30 nm do not in general show properties that would require regulatory scrutiny beyond that required for their bulk counterparts.
Metals associated with stormwater-relevant brake and tire samples
McKenzie, Erica R.; Money, Jon E.; Green, Peter G.; Young, Thomas M.
2009-01-01
Properly apportioning the loads of metals in highway stormwater runoff to the appropriate sources requires accurate data on source composition, especially regarding constituents that help to distinguish among sources. Representative tire and brake samples were collected from privately owned vehicles and aqueous extracts were analyzed for twenty-eight elements. Correlation principal components analysis (PCA) revealed that tires were most influenced by Zn, Pb, and Cu, while brakes were best characterized by Na and Fe followed by Ba, Cu, Mg, Mn, and K; the latter three may be due to roadside soil contributions. Notably elevated Cd contributions were found in several brake samples. A targeted Cd-plated brake rotor was sampled, producing results consistent with the elevated levels found in the larger sample population. This enriched source of Cd is of particular concern due to high toxicity of Cd in aquatic ecosystems. PMID:19709720
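As a sketch of the correlation-based PCA step, the code below standardizes an element-concentration matrix (so the PCA operates on the correlation matrix) and prints first-component loadings; the concentration matrix is random placeholder data carrying only the element names from the abstract.

```python
# Small sketch of a correlation-based PCA on element concentrations:
# standardize each element so PCA runs on the correlation matrix, then inspect
# the loadings that separate tire-like from brake-like sources. The matrix
# below is random placeholder data, not the measured aqueous extracts.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
elements = ["Zn", "Pb", "Cu", "Na", "Fe", "Ba", "Mg", "Mn", "K", "Cd"]
concentrations = rng.lognormal(0, 1, size=(40, len(elements)))   # 40 samples

pca = PCA(n_components=2).fit(StandardScaler().fit_transform(concentrations))
for name, loading in zip(elements, pca.components_[0]):
    print(f"PC1 loading for {name}: {loading:+.2f}")
```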
Design of clinical trials of antidepressants: should a placebo control arm be included?
Fritze, J; Möller, H J
2001-01-01
There is no doubt that available antidepressants are efficacious and effective. Nevertheless, more effective drugs with improved tolerability are needed. With this need in mind, some protagonists claim that future antidepressants should be proved superior to, or at least as effective as, established antidepressants, making placebo control methodologically dispensable in clinical trials. Moreover, the use of placebo control is criticised as unethical because it might result in effective treatment being withheld. There are, however, a number of methodological reasons why placebo control is indispensable for the proof of efficacy of antidepressants. Comparing investigational antidepressants only with standard antidepressants and not placebo yields ambiguous results that are difficult to interpret, be it in superiority or equivalence testing, and this method of assessment requires larger sample sizes than those required with the use of placebo control. Experimental methodology not adhering to the optimal study design is ethically questionable. Restricting the testing of investigational antidepressants only to superiority over standard antidepressants is an obstacle to therapeutic progress in terms of tolerability and the detection of innovative mechanisms of action from which certain subgroups of future patients might benefit. The use of a methodology that requires larger samples for testing of superiority or equivalence is also ethically questionable. In view of the high placebo response rates in trials of antidepressants, placebo treatment does not mean withholding effective treatment. Accepting the necessity of the clinical evaluation of new, potentially ineffective antidepressants implicitly means accepting placebo control as ethically justified. Three- or multi-arm comparisons including placebo and an active reference represent the optimal study design.
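The sample-size argument can be made concrete with the usual normal-approximation formula for two proportions: because the drug-placebo difference is typically larger than any plausible drug-drug margin, a placebo-controlled superiority trial needs far fewer patients per arm than a non-inferiority comparison against an active reference. Response rates and the non-inferiority margin below are illustrative assumptions.

```python
# Hedged sketch of the sample-size argument: with a typical placebo response,
# the drug-placebo difference is larger than the drug-drug margin used against
# an active comparator, so the per-arm sample size (two-sample proportions,
# normal approximation, one-sided test) is much smaller for a placebo-
# controlled trial. Response rates are illustrative, not trial data.
from math import ceil
from scipy.stats import norm

def n_per_arm(p1, p2, delta, alpha=0.05, power=0.8):
    """Per-arm n to show (p1 - p2) > delta one-sidedly with the given power."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(var * ((z_a + z_b) / (p1 - p2 - delta)) ** 2)

print("vs placebo, superiority:         ", n_per_arm(0.55, 0.35, 0.0))
print("vs active, non-inferiority (10%):", n_per_arm(0.55, 0.55, -0.10))
```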
Measurement of in-plane thermal conductivity in polymer films
NASA Astrophysics Data System (ADS)
Wei, Qingshuo; Uehara, Chinatsu; Mukaida, Masakazu; Kirihara, Kazuhiro; Ishida, Takao
2016-04-01
Measuring the in-plane thermal conductivity of organic thermoelectric materials is challenging but critically important. Here, a method to study the in-plane thermal conductivity of free-standing films (via the use of commercial equipment) based on temperature wave analysis is explored in depth. The method requires a free-standing thin film with a thickness larger than 10 μm and an area larger than 1 cm², which are not difficult to obtain for most solution-processable organic thermoelectric materials. We evaluated thermal conductivities and anisotropic ratios for various types of samples including insulating polymers, undoped semiconducting polymers, doped conducting polymers, and one-dimensional carbon fiber bulky papers. This approach facilitated a rapid screening of in-plane thermal conductivities for various organic thermoelectric materials.
Ascending Aortic Dimensions in Former National Football League Athletes.
Gentry, James L; Carruthers, David; Joshi, Parag H; Maroules, Christopher D; Ayers, Colby R; de Lemos, James A; Aagaard, Philip; Hachamovitch, Rory; Desai, Milind Y; Roselli, Eric E; Dunn, Reginald E; Alexander, Kezia; Lincoln, Andrew E; Tucker, Andrew M; Phelan, Dermot M
2017-11-01
Ascending aortic dimensions are slightly larger in young competitive athletes compared with sedentary controls, but rarely >40 mm. Whether this finding translates to aortic enlargement in older, former athletes is unknown. This cross-sectional study involved a sample of 206 former National Football League (NFL) athletes compared with 759 male subjects from the DHS-2 (Dallas Heart Study-2; mean age of 57.1 and 53.6 years, respectively, P <0.0001; body surface area of 2.4 and 2.1 m², respectively, P <0.0001). Midascending aortic dimensions were obtained from computed tomographic scans performed as part of a NFL screening protocol or as part of the DHS. Compared with a population-based control group, former NFL athletes had significantly larger ascending aortic diameters (38±5 versus 34±4 mm; P <0.0001). A significantly higher proportion of former NFL athletes had an aorta of >40 mm (29.6% versus 8.6%; P <0.0001). After adjusting for age, race, body surface area, systolic blood pressure, history of hypertension, current smoking, diabetes mellitus, and lipid profile, the former NFL athletes still had significantly larger ascending aortas (P <0.0001). Former NFL athletes were twice as likely to have an aorta >40 mm after adjusting for the same parameters. Ascending aortic dimensions were significantly larger in a sample of former NFL athletes after adjusting for their size, age, race, and cardiac risk factors. Whether this translates to an increased risk is unknown and requires further evaluation. © 2017 American Heart Association, Inc.
A new, simple electrostatic-acoustic hybrid levitator
NASA Technical Reports Server (NTRS)
Lierke, E. G.; Loeb, H.; Gross, D.
1990-01-01
Battelle has developed a hybrid levitator by combining the known single-axis acoustic standing wave levitator with a coaxial DC electric field. The resulting Coulomb forces on the charged liquid or solid sample support its weight and, together with the acoustic force, center the sample. Liquid samples with volumes of less than approximately 100 microliters are deployed from a syringe reservoir into the acoustic pressure node. The sample is charged using a miniature high voltage power supply (less than approximately 20 kV) connected to the syringe needle. As the electric field, generated by a second miniature power supply, is increased, the acoustic intensity is reduced. The combination of both fields allows stable levitation of samples larger than either single technique could position on the ground. Decreasing the acoustic intensity reduces acoustic convection and sample deformation. Neither the electrostatic nor the acoustic field requires sample position sensing or active control. The levitator, now used for static and dynamic fluid physics investigations on the ground, can be easily modified for space operations.
Sampling procedures for throughfall monitoring: A simulation study
NASA Astrophysics Data System (ADS)
Zimmermann, Beate; Zimmermann, Alexander; Lark, Richard Murray; Elsenbeer, Helmut
2010-01-01
What is the most appropriate sampling scheme to estimate event-based average throughfall? A satisfactory answer to this seemingly simple question has yet to be found, a failure which we attribute to previous efforts' dependence on empirical studies. Here we try to answer this question by simulating stochastic throughfall fields based on parameters for statistical models of large monitoring data sets. We subsequently sampled these fields with different sampling designs and variable sample supports. We evaluated the performance of a particular sampling scheme with respect to the uncertainty of possible estimated means of throughfall volumes. Even for a relative error limit of 20%, an impractically large number of small, funnel-type collectors would be required to estimate mean throughfall, particularly for small events. While stratification of the target area is not superior to simple random sampling, cluster random sampling involves the risk of being less efficient. A larger sample support, e.g., the use of trough-type collectors, considerably reduces the necessary sample sizes and eliminates the sensitivity of the mean to outliers. Since the gain in time associated with the manual handling of troughs versus funnels depends on the local precipitation regime, the employment of automatically recording clusters of long troughs emerges as the most promising sampling scheme. Even so, a relative error of less than 5% appears out of reach for throughfall under heterogeneous canopies. We therefore suspect a considerable uncertainty of input parameters for interception models derived from measured throughfall, in particular, for those requiring data of small throughfall events.
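A stripped-down version of the simulation logic: generate a synthetic throughfall field, sample it with n randomly placed funnel-type collectors, and record the relative error of the estimated event mean. The field below is spatially unstructured and its parameters are invented, so it illustrates the workflow rather than reproducing the study's fitted geostatistical models.

```python
# Minimal sketch of the simulation idea: generate a synthetic throughfall
# field, sample it with n funnel-type collectors placed at random, and record
# the relative error of the estimated event mean. Field parameters are
# invented for illustration and do not reproduce the study's fitted models.
import numpy as np

rng = np.random.default_rng(2)

def simulate_field(n_cells=10_000, mean_mm=5.0, cv=0.5):
    sigma = np.sqrt(np.log(1 + cv**2))
    mu = np.log(mean_mm) - sigma**2 / 2
    return rng.lognormal(mu, sigma, n_cells)        # spatially unstructured toy field

def rel_error(field, n_collectors, n_events=2000):
    errs = []
    for _ in range(n_events):
        sample = rng.choice(field, size=n_collectors, replace=False)
        errs.append(abs(sample.mean() - field.mean()) / field.mean())
    return float(np.quantile(errs, 0.95))           # 95th-percentile relative error

field = simulate_field()
for n in (10, 25, 50, 100, 200):
    print(f"{n:3d} collectors -> 95% rel. error ~{rel_error(field, n):.2%}")
```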
Rodriguez, Estrella Sanz; Poynter, Sam; Curran, Mark; Haddad, Paul R; Shellie, Robert A; Nesterenko, Pavel N; Paull, Brett
2015-08-28
Preservation of ionic species within Antarctic ice yields a unique proxy record of the Earth's climate history. Studies have until now focused on two proxies: the ionic components of sea salt aerosol and methanesulfonic acid. Measurement of all of the major ionic species in ice core samples is typically carried out by ion chromatography. Former methods, whilst providing suitable detection limits, have been based upon off-column preconcentration techniques, requiring larger sample volumes, with potential for sample contamination and/or carryover. Here, a new capillary ion chromatography based analytical method has been developed for quantitative analysis of limited-volume Antarctic ice core samples. The developed analytical protocol applies capillary ion chromatography (with suppressed conductivity detection) and direct on-column sample injection and focusing, thus eliminating the requirement for off-column sample preconcentration. This limits the total sample volume needed to 300 μL per analysis, allowing for triplicate sample analysis with <1 mL of sample. This new approach provides a reliable and robust analytical method for the simultaneous determination of organic and inorganic anions, including fluoride, methanesulfonate, chloride, sulfate and nitrate anions. Application to composite ice-core samples is demonstrated, with coupling of the capillary ion chromatograph to high resolution mass spectrometry used to confirm the presence and purity of the observed methanesulfonate peak. Copyright © 2015 Elsevier B.V. All rights reserved.
Evaluation of Aspergillus PCR protocols for testing serum specimens.
White, P Lewis; Mengoli, Carlo; Bretagne, Stéphane; Cuenca-Estrella, Manuel; Finnstrom, Niklas; Klingspor, Lena; Melchers, Willem J G; McCulloch, Elaine; Barnes, Rosemary A; Donnelly, J Peter; Loeffler, Juergen
2011-11-01
A panel of human serum samples spiked with various amounts of Aspergillus fumigatus genomic DNA was distributed to 23 centers within the European Aspergillus PCR Initiative to determine analytical performance of PCR. Information regarding specific methodological components and PCR performance was requested. The information provided was made anonymous, and meta-regression analysis was performed to determine any procedural factors that significantly altered PCR performance. Ninety-seven percent of protocols were able to detect a threshold of 10 genomes/ml on at least one occasion, with 83% of protocols reproducibly detecting this concentration. Sensitivity and specificity were 86.1% and 93.6%, respectively. Positive associations between sensitivity and the use of larger sample volumes, an internal control PCR, and PCR targeting the internal transcribed spacer (ITS) region were shown. Negative associations between sensitivity and the use of larger elution volumes (≥100 μl) and PCR targeting the mitochondrial genes were demonstrated. Most Aspergillus PCR protocols used to test serum generate satisfactory analytical performance. Testing serum requires less standardization, and the specific recommendations shown in this article will only improve performance.
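A hedged sketch of the meta-regression step: model the probability that a protocol detects the threshold concentration as a function of procedural covariates with a logistic regression. The covariates and data below are synthetic placeholders, not the initiative's anonymized results.

```python
# Hedged sketch of a meta-regression in the spirit described: model the
# probability that a protocol detects the 10 genomes/ml threshold as a
# function of procedural covariates (sample volume, elution volume, target).
# Data below are synthetic placeholders, not the initiative's results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 60
df = pd.DataFrame({
    "sample_ml":  rng.choice([0.5, 1.0, 3.0], n),
    "elution_ul": rng.choice([50, 100, 200], n),
    "its_target": rng.integers(0, 2, n),
})
logit_p = -1.0 + 0.8 * df["sample_ml"] - 0.01 * df["elution_ul"] + 1.0 * df["its_target"]
df["detected"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("detected ~ sample_ml + elution_ul + its_target", data=df).fit(disp=0)
print(fit.summary())
```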
Ground-water quality of the southern High Plains aquifer, Texas and New Mexico, 2001
Fahlquist, Lynne
2003-01-01
In 2001, the U.S. Geological Survey National Water-Quality Assessment Program collected water samples from 48 wells in the southern High Plains as part of a larger scientific effort to broadly characterize and understand factors affecting water quality of the High Plains aquifer across the entire High Plains. Water samples were collected primarily from domestic wells in Texas and eastern New Mexico. Depths of wells sampled ranged from 100 to 500 feet, with a median depth of 201 feet. Depths to water ranged from 34 to 445 feet below land surface, with a median depth of 134 feet. Of 240 properties or constituents measured or analyzed, 10 exceeded U.S. Environmental Protection Agency public drinking-water standards or guidelines in one or more samples - arsenic, boron, chloride, dissolved solids, fluoride, manganese, nitrate, radon, strontium, and sulfate. Measured dissolved solids concentrations in 29 samples were larger than the public drinking-water guideline of 500 milligrams per liter. Fluoride concentrations in 16 samples, mostly in the southern part of the study area, were larger than the public drinking-water standard of 4 milligrams per liter. Nitrate was detected in all samples, and concentrations in six samples were larger than the public drinking-water standard of 10 milligrams per liter. Arsenic concentrations in 14 samples in the southern part of the study area were larger than the new (2002) public drinking-water standard of 10 micrograms per liter. Radon concentrations in 36 samples were larger than a proposed public drinking-water standard of 300 picocuries per liter. Pesticides were detected at very small concentrations, less than 1 microgram per liter, in less than 20 percent of the samples. The most frequently detected compounds were atrazine and breakdown products of atrazine, a finding similar to those of National Water-Quality Assessment aquifer studies across the Nation. Four volatile organic compounds were detected at small concentrations in six water samples. About 70 percent of the 48 primarily domestic wells sampled contained some fraction of recently (less than about 50 years ago) recharged ground water, as indicated by the presence of one or more pesticides, or tritium or nitrate concentrations greater than threshold levels.
PRECISE TULLY-FISHER RELATIONS WITHOUT GALAXY INCLINATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Obreschkow, D.; Meyer, M.
2013-11-10
Power-law relations between tracers of baryonic mass and rotational velocities of disk galaxies, so-called Tully-Fisher relations (TFRs), offer a wealth of applications in galaxy evolution and cosmology. However, measurements of rotational velocities require galaxy inclinations, which are difficult to measure, thus limiting the range of TFR studies. This work introduces a maximum likelihood estimation (MLE) method for recovering the TFR in galaxy samples with limited or no information on inclinations. The robustness and accuracy of this method is demonstrated using virtual and real galaxy samples. Intriguingly, the MLE reliably recovers the TFR of all test samples, even without using any inclination measurements, that is, assuming a random sin i-distribution for galaxy inclinations. Explicitly, this 'inclination-free MLE' recovers the three TFR parameters (zero-point, slope, scatter) with statistical errors only about 1.5 times larger than the best estimates based on perfectly known galaxy inclinations with zero uncertainty. Thus, given realistic uncertainties, the inclination-free MLE is highly competitive. If inclination measurements have mean errors larger than 10°, it is better not to use any inclinations than to consider the inclination measurements to be exact. The inclination-free MLE opens interesting perspectives for future H I surveys by the Square Kilometer Array and its pathfinders.
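To illustrate the idea of fitting a TFR while marginalizing unknown inclinations over the random-orientation prior p(i) = sin i, the sketch below maximizes such a likelihood on mock data. The parameterization and the mock sample are assumptions; this is a toy reconstruction of the approach, not the authors' estimator.

```python
# Hedged sketch of an "inclination-free" maximum-likelihood Tully-Fisher fit:
# observed velocity widths are modelled as V*sin(i) with the inclination i
# unknown and marginalized over a random-orientation prior p(i) = sin(i).
# Toy reconstruction of the idea; the TFR parameterization and mock data are
# assumptions, not the authors' estimator or samples.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# mock sample: true TFR log10 V = a + b*(log10 M - 10) with Gaussian scatter s
a_true, b_true, s_true, n_gal = 2.2, 0.3, 0.05, 300
log_m = rng.uniform(9.0, 11.0, n_gal)
log_v = a_true + b_true * (log_m - 10.0) + rng.normal(0, s_true, n_gal)
cos_i = rng.uniform(0, 1, n_gal)                    # random orientations
log_w = log_v + np.log10(np.sqrt(1 - cos_i**2))     # observed: log10(V sin i)

# grid for marginalizing the unknown inclination, p(i) di = sin(i) di
i_grid = np.linspace(1e-3, np.pi / 2, 200)
log_sin = np.log10(np.sin(i_grid))
weights = np.sin(i_grid)
weights /= np.trapz(weights, i_grid)

def neg_loglike(params):
    a, b, log_s = params
    s = np.exp(log_s)
    mu = a + b * (log_m[:, None] - 10.0) + log_sin[None, :]
    dens = np.exp(-0.5 * ((log_w[:, None] - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    like = np.trapz(dens * weights[None, :], i_grid, axis=1)
    return -np.sum(np.log(like + 1e-300))

fit = minimize(neg_loglike, x0=[2.0, 0.25, np.log(0.1)], method="Nelder-Mead")
a_hat, b_hat, s_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(f"zero-point {a_hat:.3f}, slope {b_hat:.3f}, scatter {s_hat:.3f}")
```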
NASA Astrophysics Data System (ADS)
Dang, Nhan C.; Ciezak-Jenkins, Jennifer A.
2018-04-01
In this work, the dependence of the morphology and stability of the extended solid of carbon monoxide (CO) is correlated with the rate of transformation from molecular CO to the extended solid of CO using optical imaging, photoluminescence, Raman spectroscopy, and X-ray diffraction. The analyses show the rate and pressure of the transformation to be strongly controlled by catalytic effects, both chemical and optical. In a larger volume per reaction area, the transformation was found to require either a longer time at an elevated pressure or a higher pressure compared with a sample synthesized in a smaller volume per reaction area, leading to the conclusion that the transformation rate is slower for a sample in a larger volume per reaction area. A faster rate of transformation was also noted when the reaction area of a CO sample was catalyzed with H2SO4. Through variation of the volume per reaction area, pressure, or the addition of catalysts, it was possible to control the rate of the phase transition and therefore the morphology. In general, the extended solid of CO synthesized at a faster rate showed a more ordered structure and increased metastability relative to the material formed with a slower compression rate.
NASA Astrophysics Data System (ADS)
Bitner, Rex M.; Koller, Susan C.
2004-06-01
Three different methods of automated high-throughput purification of genomic DNA from plant materials processed in 96-well plates are described. One method uses MagneSil paramagnetic particles to purify DNA present in single leaf punch samples or small seed samples, using 320 μl capacity 96-well plates, which minimizes reagent and plate costs. A second method uses 2.2 ml and 1.2 ml capacity plates and allows the purification of larger amounts of DNA from 5-6 punches of material or larger amounts of seeds. The third method uses the MagneSil ONE purification system to purify a fixed amount of DNA, thus simplifying the processing of downstream applications by normalizing the amounts of DNA so they do not require quantitation. Protocols for the purification of a fixed yield of DNA, e.g., 1 μg, from plant leaf or seed samples using MagneSil paramagnetic particles and a Beckman-Coulter BioMek FX robot are described. DNA from all three methods is suitable for applications such as PCR, RAPD, STR, READIT SNP analysis, and multiplexed PCR systems. The MagneSil ONE system is also suitable for use with SNP detection systems such as Third Wave Technology's Invader methods.
Gas-driven pump for ground-water samples
Signor, Donald C.
1978-01-01
Observation wells installed for artificial-recharge research and other wells used in different ground-water programs are frequently cased with small-diameter steel pipe. To obtain samples from these small-diameter wells in order to monitor water quality, and to calibrate solute-transport models, a small-diameter pump with unique operating characteristics is required that causes a minimum alteration of samples during field sampling. A small-diameter gas-driven pump was designed and built to obtain water samples from wells of two-inch diameter or larger. The pump is a double-piston type with the following characteristics: (1) the water sample is isolated from the operating gas, (2) no source of electricity is necessary, (3) operation is continuous, (4) use of compressed gas is efficient, and (5) operation is reliable over extended periods of time. Principles of operation, actual operation techniques, gas-use analyses, and operating experience are described. Complete working drawings and a component list are included. Recent modifications and pump construction for high-pressure applications also are described. (Woodard-USGS)
NASA Astrophysics Data System (ADS)
Ryu, Inkeon; Kim, Daekeun
2018-04-01
A typical selective plane illumination microscopy (SPIM) image size is basically limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount where uncertainties in the translational and rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, with a view to quantifying the constellations of, and measuring the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.
Wider-Opening Dewar Flasks for Cryogenic Storage
NASA Technical Reports Server (NTRS)
Ruemmele, Warren P.; Manry, John; Stafford, Kristin; Bue, Grant; Krejci, John; Evernden, Bent
2010-01-01
Dewar flasks have been proposed as containers for relatively long-term (25 days) storage of perishable scientific samples or other perishable objects at a temperature of -175 °C. The refrigeration would be maintained through slow boiling of liquid nitrogen (LN2). For the purposes of the application for which these containers were proposed, (1) the neck openings of commercial off-the-shelf (COTS) Dewar flasks are too small for most NASA samples; (2) the round shapes of the COTS containers give rise to unacceptably low efficiency of packing in rectangular cargo compartments; and (3) the COTS containers include metal structures that are too thermally conductive, such that they cannot, without exceeding size and weight limits, hold enough LN2 for the required long-term storage. In comparison with COTS Dewar flasks, the proposed containers would be rectangular, yet would satisfy the long-term storage requirement without exceeding size and weight limits; would have larger neck openings; and would have greater sample volumes, leading to a packing efficiency of about double the sample volume as a fraction of total volume. The proposed containers would be made partly of aerospace-type composite materials and would include vacuum walls, multilayer insulation, and aerogel insulation.
A Ground Truthing Method for AVIRIS Overflights Using Canopy Absorption Spectra
NASA Technical Reports Server (NTRS)
Gamon, John A.; Serrano, Lydia; Roberts, Dar A.; Ustin, Susan L.
1996-01-01
Remote sensing for ecological field studies requires ground truthing for accurate interpretation of remote imagery. However, traditional vegetation sampling methods are time consuming and hard to relate to the scale of an AVIRIS scene. The large errors associated with manual field sampling, the contrasting formats of remote and ground data, and problems with coregistration of field sites with AVIRIS pixels can lead to difficulties in interpreting AVIRIS data. As part of a larger study of fire risk in the Santa Monica Mountains of southern California, we explored a ground-based optical method of sampling vegetation using spectrometers mounted both above and below vegetation canopies. The goal was to use optical methods to provide a rapid, consistent, and objective means of "ground truthing" that could be related both to AVIRIS imagery and to conventional ground sampling (e.g., plot harvests and pigment assays).
NASA Astrophysics Data System (ADS)
Briguglio, Antonino; Goeting, Sulia; Kusli, Rosnani; Roslim, Amajida; Polgar, Gianluca; Kocsis, Laszlo
2016-04-01
For this study, 11 samples were collected by scuba diving from 5 to 35 meters water depth offshore Brunei Darussalam. The locations sampled are known as: Pelong Rock (5 samples, shallow reef with soft and stony corals and larger foraminifera, 5 to 8 meters water depth), Abana Rock (1 sample, shallow reef with mainly soft corals and larger foraminifera, 13 to 18 meters water depth), Oil Rig wreck (1 sample, very sandy bottom with larger foraminifera, 18 meters water depth), Dolphin wreck (1 sample, muddy sand with many small rotaliids, 24 meters water depth), US wreck (1 sample, sand with a small clay fraction, 28 meters water depth), Australian wreck (1 sample, mainly medium to coarse sand with larger foraminifera, 34 meters water depth) and Blue water wreck (1 sample, mainly coarse sand, coral rubble and larger foraminifera, 35 meters water depth). The samples closer to the river inputs are normally richer in clay, while the most distant samples are purely sandy. Some additional samples were collected next to reef environments which, even if very shallow, are mainly sandy with almost no clay fraction. The deepest sample, which is 30 km offshore, contains some planktonic foraminifera and shows a wide range of foraminiferal preservation states, testifying to the presence of relict sediments at the sea bottom. The presence of relict sediments had already been pointed out by older oil-related field studies offshore Brunei Darussalam, and it is now possible to draw the depth limit of these deposits. The diversity of the benthic foraminiferal fauna is relatively high, but not as high as in neighboring regions, as some studies have highlighted. More than 50 species were collected and identified: in reef environments the most abundant are Calcarina defrancii, Neorotalia calcar and the Amphisteginidae; deeper, in the muddy sediments, the most abundant is Pararotalia schroeteriana; and in the deepest sandy sample the most abundant is Calcarina hispida, followed by Operculina ammonoides.
Linking models and data on vegetation structure
NASA Astrophysics Data System (ADS)
Hurtt, G. C.; Fisk, J.; Thomas, R. Q.; Dubayah, R.; Moorcroft, P. R.; Shugart, H. H.
2010-06-01
For more than a century, scientists have recognized the importance of vegetation structure in understanding forest dynamics. Now future satellite missions such as Deformation, Ecosystem Structure, and Dynamics of Ice (DESDynI) hold the potential to provide unprecedented global data on vegetation structure needed to reduce uncertainties in terrestrial carbon dynamics. Here, we briefly review the uses of data on vegetation structure in ecosystem models, develop and analyze theoretical models to quantify model-data requirements, and describe recent progress using a mechanistic modeling approach utilizing a formal scaling method and data on vegetation structure to improve model predictions. Generally, both limited sampling and coarse resolution averaging lead to model initialization error, which in turn is propagated in subsequent model prediction uncertainty and error. In cases with representative sampling, sufficient resolution, and linear dynamics, errors in initialization tend to compensate at larger spatial scales. However, with inadequate sampling, overly coarse resolution data or models, and nonlinear dynamics, errors in initialization lead to prediction error. A robust model-data framework will require both models and data on vegetation structure sufficient to resolve important environmental gradients and tree-level heterogeneity in forest structure globally.
A New Model of Size-graded Soil Veneer on the Lunar Surface
NASA Technical Reports Server (NTRS)
Basu, Abhijit; McKay, David S.
2005-01-01
Introduction. We propose a new model of the distribution of submillimeter-sized lunar soil grains on the lunar surface. We propose that in the uppermost millimeter or two of the lunar surface, soil grains are size graded, with the finest nanoscale dust on top and larger micron-scale particles below. This standard state is perturbed by ejecta deposition of larger grains at the lunar surface, which carry a dusty coating but may lack substrates of intermediate sizes. The distributions of solar wind elements (SWE), agglutinates, and vapor-deposited nanophase Fe0 in size fractions of lunar soils, as well as the IR spectra of size fractions of lunar soils, are compatible with this model. A direct test of this model requires bringing back glue-impregnated tubes of lunar soil samples to be dissected and examined on Earth.
Age-related differences in reaction time task performance in young children.
Kiselev, Sergey; Espy, Kimberly Andrews; Sheffield, Tiffany
2009-02-01
Performance of reaction time (RT) tasks was investigated in young children and adults to test the hypothesis that age-related differences in processing speed supersede a "global" mechanism and are a function of specific differences in task demands and processing requirements. The sample consisted of 54 4-year-olds, 53 5-year-olds, 59 6-year-olds, and 35 adults from Russia. Using the regression approach pioneered by Brinley and the transformation method proposed by Madden and colleagues and Ridderinkhoff and van der Molen, age-related differences in processing speed differed among RT tasks with varying demands. In particular, RTs differed between children and adults on tasks that required response suppression, discrimination of color or spatial orientation, reversal of contingencies of previously learned stimulus-response rules, and greater stimulus-response complexity. Relative costs of these RT task differences were larger than predicted by the global difference hypothesis except for response suppression. Among young children, age-related differences larger than predicted by the global difference hypothesis were evident when tasks required color or spatial orientation discrimination and stimulus-response rule complexity, but not for response suppression or reversal of stimulus-response contingencies. Process-specific, age-related differences in processing speed that support heterochronicity of brain development during childhood were revealed.
NASA Technical Reports Server (NTRS)
Haines, Jennifer C.; Chen, Lung-Wen A.; Taubman, Brett F.; Doddridge, Bruce G.; Dickerson, Russell R.
2007-01-01
Reliable determination of the effects of air quality on public health and the environment requires accurate measurement of PM2.5 mass and the individual chemical components of fine aerosols. This study seeks to evaluate PM2.5 measurements that are part of a newly established national network by comparing them with a more conventional sampling system. Experiments were carried out during 2002 at a suburban site in Maryland, United States, where two samplers from the U.S. Environmental Protection Agency (USEPA) Speciation Trends Network (the Met One Speciation Air Sampling System, STNS, and the Thermo Scientific Reference Ambient Air Sampler, STNR), two Desert Research Institute Sequential Filter Samplers (DRIF), and a continuous TEOM monitor (Thermo Scientific Tapered Element Oscillating Microbalance) were sampling air in parallel. These monitors differ not only in sampling configuration but also in protocol-specific sample analysis procedures. Measurements of PM2.5 mass and major contributing species were well correlated among the different methods with r-values > 0.8. Despite the good correlations, daily concentrations of PM2.5 mass and major contributing species were significantly different at the 95% confidence level from 5 to 100% of the time. Larger values of PM2.5 mass and individual species were generally reported from STNR and STNS. The January STNR average PM2.5 mass (8.8 µg/m3) was 1.5 µg/m3 larger than the DRIF average mass. The July STNS average PM2.5 mass (27.8 µg/m3) was 3.8 µg/m3 larger than the DRIF average mass. These differences can only be partially accounted for by known random errors. Variations in flow control, face velocity, and sampling artifacts likely influence the measurement of PM2.5 speciation and mass closure. Simple statistical tests indicate that the current uncertainty estimates used in the STN network may underestimate the actual uncertainty.
Artificial insemination for breeding non-domestic birds
Gee, G.F.; Temple, S.A.; Watson, P.F.
1978-01-01
Captive breeding of non-domestic birds has increased dramatically in this century, and production of young often exceeds that of the same number of birds in their native habitat. However, when infertility is a problem, artificial insemination can be a useful method to improve production. Artificial insemination programs with non-domestic birds are relatively recent, but several notable successes have been documented, especially with cranes and raptors. Three methods of artificial insemination are described--cooperative, massage, and electroejaculation. Cooperative artificial insemination requires training of birds imprinted on man and is used extensively in some raptor programs. The massage technique generally is used when there are larger numbers of birds to inseminate, since it requires less training of the birds than the cooperative method, and a larger number of attempted semen collections are successful. Although the best samples are obtained from birds conditioned to the capture and handling procedures associated with the massage method, samples can be obtained from wild birds. Semen collection and insemination for the crane serves to illustrate some of the modifications necessary to compensate for anatomical variations. Collection of semen by electrical stimulation is not commonly used in birds. Unlike the other two methods, which require behavioral cooperation by the bird, electroejaculation is possible in reproductively active birds without prior conditioning, provided the bird is properly restrained. Fertility from artificial insemination in captive non-domestic birds has been good. Although some spermatozoal morphology has been reported, most aspects of morphology are not useful in predicting fertility. However, spermatozoal head length in the crane may have a positive correlation with fertility. Nevertheless, insemination with the largest number of live spermatozoa is still the best guarantee of fertile egg production.
Spreadsheet Simulation of the Law of Large Numbers
ERIC Educational Resources Information Center
Boger, George
2005-01-01
If larger and larger samples are successively drawn from a population and a running average calculated after each sample has been drawn, the sequence of averages will converge to the mean, [mu], of the population. This remarkable fact, known as the law of large numbers, holds true if samples are drawn from a population of discrete or continuous…
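The running-average construction described above is easy to reproduce outside a spreadsheet. Below is a minimal Python sketch of the same exercise (a scripted stand-in for the article's spreadsheet simulation; the population parameters are arbitrary assumptions):

import numpy as np

rng = np.random.default_rng(seed=1)
population_mean, population_sd = 50.0, 12.0   # hypothetical population parameters
n_draws = 10_000

draws = rng.normal(population_mean, population_sd, size=n_draws)
running_average = np.cumsum(draws) / np.arange(1, n_draws + 1)

# The running average settles ever closer to the population mean (law of large numbers).
for n in (10, 100, 1_000, 10_000):
    print(f"after {n:>6d} draws: running average = {running_average[n - 1]:.3f}")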
Evidence for a Global Sampling Process in Extraction of Summary Statistics of Item Sizes in a Set.
Tokita, Midori; Ueda, Sachiyo; Ishiguchi, Akira
2016-01-01
Several studies have shown that our visual system may construct a "summary statistical representation" over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions on how summary statistics, such as an average, are computed remain unanswered. This study investigated sampling properties of visual information used by human observers to extract two types of summary statistics of item sets, average and variance. We presented three models of ideal observers to extract the summary statistics: a global sampling model without sampling noise, global sampling model with sampling noise, and limited sampling model. We compared the performance of an ideal observer of each model with that of human observers using statistical efficiency analysis. Results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when the sets of items are larger than 4.
Backhouse, Martin E
2002-01-01
A number of approaches to conducting economic evaluations could be adopted. However, some decision makers have a preference for wholly stochastic cost-effectiveness analyses, particularly if the sampled data are derived from randomised controlled trials (RCTs). Formal requirements for cost-effectiveness evidence have heightened concerns in the pharmaceutical industry that development costs and times might be increased if formal requirements increase the number, duration or costs of RCTs. Whether this proves to be the case or not will depend upon the timing, nature and extent of the cost-effectiveness evidence required. To illustrate how different requirements for wholly stochastic cost-effectiveness evidence could have a significant impact on two of the major determinants of new drug development costs and times, namely RCT sample size and study duration. Using data collected prospectively in a clinical evaluation, sample sizes were calculated for a number of hypothetical cost-effectiveness study design scenarios. The results were compared with a baseline clinical trial design. The sample sizes required for the cost-effectiveness study scenarios were mostly larger than those for the baseline clinical trial design. Circumstances can be such that a wholly stochastic cost-effectiveness analysis might not be a practical proposition even though its clinical counterpart is. In such situations, alternative research methodologies would be required. For wholly stochastic cost-effectiveness analyses, the importance of prior specification of the different components of study design is emphasised. However, it is doubtful whether all the information necessary for doing this will typically be available when product registration trials are being designed. Formal requirements for wholly stochastic cost-effectiveness evidence based on the standard frequentist paradigm have the potential to increase the size, duration and number of RCTs significantly and hence the costs and timelines associated with new product development. Moreover, it is possible to envisage situations where such an approach would be impossible to adopt. Clearly, further research is required into the issue of how to appraise the economic consequences of alternative economic evaluation research strategies.
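The mechanism by which stochastic cost-effectiveness endpoints inflate sample size can be illustrated with the standard two-group formula n = 2(z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2, applied once to a clinical endpoint and once to a net-monetary-benefit style endpoint whose standard deviation is dominated by skewed cost data. All numbers below are hypothetical and are not the scenarios examined in the paper.

from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Two-group sample size for a difference in means (normal approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sd / delta) ** 2)

# Hypothetical numbers: a clinical endpoint vs. a net-monetary-benefit endpoint whose
# standard deviation is dominated by highly skewed cost data.
print("clinical endpoint: ", n_per_group(delta=0.5, sd=1.0))     # ~63 per group
print("cost-effectiveness:", n_per_group(delta=500, sd=5000))    # ~1570 per group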
Stucke, Kathrin; Kieser, Meinhard
2012-12-10
In the three-arm 'gold standard' non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.
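For the continuous-outcome case, the flavour of such a calculation can be sketched with the usual normal-approximation formula for the retention-of-effect hypothesis H0: mu_E - mu_P <= theta * (mu_R - mu_P), which also shows why the optimal allocation puts more patients on the active arms than on placebo. This is a generic sketch with assumed means, SD and theta, not the paper's exact derivation nor its binary and Poisson cases.

from math import ceil
from scipy.stats import norm

def total_n(mu_e, mu_r, mu_p, sigma, theta, w, alpha=0.025, power=0.80):
    """Total sample size for the retention-of-effect test with allocation weights w = (wE, wR, wP)."""
    w_e, w_r, w_p = w
    effect = mu_e - theta * mu_r - (1 - theta) * mu_p
    var_term = 1 / w_e + theta**2 / w_r + (1 - theta) ** 2 / w_p
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return ceil((z * sigma) ** 2 * var_term / effect**2)

theta = 0.8                                       # fraction of the reference effect to be retained
mu_e, mu_r, mu_p, sigma = 10.0, 10.0, 5.0, 8.0    # hypothetical means and common SD

equal = (1 / 3, 1 / 3, 1 / 3)
# Minimising the variance term subject to the weights summing to one gives weights
# proportional to the absolute coefficients (1, theta, 1 - theta): active arms get more patients.
c = (1.0, theta, 1.0 - theta)
optimal = tuple(ci / sum(c) for ci in c)

print("equal allocation:   N =", total_n(mu_e, mu_r, mu_p, sigma, theta, equal))
print("optimal allocation: N =", total_n(mu_e, mu_r, mu_p, sigma, theta, optimal))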
A self-sampling method to obtain large volumes of undiluted cervicovaginal secretions.
Boskey, Elizabeth R; Moench, Thomas R; Hees, Paul S; Cone, Richard A
2003-02-01
Studies of vaginal physiology and pathophysiology sometime require larger volumes of undiluted cervicovaginal secretions than can be obtained by current methods. A convenient method for self-sampling these secretions outside a clinical setting can facilitate such studies of reproductive health. The goal was to develop a vaginal self-sampling method for collecting large volumes of undiluted cervicovaginal secretions. A menstrual collection device (the Instead cup) was inserted briefly into the vagina to collect secretions that were then retrieved from the cup by centrifugation in a 50-ml conical tube. All 16 women asked to perform this procedure found it feasible and acceptable. Among 27 samples, an average of 0.5 g of secretions (range, 0.1-1.5 g) was collected. This is a rapid and convenient self-sampling method for obtaining relatively large volumes of undiluted cervicovaginal secretions. It should prove suitable for a wide range of assays, including those involving sexually transmitted diseases, microbicides, vaginal physiology, immunology, and pathophysiology.
VAPOR PRESSURE ISOTOPE EFFECTS IN THE MEASUREMENT OF ENVIRONMENTAL TRITIUM SAMPLES.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhne, W.
2012-12-03
Standard procedures for the measurement of tritium in water samples often require distillation of an appropriate sample aliquot. This distillation process may result in a fractionation of tritiated water and regular light water due to the vapor pressure isotope effect, introducing either a bias or an additional contribution to the total tritium measurement uncertainty. The magnitude of the vapor pressure isotope effect is characterized as functions of the amount of water distilled from the sample aliquot and the heat settings for the distillation process. The tritium concentration in the distillate is higher than the tritium concentration in the sample early in the distillation process; it then sharply decreases due to the vapor pressure isotope effect and becomes lower than the tritium concentration in the sample, until the high tritium concentration retained in the boiling flask is evaporated at the end of the process. At that time, the tritium concentration in the distillate again overestimates the sample tritium concentration. The vapor pressure isotope effect is more pronounced the slower the evaporation and distillation process is conducted; a lower heat setting during the evaporation of the sample results in a larger bias in the tritium measurement. The experimental setup used and the fact that the current study allowed for an investigation of the relative change in vapor pressure isotope effect in the course of the distillation process distinguish it from and extend previously published measurements. The separation factor as a quantitative measure of the vapor pressure isotope effect is found to assume values of 1.034 ± 0.033, 1.052 ± 0.025, and 1.066 ± 0.037, depending on the vigor of the boiling process during distillation of the sample. A lower heat setting in the experimental setup, and therefore a less vigorous boiling process, results in a larger value for the separation factor. For a tritium measurement in water samples, this implies that the tritium concentration could be underestimated by 3 - 6%.
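As a rough back-of-the-envelope check (an interpretation added here, not a derivation from the report), reading the quoted underestimate as approximately 1 - 1/alpha for a separation factor alpha reproduces the stated 3-6% range:

# Back-of-the-envelope check (an assumed interpretation, not the report's derivation):
# read the underestimate as roughly 1 - 1/alpha for separation factor alpha.
for alpha in (1.034, 1.052, 1.066):
    print(f"alpha = {alpha:.3f} -> implied underestimate ~ {100 * (1 - 1 / alpha):.1f}%")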
Krivosheeva, Olga; Dedinaite, Andra; Claesson, Per M
2013-10-15
Mussel adhesive proteins are of great interest in many applications due to their ability to bind strongly to many types of surfaces under water. Effective use of such proteins, for instance the Mytilus edulis foot protein Mefp-1, for surface modification requires achievement of a large adsorbed amount and formation of a layer that is resistant towards desorption under changing conditions. In this work we compare the adsorbed amount and layer properties obtained by using a sample containing small Mefp-1 aggregates with those obtained by using a non-aggregated sample. We find that the use of the sample containing small aggregates leads to a higher adsorbed amount, larger layer thickness and similar water content compared to what can be achieved with a non-aggregated sample. The layer formed by the aggregated Mefp-1 was, after removal of the protein from bulk solution, exposed to aqueous solutions with high ionic strength (up to 1 M NaCl) and to solutions with low pH in order to reduce the electrostatic surface affinity. It was found that the preadsorbed Mefp-1 layer under all conditions explored was significantly more resistant towards desorption than a layer built by a synthetic cationic polyelectrolyte with similar charge density. These results suggest that the non-electrostatic surface affinity for Mefp-1 is larger than for the cationic polyelectrolyte. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
Sampling for mercury at subnanogram per litre concentrations for load estimation in rivers
Colman, J.A.; Breault, R.F.
2000-01-01
Estimation of constituent loads in streams requires collection of stream samples that are representative of constituent concentrations, that is, composites of isokinetic multiple verticals collected along a stream transect. An all-Teflon isokinetic sampler (DH-81) cleaned in 75 °C, 4 N HCl was tested using blank, split, and replicate samples to assess systematic and random sample contamination by mercury species. Mean mercury concentrations in field-equipment blanks were low: 0.135 ng/L for total mercury (ΣHg) and 0.0086 ng/L for monomethyl mercury (MeHg). Mean square errors (MSE) for ΣHg and MeHg duplicate samples collected at eight sampling stations were not statistically different from the MSE of samples split in the laboratory, which represent the analytical and splitting error. Low field-blank concentrations and statistically equal duplicate- and split-sample MSE values indicate that no measurable contamination was occurring during sampling. Standard deviations associated with example mercury load estimations were four to five times larger, on a relative basis, than standard deviations calculated from duplicate samples, indicating that error of the load determination was primarily a function of the loading model used, not of sampling or analytical methods.
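One simple way to formalize the duplicate-versus-split comparison described above is an F-test on mean square errors estimated from paired differences. The sketch below is illustrative only: the concentrations are invented, and the study's actual statistical procedure may have differed.

import numpy as np
from scipy import stats

def mse_from_pairs(pairs):
    """MSE estimated from paired measurements: sum(d^2) / (2n), with d the within-pair difference."""
    d = np.diff(np.asarray(pairs, dtype=float), axis=1).ravel()
    return float(np.sum(d**2) / (2 * len(d))), len(d)

# Hypothetical total-mercury concentrations (ng/L): field duplicates vs. laboratory splits.
field_duplicates = [(0.62, 0.70), (1.10, 1.02), (0.45, 0.49), (2.30, 2.21),
                    (0.88, 0.95), (1.55, 1.49), (0.33, 0.30), (0.97, 1.05)]
lab_splits       = [(0.64, 0.69), (1.08, 1.04), (0.47, 0.46), (2.28, 2.24),
                    (0.90, 0.93), (1.52, 1.50), (0.34, 0.32), (0.99, 1.03)]

mse_dup, n_dup = mse_from_pairs(field_duplicates)
mse_spl, n_spl = mse_from_pairs(lab_splits)
f_ratio = mse_dup / mse_spl
p_value = stats.f.sf(f_ratio, n_dup, n_spl)   # one-sided: is the field MSE larger than the split MSE?

print(f"MSE duplicates = {mse_dup:.4f}, MSE splits = {mse_spl:.4f}")
print(f"F = {f_ratio:.2f}, one-sided p = {p_value:.3f}")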
Optical design considerations when imaging the fundus with an adaptive optics correction
NASA Astrophysics Data System (ADS)
Wang, Weiwei; Campbell, Melanie C. W.; Kisilak, Marsha L.; Boyd, Shelley R.
2008-06-01
Adaptive Optics (AO) technology has been used in confocal scanning laser ophthalmoscopes (CSLO) which are analogous to confocal scanning laser microscopes (CSLM) with advantages of real-time imaging, increased image contrast, a resistance to image degradation by scattered light, and improved optical sectioning. With AO, the instrument-eye system can have low enough aberrations for the optical quality to be limited primarily by diffraction. Diffraction-limited, high resolution imaging would be beneficial in the understanding and early detection of eye diseases such as diabetic retinopathy. However, to maintain diffraction-limited imaging, sufficient pixel sampling over the field of view is required, resulting in the need for increased data acquisition rates for larger fields. Imaging over smaller fields may be a disadvantage with clinical subjects because of fixation instability and the need to examine larger areas of the retina. Reduction in field size also reduces the amount of light sampled per pixel, increasing photon noise. For these reasons, we considered an instrument design with a larger field of view. When choosing scanners to be used in an AOCSLO, the ideal frame rate should be above the flicker fusion rate for the human observer and would also allow user control of targets projected onto the retina. In our AOCSLO design, we have studied the tradeoffs between field size, frame rate and factors affecting resolution. We will outline optical approaches to overcome some of these tradeoffs and still allow detection of the earliest changes in the fundus in diabetic retinopathy.
[Effective interventions to reduce absenteeism among hospital nurses].
Blanca-Gutiérrez, Joaquín Jesús; Jiménez-Díaz, María del Carmen; Escalera-Franco, Luis Felipe
2013-01-01
To select and summarize the interventions that have proved effective in reducing absenteeism among hospital nurses. A scoping review was conducted through a literature search using Medline, Web of Science, Cinahl, Embase, Lilacs, Cuiden and Cochrane Library Plus databases. Of a total of 361 articles extracted, 15 were finally selected for this review. The implementation of multifaceted support or physical training programs can produce positive results in terms of reducing absenteeism among hospital nurses. Cognitive-behavioral type interventions require studies with larger samples to provide conclusive results. Establishing more flexible working shifts may also reduce absenteeism rates, although again studies with larger samples are needed. Programs aimed at managing change developed by nurses themselves, participatory management of professional relations, the support provided by supervisors who are opposed to hierarchical leadership styles, and wage supplements that reward the lack of absence can also reduce these types of indicators. Absenteeism can be considered as a final result and a consequence of the level of job satisfaction. The effectiveness of interventions to reduce absenteeism among hospital nurses will no doubt largely depend on the ability of these interventions to increase the job satisfaction of these workers. Copyright © 2012 SESPAS. Published by Elsevier Espana. All rights reserved.
Formation of a xerogel in reduced gravity using the acid catalysed silica sol-gel reaction
NASA Astrophysics Data System (ADS)
Pienaar, Christine L.; Steinberg, Theodore A.
2006-01-01
An acid catalysed silica sol-gel reaction was used to create a xerogel in reduced gravity. Samples were formed in a special apparatus which utilised vacuum and heating to speed up the gelation process. Testing was conducted aboard NASA's KC-135 aircraft which flies a parabolic trajectory, producing a series of 25 second reduced gravity periods. The samples formed in reduced gravity were compared against a control sample formed in normal gravity. 29Si NMR and nitrogen adsorption/desorption techniques yielded information on the molecular and physical structure of the xerogels. The microstructure of the reduced gravity samples contained more Q4 groups and fewer Q3 and Q2 groups than the control sample. The pore size of the reduced gravity samples was also larger than that of the control sample. This indicated that in a reduced gravity environment, where convection is lessened due to the removal of buoyancy forces, the microstructure formed through cyclisation reactions rather than bimolecularisation reactions. The latter requires the movement of molecules for reactions to occur whereas cyclisation only requires a favourable configuration. Q4 groups are stabilised when contained in a ring structure and are unlikely to undergo repolymerisation. Thus reduced gravity favoured the formation of a xerogel through cyclisation, producing a structure with more highly coordinated Q groups. The xerogel formed in normal gravity contained both chain and ring structures as bimolecularisation reactions were able to effectively compete with cyclisation.
Pham-Tuan, Hai; Kaskavelis, Lefteris; Daykin, Clare A; Janssen, Hans-Gerd
2003-06-15
"Metabonomics" has in the past decade demonstrated enormous potential in furthering the understanding of, for example, disease processes, toxicological mechanisms, and biomarker discovery. The same principles can also provide a systematic and comprehensive approach to the study of food ingredient impact on consumer health. However, "metabonomic" methodology requires the development of rapid, advanced analytical tools to comprehensively profile biofluid metabolites within consumers. Until now, NMR spectroscopy has been used for this purpose almost exclusively. Chromatographic techniques and in particular HPLC, have not been exploited accordingly. The main drawbacks of chromatography are the long analysis time, instabilities in the sample fingerprint and the rigorous sample preparation required. This contribution addresses these problems in the quest to develop generic methods for high-throughput profiling using HPLC. After a careful optimization process, stable fingerprints of biofluid samples can be obtained using standard HPLC equipment. A method using a short monolithic column and a rapid gradient with a high flow-rate has been developed that allowed rapid and detailed profiling of larger numbers of urine samples. The method can be easily translated into a slow, shallow-gradient high-resolution method for identification of interesting peaks by LC-MS/NMR. A similar approach has been applied for cell culture media samples. Due to the much higher protein content of such samples non-porous polymer-based small particle columns yielded the best results. The study clearly shows that HPLC can be used in metabonomic fingerprinting studies.
A System for Cost and Reimbursement Control in Hospitals
Fetter, Robert B.; Thompson, John D.; Mills, Ronald E.
1976-01-01
This paper approaches the design of a regional or statewide hospital rate-setting system as the underpinning of a larger system which permits a regulatory agency to satisfy the requirements of various public laws now on the books or in process. It aims to generate valid interinstitutional monitoring on the three parameters of cost, utilization, and quality review. Such an approach requires the extension of the usual departmental cost and budgeting system to include consideration of the mix of patients treated and the utilization of various resources, including patient days, in the treatment of these patients. A sampling framework for the application of process-based quality studies and the generation of selected performance measurements is also included. PMID:941461
White, James M.; Faber, Vance; Saltzman, Jeffrey S.
1992-01-01
An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes which represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete lookup table (LUT) where an 8-bit data signal is enabled to form a display of 24-bit color values. The LUT is formed in a sampling and averaging process from the image color values with no requirement to define discrete Voronoi regions for color compression. Image color values are assigned 8-bit pointers to their closest LUT value whereby data processing requires only the 8-bit pointer value to provide 24-bit color values from the LUT.
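The following Python sketch illustrates the general two-step idea described above: build a 256-entry lookup table (LUT) from sampled and averaged image colors, then store one 8-bit pointer per pixel to its closest table entry so that the display path expands 8-bit data back to 24-bit color. The coarse-bin averaging used here is only an illustrative stand-in; the patented sampling-and-averaging procedure itself is not reproduced.

import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)   # stand-in 24-bit image

# Build a 256-entry LUT by sampling pixels and averaging within coarse 3-3-2 bit bins
# (an illustrative grouping; the patented method uses its own sampling/averaging scheme).
pixels = image.reshape(-1, 3).astype(np.float64)
sample = pixels[rng.choice(len(pixels), size=5000, replace=False)]
bins = (sample[:, 0] // 32).astype(int) * 32 + (sample[:, 1] // 32).astype(int) * 4 \
       + (sample[:, 2] // 64).astype(int)
lut = np.zeros((256, 3))
for b in range(256):
    members = sample[bins == b]
    lut[b] = members.mean(axis=0) if len(members) else (b, b, b)   # fallback for empty bins

# Assign each pixel an 8-bit pointer to its closest LUT entry (no Voronoi regions stored).
dist = ((pixels[:, None, :] - lut[None, :, :]) ** 2).sum(axis=2)
pointers = dist.argmin(axis=1).astype(np.uint8).reshape(image.shape[:2])

# Display path: the 8-bit pointer image expands back to 24-bit colour through the LUT.
reconstructed = lut[pointers].astype(np.uint8)
print(pointers.shape, pointers.dtype, reconstructed.shape)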
Carey, Roger Neill; Jani, Chinu; Johnson, Curtis; Pearce, Jim; Hui-Ng, Patricia; Lacson, Eduardo
2016-09-07
Plasma samples collected in tubes containing separator gels have replaced serum samples for most chemistry tests in many hospital and commercial laboratories. Use of plasma samples for blood tests in the dialysis population eliminates delays in sample processing while waiting for clotting to complete, laboratory technical issues associated with fibrin formation, repeat sample collection, and patient care issues caused by delay of results because of incompletely clotted specimens. Additionally, a larger volume of plasma is produced than serum for the same amount of blood collected. Plasma samples are also acceptable for most chemical tests involved in the care of patients with ESRD. This information becomes very important when United States regulatory requirements for ESRD inadvertently limit the type of sample that can be used for government reporting, quality assessment, and value-based payment initiatives. In this narrative, we summarize the renal community experience and how the subsequent resolution of the acceptability of phosphorus levels measured from serum and plasma samples may have significant implications in the country's continued development of a value-based Medicare ESRD Quality Incentive Program. Copyright © 2016 by the American Society of Nephrology.
An anthropometric analysis of Korean male helicopter pilots for helicopter cockpit design.
Lee, Wonsup; Jung, Kihyo; Jeong, Jeongrim; Park, Jangwoon; Cho, Jayoung; Kim, Heeeun; Park, Seikwon; You, Heecheon
2013-01-01
This study measured 21 anthropometric dimensions (ADs) of 94 Korean male helicopter pilots in their 20s to 40s and compared them with corresponding measurements of Korean male civilians and the US Army male personnel. The ADs and the sample size of the anthropometric survey were determined by a four-step process: (1) selection of ADs related to helicopter cockpit design, (2) evaluation of the importance of each AD, (3) calculation of required sample sizes for selected precision levels and (4) determination of an appropriate sample size by considering both the AD importance evaluation results and the sample size requirements. The anthropometric comparison reveals that the Korean helicopter pilots are larger (ratio of means = 1.01-1.08) and less dispersed (ratio of standard deviations = 0.71-0.93) than the Korean male civilians and that they are shorter in stature (0.99), have shorter upper limbs (0.89-0.96) and lower limbs (0.93-0.97), but are taller on sitting height, sitting eye height and acromial height (1.01-1.03), and less dispersed (0.68-0.97) than the US Army personnel. The anthropometric characteristics of Korean male helicopter pilots were compared with those of Korean male civilians and US Army male personnel. The sample size determination process and the anthropometric comparison results presented in this study are useful to design an anthropometric survey and a helicopter cockpit layout, respectively.
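The precision-based step of such a sample size calculation can be sketched with the standard normal-approximation formula n = (z * sigma / E)^2 for estimating a mean dimension to within +/-E. The standard deviations and precision targets below are hypothetical placeholders, not the values used in the survey.

from math import ceil
from scipy.stats import norm

def n_for_precision(sd, half_width, confidence=0.95):
    """Sample size so the mean of an anthropometric dimension is estimated
    to within +/- half_width (normal approximation)."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    return ceil((z * sd / half_width) ** 2)

# Hypothetical values: stature SD of 58 mm, sitting height SD of 32 mm.
print("stature, +/-10 mm:       n =", n_for_precision(58, 10))   # ~130
print("sitting height, +/-5 mm: n =", n_for_precision(32, 5))    # ~158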
NASA Astrophysics Data System (ADS)
Burritt, Rosemary; Francois, Elizabeth; Windler, Gary; Chavez, David
2017-06-01
Diaminoazoxyfurazan (DAAF) has many of the safety characteristics of an insensitive high explosive (IHE): it is extremely insensitive to impact and friction and is comparable to triaminotrinitrobenzene (TATB) in this way. Conversely, it demonstrates many performance characteristics of a Conventional High Explosive (CHE). DAAF has a small failure diameter of about 1.25 mm and can be sensitive to shock under the right conditions. Large-particle-size DAAF will not initiate in a typical exploding foil initiator (EFI) configuration but smaller particle sizes will. Large-particle-size DAAF, of 40 μm, was crash precipitated and ball milled into six distinct samples and pressed into pellets with a density of 1.60 g/cc (91% TMD). To investigate the effect of particle size and surface area on the direct initiation of DAAF, multiple threshold tests were performed on each sample of DAAF in different EFI configurations, which varied in flyer thickness and/or bridge size. Comparative tests were performed examining threshold voltage and correlated to Photon Doppler Velocimetry (PDV) results. The samples with larger particle sizes and surface areas required more energy to initiate, while the smaller particle sizes required less energy and could be initiated with smaller diameter flyers.
Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.
Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J
2015-06-15
Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected through both a double-crossed W-transect with samples taken every 10 steps (method 1) and four random located plots of 0.16 m(2) with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444)L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance component = 6.2), rather than due to pasture (variance component = 0.55) or season (variance component = 0.15). Using the observed distribution of L3, the required sample size (i.e. number of plots per pasture) for sampling a pasture through random plots with a particular precision was simulated. A higher relative precision was acquired when estimating PLC on pastures with a high larval contamination and a low level of aggregation compared to pastures with a low larval contamination when the same sample size was applied. In the future, herbage sampling through random plots across pasture (method 2) seems a promising method to develop further as no significant difference in counts between the methods was found and this method was less time consuming. Copyright © 2015 Elsevier B.V. All rights reserved.
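A compact way to see how contamination level and aggregation drive the required number of plots is to assume negative binomial counts per plot (mean mu, aggregation parameter k) and solve for the number of plots giving a target relative standard error. This is only a sketch under that distributional assumption; the study's simulations used the observed L3 distribution rather than an assumed one.

from math import ceil

def plots_needed(mu, k, target_rse=0.20):
    """Plots required so the relative standard error of the mean count meets target_rse,
    assuming negative binomial counts: var = mu + mu**2 / k."""
    return ceil((1 / mu + 1 / k) / target_rse**2)

# Hypothetical pastures: high contamination / low aggregation (large k) vs. low contamination / high aggregation.
print("mu=300 L3/plot, k=2.0:", plots_needed(300, 2.0), "plots")
print("mu= 30 L3/plot, k=0.5:", plots_needed(30, 0.5), "plots")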
Microbiological evaluation of groups of beef carcasses: heifers and steers.
Jericho, K W; Bradley, J A; Kozub, G C
1994-01-01
Numbers of mesophilic bacteria were estimated on carcasses of 25 heifers and 25 steers of beef breeds in a modern, high-line-speed abattoir. One side of each carcass from each sex was sampled at the end of the kill-floor, before the carcass wash, on each of 25 visits. Two adjacent excision samples (5 x 5 x 0.5 cm) were taken from each of ten sites and processed for automatic enumeration of aerobic bacteria on hydrophobic grid membrane filters. The effects of sex and carcass weight on bacterial counts were examined. Groups of carcasses were examined to determine the sample size required for future assessments of kill-floor hygiene. The log10 of the most probable number of growth units (MPNGU)/cm2 did not differ significantly between heifers and steers (average over the ten sites of 2.2) and there was no effect of carcass weight on bacterial counts for nine of the ten sites. There were, however, highly significant (p < 0.001) differences in the counts between sites and the counts from the ten sites clustered into five homogenous groups. The between-carcass component of variation at a site was generally larger than the within-carcass component. We conclude that, to estimate the mean log10 MPNGU/cm2 at a site to within +/- 0.5 units, future group-carcass evaluations require about 200 samples from 10 (two adjacent samples/site) or 20 carcasses (one sample/site). PMID:7954120
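The kind of calculation behind the closing recommendation can be sketched from variance components: the standard error of a site mean depends on the between-carcass and within-carcass components and on how samples are allocated across carcasses. The variance components below are hypothetical placeholders (chosen so that the between-carcass component is the larger, as reported), not the published estimates.

from math import sqrt

def half_width(sigma2_between, sigma2_within, n_carcasses, samples_per_carcass, z=1.96):
    """Approximate 95% half-width for the mean log10 MPNGU/cm2 at one site."""
    var_mean = sigma2_between / n_carcasses + sigma2_within / (n_carcasses * samples_per_carcass)
    return z * sqrt(var_mean)

# Hypothetical variance components (log10 units squared); between-carcass > within-carcass.
s2_between, s2_within = 0.45, 0.25

for n_c, n_s in [(10, 2), (20, 1), (5, 2)]:
    hw = half_width(s2_between, s2_within, n_c, n_s)
    print(f"{n_c} carcasses x {n_s} sample(s)/site: half-width = {hw:.2f} log10 units")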
An interferometric fiber optic hydrophone with large upper limit of dynamic range
NASA Astrophysics Data System (ADS)
Zhang, Lei; Kan, Baoxi; Zheng, Baichao; Wang, Xuefeng; Zhang, Haiyan; Hao, Liangbin; Wang, Hailiang; Hou, Zhenxing; Yu, Wenpeng
2017-10-01
Interferometric fiber optic hydrophones based on heterodyne detection are used to measure the missile dropping point in the sea. The signal caused by the missile dropping into the water can be too large to be detected, so it is necessary to boost the upper limit of dynamic range (ULODR) of the fiber optic hydrophone. In this article we analyze the factors that influence the ULODR of a fiber optic hydrophone based on heterodyne detection; the ULODR is determined by the sampling frequency fsam and the heterodyne frequency Δf. Conventionally, the sampling frequency and the heterodyne frequency satisfy the Nyquist sampling theorem, i.e., fsam is at least two times larger than Δf, and in this condition the ULODR depends on the heterodyne frequency. In order to enlarge the ULODR, the Nyquist criterion was deliberately violated, and we propose a fiber optic hydrophone in which the heterodyne frequency is larger than the sampling frequency. Both simulation and experiment were performed in this work, and the results agree: when the sampling frequency is 100 kHz, the ULODR of the large-heterodyne-frequency fiber optic hydrophone is 2.6 times larger than that of the small-heterodyne-frequency fiber optic hydrophone. When the heterodyne frequency is larger than the sampling frequency, the ULODR depends on the sampling frequency. If the sampling frequency is set at 2 MHz, the ULODR of the fiber optic hydrophone based on heterodyne detection is boosted to 1000 rad at 1 kHz, and this large-heterodyne fiber optic hydrophone can be applied to locate the drop position of the missile in the sea.
Spatial Sampling of Weather Data for Regional Crop Yield Simulations
NASA Technical Reports Server (NTRS)
Van Bussel, Lenny G. J.; Ewert, Frank; Zhao, Gang; Hoffmann, Holger; Enders, Andreas; Wallach, Daniel; Asseng, Senthold; Baigorria, Guillermo A.; Basso, Bruno; Biernath, Christian;
2016-01-01
Field-scale crop models are increasingly applied at spatio-temporal scales that range from regions to the globe and from decades up to 100 years. Sufficiently detailed data to capture the prevailing spatio-temporal heterogeneity in weather, soil, and management conditions as needed by crop models are rarely available. Effective sampling may overcome the problem of missing data but has rarely been investigated. In this study the effect of sampling weather data has been evaluated for simulating yields of winter wheat in a region in Germany over a 30-year period (1982-2011) using 12 process-based crop models. A stratified sampling was applied to compare the effect of different sizes of spatially sampled weather data (10, 30, 50, 100, 500, 1000 and full coverage of 34,078 sampling points) on simulated wheat yields. Stratified sampling was further compared with random sampling. Possible interactions between sample size and crop model were evaluated. The results showed differences in simulated yields among crop models but all models reproduced well the pattern of the stratification. Importantly, the regional mean of simulated yields based on full coverage could already be reproduced by a small sample of 10 points. This was also true for reproducing the temporal variability in simulated yields but more sampling points (about 100) were required to accurately reproduce spatial yield variability. The number of sampling points can be smaller when a stratified sampling is applied as compared to a random sampling. However, differences between crop models were observed including some interaction between the effect of sampling on simulated yields and the model used. We concluded that stratified sampling can considerably reduce the number of required simulations. But, differences between crop models must be considered as the choice for a specific model can have larger effects on simulated yields than the sampling strategy. Assessing the impact of sampling soil and crop management data for regional simulations of crop yields is still needed.
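A toy illustration of why a small stratified sample can reproduce the regional mean, while spatial variability needs more points and stratification helps relative to random sampling, is given below. The synthetic "yield" field, gradient and noise values are assumptions for illustration and have nothing to do with the crop models or weather data of the study.

import numpy as np

rng = np.random.default_rng(42)

# Synthetic "region": 34,078 grid points whose simulated yield depends on a
# stratification variable (e.g., a climate gradient) plus noise.
n_points = 34_078
gradient = rng.uniform(0, 1, n_points)
yield_t_ha = 6.0 + 3.0 * gradient + rng.normal(0, 0.6, n_points)
true_mean = yield_t_ha.mean()

strata = np.minimum((gradient * 5).astype(int), 4)      # five equal-width strata

def random_sample_mean(size):
    idx = rng.choice(n_points, size, replace=False)
    return yield_t_ha[idx].mean()

def stratified_sample_mean(size):
    per_stratum = size // 5
    means = [yield_t_ha[rng.choice(np.where(strata == s)[0], per_stratum, replace=False)].mean()
             for s in range(5)]
    weights = [np.mean(strata == s) for s in range(5)]
    return float(np.dot(weights, means))

for size in (10, 30, 100):
    rnd = np.array([random_sample_mean(size) for _ in range(500)])
    strat = np.array([stratified_sample_mean(size) for _ in range(500)])
    print(f"n={size:>3d}: RMSE random = {np.sqrt(np.mean((rnd - true_mean) ** 2)):.3f}, "
          f"stratified = {np.sqrt(np.mean((strat - true_mean) ** 2)):.3f}")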
Schall, Carol M; Wehman, Paul; Brooke, Valerie; Graham, Carolyn; McDonough, Jennifer; Brooke, Alissa; Ham, Whitney; Rounds, Rachael; Lau, Stephanie; Allen, Jaclyn
2015-12-01
This paper presents findings from a retrospective observational records review study that compares the outcomes associated with implementation of supported employment (SE) with and without prior Project SEARCH with ASD Supports (PS-ASD) on wages earned, time spent in intervention, and job retention. Results suggest that SE resulted in competitive employment for 45 adults with ASD. Twenty-five individuals received prior intervention through PS-ASD while the other 20 individuals received SE only. Individuals in this sample who received PS-ASD required fewer hours of intervention. Additionally, individuals in the PS-ASD group achieved a mean higher wage and had higher retention rates than their peers who received SE only. Further research with a larger sample is needed to confirm these findings.
[Influence of multiple sintering on wear behavior of Cercon veneering ceramic].
Gao, Qing-ping; Chao, Yong-lie; Jian, Xin-chun; Guo, Feng
2010-04-01
To investigate the influence of multiple sintering on the wear behavior of Cercon veneering ceramic. Samples were fabricated according to the manufacturer's requirements for different numbers of sintering cycles (1, 3, 5, 7 times). The wear test was performed with a modified MM-200 friction and wear machine in vitro. The wear scars were characterized by scanning electron microscopy (SEM) and atomic force microscopy (AFM). With an increasing number of sintering cycles, the wear scar width became larger. The correlation was significant at the 0.01 level. A significant difference was observed in wear scar width among the different samples (P < 0.05). SEM and AFM results showed that the veneering ceramic wear facets demonstrated grooves characteristic of abrasive wear. Multiple sintering can decrease the wear resistance of the Cercon veneer, and the wear pattern tends toward severe wear.
Phase-contrast x-ray computed tomography for biological imaging
NASA Astrophysics Data System (ADS)
Momose, Atsushi; Takeda, Tohoru; Itai, Yuji
1997-10-01
We have shown so far that 3D structures in biological soft tissues such as cancer can be revealed by phase-contrast x-ray computed tomography using an x-ray interferometer. As a next step, we aim at applications of this technique to in vivo observation, including radiographic applications. For this purpose, the field of view is desired to be more than a few centimeters. Therefore, a larger x-ray interferometer should be used with x-rays of higher energy. We have evaluated the optimal x-ray energy from the aspect of dose as a function of sample size. Moreover, the spatial resolution required of an image sensor is discussed as a function of x-ray energy and sample size, based on the requirements of the analysis of interference fringes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxwell, S.; Jones, V.
2009-05-27
A new rapid separation method that allows separation and preconcentration of actinides in urine samples was developed for the measurement of longer lived actinides by inductively coupled plasma mass spectrometry (ICP-MS) and short-lived actinides by alpha spectrometry; a hybrid approach. This method uses stacked extraction chromatography cartridges and vacuum box technology to facilitate rapid separations. Preconcentration, if required, is performed using a streamlined calcium phosphate precipitation. Similar technology has been applied to separate actinides prior to measurement by alpha spectrometry, but this new method has been developed with elution reagents now compatible with ICP-MS as well. Purified solutions are split between ICP-MS and alpha spectrometry so that long- and short-lived actinide isotopes can be measured successfully. The method allows for simultaneous extraction of 24 samples (including QC samples) in less than 3 h. Simultaneous sample preparation can offer significant time savings over sequential sample preparation. For example, sequential sample preparation of 24 samples taking just 15 min each requires 6 h to complete. The simplicity and speed of this new method makes it attractive for radiological emergency response. If preconcentration is applied, the method is applicable to larger sample aliquots for occupational exposures as well. The chemical recoveries are typically greater than 90%, in contrast to other reported methods using flow injection separation techniques for urine samples where plutonium yields were 70-80%. This method allows measurement of both long-lived and short-lived actinide isotopes. 239Pu, 242Pu, 237Np, 243Am, 234U, 235U and 238U were measured by ICP-MS, while 236Pu, 238Pu, 239Pu, 241Am, 243Am and 244Cm were measured by alpha spectrometry. The method can also be adapted so that the separation of uranium isotopes for assay is not required, if uranium assay by direct dilution of the urine sample is preferred instead. Multiple vacuum box locations may be set-up to supply several ICP-MS units with purified sample fractions such that a high sample throughput may be achieved, while still allowing for rapid measurement of short-lived actinides by alpha spectrometry.
A novel atmospheric tritium sampling system
NASA Astrophysics Data System (ADS)
Qin, Lailai; Xia, Zhenghai; Gu, Shaozhong; Zhang, Dongxun; Bao, Guangliang; Han, Xingbo; Ma, Yuhua; Deng, Ke; Liu, Jiayu; Zhang, Qin; Ma, Zhaowei; Yang, Guo; Liu, Wei; Liu, Guimin
2018-06-01
The health hazard of tritium is related to its chemical form. Sampling different chemical forms of tritium simultaneously therefore becomes significant. Here a novel atmospheric tritium sampling system (TS-212) was developed to collect tritiated water (HTO), tritiated hydrogen (HT) and tritiated methane (CH3T) simultaneously. It consisted of an air inlet system, three parallel-connected sampling channels, a hydrogen supply module, a methane supply module and a remote control system. It worked at an air flow rate of 1 L/min to 5 L/min, with the catalyst furnace temperature at 200 °C for HT sampling and 400 °C for CH3T sampling. Conversion rates of both HT and CH3T to HTO were larger than 99%. The collection efficiency of the two-stage trap sets for HTO was larger than 96% over a 12 h working time without the traps being blocked. Therefore, the collection efficiencies of TS-212 are larger than 95% for tritium in its different chemical forms in the environment. Besides, the remote control system made sampling more intelligent, reducing the operator's work intensity. Based on the performance parameters described above, the TS-212 can be used to sample atmospheric tritium in its different chemical forms.
Polarimetric imaging of biological tissues based on the indices of polarimetric purity.
Van Eeckhout, Albert; Lizana, Angel; Garcia-Caurel, Enric; Gil, José J; Sansa, Adrià; Rodríguez, Carla; Estévez, Irene; González, Emilio; Escalera, Juan C; Moreno, Ignacio; Campos, Juan
2018-04-01
We highlight the interest of using the indices of polarimetric purity (IPPs) for the inspection of biological tissues. The IPPs were recently proposed in the literature and provide a further synthesis of the depolarizing properties of samples. Compared with standard polarimetric images of biological samples, IPP-based images lead to larger image contrast for some biological structures and to a further physical interpretation of the depolarizing mechanisms inherent to the samples. In addition, unlike other methods, their calculation does not require advanced algebraic operations (as is the case for polar decompositions), and they yield three indicators that are easy to implement. We also propose a pseudo-colored encoding of the IPP information that leads to an improved visualization of samples. This last technique opens the possibility of tailored adjustment of tissue contrast by using customized pseudo-colored images. The potential of the IPP approach is experimentally highlighted throughout the manuscript by studying three different ex-vivo samples. A significant image contrast enhancement is obtained by using the IPP-based methods, compared to standard polarimetric images. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
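For readers wishing to reproduce the quantities involved, the IPPs can be computed from the ordered eigenvalues of the coherency matrix associated with a measured Mueller matrix. The sketch below follows the usual definitions of P1, P2 and P3 from the eigenvalue spectrum; the example Mueller matrix is an arbitrary partial depolarizer, not a tissue measurement from this work.

import numpy as np

def coherency_matrix(M):
    """Coherency matrix C = (1/4) * sum_ij M_ij (sigma_i kron conj(sigma_j))."""
    s = [np.eye(2, dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex)]
    C = np.zeros((4, 4), dtype=complex)
    for i in range(4):
        for j in range(4):
            C += M[i, j] * np.kron(s[i], np.conj(s[j]))
    return C / 4.0

def indices_of_polarimetric_purity(M):
    lam = np.sort(np.linalg.eigvalsh(coherency_matrix(M)))[::-1]   # lam1 >= ... >= lam4
    tr = lam.sum()                                                 # equals M[0, 0]
    P1 = (lam[0] - lam[1]) / tr
    P2 = (lam[0] + lam[1] - 2 * lam[2]) / tr
    P3 = (lam[0] + lam[1] + lam[2] - 3 * lam[3]) / tr
    return P1, P2, P3

# Example: an isotropic partial depolarizer (illustrative, not a measured tissue matrix).
M = np.diag([1.0, 0.6, 0.6, 0.4])
print("P1, P2, P3 =", [round(p, 3) for p in indices_of_polarimetric_purity(M)])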
Dombrowski, Kirk; Khan, Bilal; Wendel, Travis; McLean, Katherine; Misshula, Evan; Curtis, Ric
2012-12-01
As part of a recent study of the dynamics of the retail market for methamphetamine use in New York City, we used network sampling methods to estimate the size of the total networked population. This process involved sampling from respondents' lists of co-use contacts, which in turn became the basis for capture-recapture estimation. Recapture sampling was based on links to other respondents derived from demographic and "telefunken" matching procedures, the latter being an anonymized version of telephone number matching. This paper describes the matching process used to discover the links between the solicited contacts and project respondents, the capture-recapture calculation, the estimation of "false matches", and the development of confidence intervals for the final population estimates. A final population of 12,229 was estimated, with a range of 8235-23,750. The techniques described here have the special virtue of deriving an estimate for a hidden population while retaining respondent anonymity and the anonymity of network alters, but likely require a larger sample size than the 132 persons interviewed to attain acceptable confidence levels for the estimate.
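The simplest two-sample version of such a calculation is the Chapman-corrected Lincoln-Petersen estimator with a normal-approximation interval, sketched below. The counts are hypothetical; the study's actual multi-source matching and false-match correction were more involved than this.

from math import sqrt

def chapman_estimate(n1, n2, m, z=1.96):
    """Two-sample capture-recapture: n1 captured first, n2 captured second, m found in both."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = (n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m) / ((m + 1) ** 2 * (m + 2))
    half = z * sqrt(var)
    return n_hat, n_hat - half, n_hat + half

# Hypothetical counts: 132 respondents, 540 solicited contacts, 6 matches recovered.
estimate, lo, hi = chapman_estimate(132, 540, 6)
print(f"estimated population ~ {estimate:.0f} (95% CI {lo:.0f} - {hi:.0f})")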
Fast IRMS screening of pseudoendogenous steroids in doping analyses.
de la Torre, Xavier; Colamonici, Cristiana; Curcio, Davide; Botrè, Francesco
2017-11-01
The detection of the abuse of pseudoendogenous steroids (testosterone and/or its precursors) is currently based, when possible, on the application of the steroid module of the World Anti-Doping Agency (WADA) athlete biological passport (ABP), implemented through the global database ADAMS. When a suspicious sample is detected, confirmation by isotope ratio mass spectrometry (IRMS) is required. It is well known that this confirmation procedure is time consuming and expensive and can be applied to only a reduced number of samples. In previous studies we have demonstrated that the longitudinal evaluation of the IRMS data is able to detect positive samples that would otherwise be evaluated as negative, improving the efficacy of the fight against doping in sport. This would require the analysis of a much larger volume of samples by IRMS. The aim of the present work is to describe an IRMS screening method that increases the throughput of samples that can be analyzed by IRMS. The detection efficacy of the method is compared with the confirmation method in use, and to assess its robustness and applicability, all the samples of a major cycling stage competition were analyzed, with the agreement of the testing authority, under routine conditions and response times. The results obtained permit the conclusion that the proposed IRMS screening method has adequate selectivity and produces results that overlap with the already validated method currently in use, permitting a much higher volume of samples to be analyzed even during a major event without compromising detection capacity. Copyright © 2017 John Wiley & Sons, Ltd.
Sakakibara, Ryota; Kitahara, Mizuho
2016-06-01
This study aimed to investigate the relations between CERQ and depression, and anxiety and also aimed to reveal the characteristics of a Japanese sample through meta-analysis. The results showed that self-blame, acceptance, rumination, catastrophizing, and blaming others had significantly positive correlations with both depression and anxiety, whereas positive refocusing, refocus on planning, positive reappraisal, and putting into perspective had significantly negative correlations with both variables. Moreover, when comparing the correlation coefficients of the Japanese samples and the combined value, correlations between depression and positive reappraisal were significantly larger than the combined value. On the other hand, regarding the correlation coefficients of depression and putting into perspective, the combined value was larger than the value of Japanese samples. In addition, compared to the combined value, the Japanese sample's positive correlation between anxiety and rumination, and negative correlation between anxiety and positive reappraisal were larger.
Sampling Mars: Analytical requirements and work to do in advance
NASA Technical Reports Server (NTRS)
Koeberl, Christian
1988-01-01
Sending a mission to Mars to collect samples and return them to the Earth for analysis is without doubt one of the most exciting and important tasks for planetary science in the near future. Many scientifically important questions are associated with the knowledge of the composition and structure of Martian samples. Amongst the most exciting questions is the clarification of the SNC problem: to prove or disprove a possible Martian origin of these meteorites. Since SNC meteorites have been used to infer the chemistry of the planet Mars and its evolution (including the accretion history), it would be important to know if the whole story is true. But before addressing possible scientific results, we have to deal with the analytical requirements, and with possible pre-return work. It is unrealistic to expect that a Mars sample return mission will bring back anything close to the amount returned by the Apollo missions. It will be more like the amount returned by the Luna missions, or at least in that order of magnitude. This requires very careful sample selection, and very precise analytical techniques. These techniques should use minimal sample sizes while optimizing the scientific output. The possibility of working with extremely small samples should not obscure another problem: possible sampling errors. As we know from terrestrial geochemical studies, sampling procedures are quite complicated and elaborate in order to avoid sampling errors. The significance of analyzing a milligram or submilligram sized sample and putting that in relationship with the genesis of whole planetary crusts has to be viewed with care. This leaves a dilemma: on the one hand, to minimize the sample size as far as possible in order to have the possibility of returning as many different samples as possible, and on the other hand, to take a sample large enough to be representative. Whole rock samples are very useful, but should not exceed the 20 to 50 g range, except in cases of extreme inhomogeneity, because for larger samples the information tends to become redundant. Soil samples should be in the 2 to 10 g range, permitting the splitting of the returned samples for studies in different laboratories with a variety of techniques.
Biofeedback and dance performance: a preliminary investigation.
Raymond, Joshua; Sajid, Imran; Parkinson, Lesley A; Gruzelier, John H
2005-03-01
Alpha-theta neurofeedback has been shown to produce professionally significant performance improvements in music students. The present study aimed to extend this work to a different performing art and compare alpha-theta neurofeedback with another form of biofeedback: heart rate variability (HRV) biofeedback. Twenty-four ballroom and Latin dancers were randomly allocated to three groups, one receiving neurofeedback, one HRV biofeedback and one no intervention. Dance was assessed before and after training. Performance improvements were found in the biofeedback groups but not in the control group. Neurofeedback and HRV biofeedback benefited performance in different ways. A replication with larger sample sizes is required.
Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin
2017-08-17
A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion has lacked sufficient consideration of the true virtues of the delayed-start design and its implications in terms of required sample size, overall information, and interpretation of the estimate in the context of small populations. To evaluate whether there are real advantages of the delayed-start design, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel group RCT in the field of rare disease, we used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios regarding the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs whose effects develop over time. In addition, the sample size will always increase, because a reduced time on placebo results in a decreased estimable treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements than expected under a standard parallel-group design. This also impacts benefit-risk assessment.
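To make the sample-size implication concrete, the following is a minimal sketch, using the standard normal-approximation formula for a two-sample comparison of means, of how an attenuated treatment effect (as expected when the time on placebo is reduced) inflates the required per-arm sample size; the effect sizes are hypothetical and not taken from the paper's example.

```python
# Sketch (illustrative only): per-arm sample size for a two-sample mean comparison,
# showing the inflation when the estimable effect shrinks.
from scipy.stats import norm

def n_per_arm(delta, sigma=1.0, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * ((z_a + z_b) * sigma / delta) ** 2

full_effect = 0.5      # hypothetical effect with full time on placebo
reduced_effect = 0.3   # hypothetical attenuated effect with a shortened placebo phase
print(round(n_per_arm(full_effect)))     # ~63 per arm
print(round(n_per_arm(reduced_effect)))  # ~175 per arm
```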
Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W
2017-08-28
The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how, or with how many implants, an implant should be statistically compared with a benchmark to assess whether it is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking. We conducted a simulation study based on a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier (1-KM) provides an unbiased estimate of implant net failure, which can be used to assess whether a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining whether an implant meets the required performance defined by an external benchmark. Current contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. It is clear that when benchmarking implant performance, net failure estimated using 1-KM is preferable to crude failure estimated by competing risk models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
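As an illustration of the one-sample non-inferiority framework described above, the sketch below tests an implant's net failure estimate against an external benchmark plus margin with a simple z-test; the failure estimate, standard error, benchmark and margin are hypothetical and do not come from the simulation study.

```python
# Minimal sketch (assumed numbers, not the authors' code): one-sided z-test of
# non-inferiority of an implant's net failure probability (e.g., 1 - Kaplan-Meier
# survival at 10 years) against an external benchmark within a margin.
from scipy.stats import norm

def noninferiority_z(failure_est, se, benchmark, margin):
    """H0: failure >= benchmark + margin  vs  H1: failure < benchmark + margin."""
    z = (failure_est - (benchmark + margin)) / se
    p_value = norm.cdf(z)       # one-sided p-value
    return z, p_value

# Hypothetical: 1-KM failure 4.5% (SE 0.6%), benchmark 5%, non-inferiority margin 2%
print(noninferiority_z(0.045, 0.006, 0.05, 0.02))
```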
Sampling and data handling methods for inhalable particulate sampling. Final report nov 78-dec 80
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, W.B.; Cushing, K.M.; Johnson, J.W.
1982-05-01
The report reviews the objectives of a research program on sampling and measuring particles in the inhalable particulate (IP) size range in emissions from stationary sources, and describes methods and equipment required. A computer technique was developed to analyze data on particle-size distributions of samples taken with cascade impactors from industrial process streams. Research in sampling systems for IP matter included concepts for maintaining isokinetic sampling conditions, necessary for representative sampling of the larger particles, while flowrates in the particle-sizing device were constant. Laboratory studies were conducted to develop suitable IP sampling systems with overall cut diameters of 15 micrometers and conforming to a specified collection efficiency curve. Collection efficiencies were similarly measured for a horizontal elutriator. Design parameters were calculated for horizontal elutriators to be used with impactors, the EPA SASS train, and the EPA FAS train. Two cyclone systems were designed and evaluated. Tests on an Andersen Size Selective Inlet, a 15-micrometer precollector for high-volume samplers, showed its performance to be within the proposed limits for IP samplers. A stack sampling system was designed in which the aerosol is diluted in flow patterns and with mixing times simulating those in stack plumes.
The effects of spatial sampling choices on MR temperature measurements.
Todd, Nick; Vyas, Urvi; de Bever, Josh; Payne, Allison; Parker, Dennis L
2011-02-01
The purpose of this article is to quantify the effects that spatial sampling parameters have on the accuracy of magnetic resonance temperature measurements during high intensity focused ultrasound treatments. Spatial resolution and position of the sampling grid were considered using experimental and simulated data for two different types of high intensity focused ultrasound heating trajectories (a single point and a 4-mm circle), with maximum measured temperature and thermal dose volume as the metrics. It is demonstrated that measurement accuracy is related to the curvature of the temperature distribution, where regions with larger spatial second derivatives require higher resolution. The location of the sampling grid relative to the temperature distribution has a significant effect on the measured values. When imaging at 1.0 × 1.0 × 3.0 mm3 resolution, the measured values for maximum temperature and volume dosed to 240 cumulative equivalent minutes (CEM) or greater varied by 17% and 33%, respectively, for the single-point heating case, and by 5% and 18%, respectively, for the 4-mm circle heating case. Accurate measurement of the maximum temperature required imaging at 1.0 × 1.0 × 3.0 mm3 resolution for the single-point heating case and 2.0 × 2.0 × 5.0 mm3 resolution for the 4-mm circle heating case. Copyright © 2010 Wiley-Liss, Inc.
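The dependence of the measured peak temperature on voxel size can be illustrated with a toy partial-volume calculation; the Gaussian hot-spot width and peak below are assumed values, not the paper's experimental parameters.

```python
# Illustrative sketch: averaging a Gaussian temperature hot-spot over coarser voxels
# underestimates the true peak, and the bias grows where the spatial second
# derivative (curvature) of the temperature distribution is larger.
import numpy as np

def measured_peak(voxel_mm, sigma_mm=1.5, peak_c=20.0, offset_mm=0.0):
    # Average the continuous 1-D Gaussian profile over one voxel centred at offset_mm.
    x = np.linspace(offset_mm - voxel_mm / 2, offset_mm + voxel_mm / 2, 1001)
    profile = peak_c * np.exp(-x**2 / (2 * sigma_mm**2))
    return profile.mean()

for voxel in (1.0, 2.0, 3.0):   # in-plane voxel size in mm
    print(voxel, round(measured_peak(voxel), 2))
```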
Kelly-Cirino, C D; Musisi, E; Byanyima, P; Kaswabuli, S; Andama, A; Sessolo, A; Sanyu, I; Zawedde, J; Curry, P S; Huang, L
2017-06-01
OMNIgene·SPUTUM (OM-S) is a sample transport reagent designed to work with all tuberculosis diagnostics while eliminating the need for cold chain. OM-S-treated sputum samples were assayed in several tests after multiday holds. Raw sputa from 100 patients underwent direct smear microscopy, were manually split and assigned to the OM-S group [OM-S added at collection (no other processing required) and tested after 0- to 5-day holds at room temperature] or standard-of-care (SOC) group (NaOH/N-acetyl l-cysteine decontamination, all tested on day of collection). Concentrated smear microscopy, Lowenstein Jensen (LJ) culture, and mycobacteria growth indicator tube (MGIT) culture were performed. For patients with negative direct smear, a second sample was split, with SOC (raw sputum) and OM-S portions (sediment) tested in the Xpert MTB/RIF (Xpert) assay. OM-S group and SOC group results were strongly concordant on all four tests [range, 89% (MGIT)-97% (Xpert)]. OM-S MGIT, LJ, and Xpert tests were in statistical agreement with SOC MGIT as reference. OM-S specimens had lower culture contamination rates (3% vs. 10% LJ; 2% vs. 5% MGIT) but required, on average, 5.6 additional days to become MGIT-positive. The findings suggest that samples held/transported in OM-S are compatible with smear microscopy, LJ or MGIT culture, and Xpert, and perform comparably to fresh sputum samples. Larger feasibility studies are warranted. Copyright © 2017. Published by Elsevier Ltd.
Shih, Weichung Joe; Li, Gang; Wang, Yining
2016-03-01
Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than adjusting the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted critical value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.
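As a concrete illustration of the weighted method mentioned above, the sketch below combines stage-wise z-statistics with prespecified weights (the inverse-normal combination); the stage statistics and weights are illustrative only and do not reproduce the paper's examples.

```python
# Sketch of a weighted (inverse-normal) combination of stage-wise test statistics
# with weights fixed at the design stage; illustrative values only.
import math
from scipy.stats import norm

def weighted_z(z_stage1, z_stage2, w1=0.5, w2=0.5):
    """Combine stage-wise z-statistics with prespecified weights w1 + w2 = 1."""
    return math.sqrt(w1) * z_stage1 + math.sqrt(w2) * z_stage2

z_combined = weighted_z(1.1, 1.8)
print(z_combined, z_combined > norm.ppf(0.975))  # compare with the unadjusted 1.96
```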
Validation Test Report for the Automated Optical Processing System (AOPS) Version 4.8
2013-06-28
be familiar with UNIX; BASH shell programming; and remote sensing, particularly regarding computer processing of satellite data. The system memory ... and storage requirements are difficult to gauge. The amount of memory needed is dependent upon the amount and type of satellite data you wish to ... process; the larger the area, the larger the memory requirement. For example, the entire Atlantic Ocean will require more processing power than the
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wen, Haiming; Lin, Yaojun; Seidman, David N.
The preparation of transmission electron microscopy (TEM) samples from powders with particle sizes larger than ~100 nm poses a challenge. The existing methods are complicated and expensive, or have a low probability of success. Herein, we report a modified methodology for preparation of TEM samples from powders, which is efficient, cost-effective, and easy to perform. This method involves mixing powders with an epoxy on a piece of weighing paper, curing the powder–epoxy mixture to form a bulk material, grinding the bulk to obtain a thin foil, punching TEM discs from the foil, dimpling the discs, and ion milling the dimpled discs to electron transparency. Compared with the well-established and robust grinding–dimpling–ion-milling method for TEM sample preparation for bulk materials, our modified approach for preparing TEM samples from powders only requires two additional simple steps. In this article, step-by-step procedures for our methodology are described in detail, and important strategies to ensure success are elucidated. Furthermore, our methodology has been applied successfully for preparing TEM samples with large thin areas and high quality for many different mechanically milled metallic powders.
Wen, Haiming; Lin, Yaojun; Seidman, David N.; ...
2015-09-09
The preparation of transmission electron microscopy (TEM) samples from powders with particle sizes larger than ~100 nm poses a challenge. The existing methods are complicated and expensive, or have a low probability of success. Herein, we report a modified methodology for preparation of TEM samples from powders, which is efficient, cost-effective, and easy to perform. This method involves mixing powders with an epoxy on a piece of weighing paper, curing the powder–epoxy mixture to form a bulk material, grinding the bulk to obtain a thin foil, punching TEM discs from the foil, dimpling the discs, and ion milling the dimpled discs to electron transparency. Compared with the well-established and robust grinding–dimpling–ion-milling method for TEM sample preparation for bulk materials, our modified approach for preparing TEM samples from powders only requires two additional simple steps. In this article, step-by-step procedures for our methodology are described in detail, and important strategies to ensure success are elucidated. Furthermore, our methodology has been applied successfully for preparing TEM samples with large thin areas and high quality for many different mechanically milled metallic powders.
Quantum Devices Bonded Beneath a Superconducting Shield: Part 2
NASA Astrophysics Data System (ADS)
McRae, Corey Rae; Abdallah, Adel; Bejanin, Jeremy; Earnest, Carolyn; McConkey, Thomas; Pagel, Zachary; Mariantoni, Matteo
The next-generation quantum computer will rely on physical quantum bits (qubits) organized into arrays to form error-robust logical qubits. In the superconducting quantum circuit implementation, this architecture will require the use of larger and larger chip sizes. In order for on-chip superconducting quantum computers to be scalable, various issues found in large chips must be addressed, including the suppression of box modes (due to the sample holder) and the suppression of slot modes (due to fractured ground planes). By bonding a metallized shield layer over a superconducting circuit using thin-film indium as a bonding agent, we have demonstrated proof of concept of an extensible circuit architecture that holds the key to the suppression of spurious modes. Microwave characterization of shielded transmission lines and measurement of superconducting resonators were compared to identical unshielded devices. The elimination of box modes was investigated, as well as bond characteristics including bond homogeneity and the presence of a superconducting connection.
NASA Astrophysics Data System (ADS)
Murina, Ezequiel L.; Fernández-Prini, Roberto; Pastorino, Claudio
2017-08-01
We studied the behavior of long chain alkanes (LCAs) as they were transferred from gas to bulk water, through the liquid-vapor interface. These systems were studied using umbrella sampling molecular dynamics simulation and we have calculated properties like free energy profiles, molecular orientation, and radius of gyration of the LCA molecules. The results show changes in conformation of the solutes along the path. LCAs adopt pronounced molecular orientations and the larger ones extend appreciably when partially immersed in the interface. In bulk water, their conformations up to dodecane are mainly extended. However, larger alkanes like eicosane present a more stable collapsed conformation as they approach bulk water. We have characterized the more probable configurations in all interface and bulk regions. The results obtained are of interest for the study of biomatter processes requiring the transfer of hydrophobic matter, especially chain-like molecules like LCAs, from gas to bulk aqueous systems through the interface.
Body mass estimates of hominin fossils and the evolution of human body size.
Grabowski, Mark; Hatala, Kevin G; Jungers, William L; Richmond, Brian G
2015-08-01
Body size directly influences an animal's place in the natural world, including its energy requirements, home range size, relative brain size, locomotion, diet, life history, and behavior. Thus, an understanding of the biology of extinct organisms, including species in our own lineage, requires accurate estimates of body size. Since the last major review of hominin body size based on postcranial morphology over 20 years ago, new fossils have been discovered, species attributions have been clarified, and methods improved. Here, we present the most comprehensive and thoroughly vetted set of individual fossil hominin body mass predictions to date, and estimation equations based on a large (n = 220) sample of modern humans of known body masses. We also present species averages based exclusively on fossils with reliable taxonomic attributions, estimates of species averages by sex, and a metric for levels of sexual dimorphism. Finally, we identify individual traits that appear to be the most reliable for mass estimation for each fossil species, for use when only one measurement is available for a fossil. Our results show that many early hominins were generally smaller-bodied than previously thought, an outcome likely due to larger estimates in previous studies resulting from the use of large-bodied modern human reference samples. Current evidence indicates that modern human-like large size first appeared by at least 3-3.5 Ma in some Australopithecus afarensis individuals. Our results challenge an evolutionary model arguing that body size increased from Australopithecus to early Homo. Instead, we show that there is no reliable evidence that the body size of non-erectus early Homo differed from that of australopiths, and confirm that Homo erectus evolved larger average body size than earlier hominins. Copyright © 2015 Elsevier Ltd. All rights reserved.
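A minimal sketch of the kind of calibration-and-prediction step that underlies postcranial body mass estimation is shown below; the reference measurements, the fossil value, and the use of a single predictor are all hypothetical and do not reproduce the paper's equations or reference sample.

```python
# Hypothetical sketch: least-squares calibration of body mass on a single postcranial
# measurement in a modern reference sample, then prediction for a fossil specimen.
# Numbers are invented; the paper's actual estimation equations and data differ.
import numpy as np

femoral_head_mm = np.array([40.0, 42.5, 45.0, 47.5, 50.0, 52.5])   # reference sample
body_mass_kg   = np.array([48.0, 53.0, 58.5, 63.0, 69.0, 74.5])

slope, intercept = np.polyfit(femoral_head_mm, body_mass_kg, 1)

fossil_measurement = 38.0   # hypothetical fossil femoral head breadth (mm)
print(round(slope * fossil_measurement + intercept, 1), "kg (point estimate)")
```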
Utilizing broadband X-rays in a Bragg coherent X-ray diffraction imaging experiment
Cha, Wonsuk; Liu, Wenjun; Harder, Ross; ...
2016-07-26
A method is presented to simplify Bragg coherent X-ray diffraction imaging studies of complex heterogeneous crystalline materials with a two-stage screening/imaging process that utilizes polychromatic and monochromatic coherent X-rays and is compatible with in situ sample environments. Coherent white-beam diffraction is used to identify an individual crystal particle or grain that displays desired properties within a larger population. A three-dimensional reciprocal-space map suitable for diffraction imaging is then measured for the Bragg peak of interest using a monochromatic beam energy scan that requires no sample motion, thus simplifying in situ chamber design. This approach was demonstrated with Au nanoparticles and will enable, for example, individual grains in a polycrystalline material of specific orientation to be selected, then imaged in three dimensions while under load.
Utilizing broadband X-rays in a Bragg coherent X-ray diffraction imaging experiment.
Cha, Wonsuk; Liu, Wenjun; Harder, Ross; Xu, Ruqing; Fuoss, Paul H; Hruszkewycz, Stephan O
2016-09-01
A method is presented to simplify Bragg coherent X-ray diffraction imaging studies of complex heterogeneous crystalline materials with a two-stage screening/imaging process that utilizes polychromatic and monochromatic coherent X-rays and is compatible with in situ sample environments. Coherent white-beam diffraction is used to identify an individual crystal particle or grain that displays desired properties within a larger population. A three-dimensional reciprocal-space map suitable for diffraction imaging is then measured for the Bragg peak of interest using a monochromatic beam energy scan that requires no sample motion, thus simplifying in situ chamber design. This approach was demonstrated with Au nanoparticles and will enable, for example, individual grains in a polycrystalline material of specific orientation to be selected, then imaged in three dimensions while under load.
Effective radium-226 concentration in meteorites
NASA Astrophysics Data System (ADS)
Girault, Frédéric; Perrier, Frédéric; Moreira, Manuel; Zanda, Brigitte; Rochette, Pierre; Teitler, Yoram
2017-07-01
The analysis of noble gases in meteorites provides constraints on the early solar system and the pre-solar nebula. This requires a better characterization and understanding of the capture, production, and release of noble gases in meteorites. The knowledge of transfer properties of noble gases for each individual meteorite could benefit from using radon-222, radioactive daughter of radium-226. The radon-222 emanating power is commonly quantified by the effective radium-226 concentration (ECRa), the product of the bulk radium-226 concentration and of the emanation coefficient E, which represents the probability of one decaying radium-226 to inject one radon-222 into the free porous network. Owing to a non-destructive, high-sensitivity accumulation method based on long photomultiplier counting sessions, we are now able to measure ECRa of meteorite samples, which usually have mass smaller than 15 g and ECRa < 0.5 Bq kg-1. We report here the results obtained from 41 different meteorites, based on 129 measurements on 70 samples using two variants of our method, showing satisfactory repeatability and a detection limit below 10-2 Bq kg-1 for a sample mass of 1 g. While two meteorites remain below detection level, we obtain for 39 meteorites heterogeneous ECRa values with mean (min-max range) of ca. 0.1 (0.018-1.30) Bq kg-1. Carbonaceous chondrites exhibit the largest ECRa values and eucrites the smallest. Such values are smaller than typical values from most terrestrial rocks, but comparable with those from Archean rocks (mean of ca. 0.18 Bq kg-1), an end-member of terrestrial rocks. Using uranium concentration from the literature, E is inferred from ECRa for all the meteorite samples. Values of E for meteorites (mean 40 ± 4%) are higher than E values for Archean rocks and reported values for lunar and Martian soils. Exceptionally large E values likely suggest that the 238U-226Ra pair would not be at equilibrium in most meteorites and that uranium and/or radium are most likely not uniformly distributed. ECRa of meteorites is correlated with E and seems to mainly reflect the gas permeability of the meteorite, which could be one important property, preserved in the meteorite, of its parent body, characterizing its history in space, possibly modified by alteration, shock metamorphism, and eventually weathering on Earth. Larger radon emanation values are associated with larger concentrations of the heaviest noble gases (argon, krypton, xenon), and larger 20Ne/22Ne and 36Ar/38Ar ratios, suggesting Earth's atmosphere contamination or solar wind implantation, and probably a similar carrier phase such as Q phase. An unclear correlation is observed with 40Ar, which may rule out a purely radiogenic effect on radon emanation. Thus, larger radon emanation suggests a larger capacity of collecting solar and terrestrial gases, which should imply higher loss of gases generated in the meteorite and larger dispersion of Pb/U ratios for age determination. This study provides the first quantification of natural radon-222 loss from meteorites and opens promising perspectives to quantify the relationship between pore space connectivity and the transfer properties for noble gases in meteorites and other extraterrestrial bodies.
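A worked example of the defining relation ECRa = C_Ra × E, rearranged to recover the emanation coefficient, is given below; the numbers are illustrative, not measurements from the paper.

```python
# Worked example of the quantity defined above: ECRa = C_Ra * E, so the emanation
# coefficient is E = ECRa / C_Ra. Illustrative values only.
measured_ecra = 0.10     # Bq kg^-1, effective radium-226 concentration
bulk_ra226 = 0.25        # Bq kg^-1, bulk radium-226 activity (e.g. inferred from the
                         # uranium content, assuming 238U-226Ra equilibrium)

emanation_coefficient = measured_ecra / bulk_ra226
print(f"E = {emanation_coefficient:.0%}")   # 40%
```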
Eddy Covariance Measurements of the Sea-Spray Aerosol Flux
NASA Astrophysics Data System (ADS)
Brooks, I. M.; Norris, S. J.; Yelland, M. J.; Pascal, R. W.; Prytherch, J.
2015-12-01
Historically, almost all estimates of the sea-spray aerosol source flux have been inferred through various indirect methods. Direct estimates via eddy covariance have been attempted by only a handful of studies, most of which measured only the total number flux, or achieved rather coarse size segregation. Applying eddy covariance to the measurement of sea-spray fluxes is challenging: most instrumentation must be located in a laboratory space requiring long sample lines to an inlet collocated with a sonic anemometer; however, larger particles are easily lost to the walls of the sample line. Marine particle concentrations are generally low, requiring a high sample volume to achieve adequate statistics. The highly hygroscopic nature of sea salt means particles change size rapidly with fluctuations in relative humidity; this introduces an apparent bias in flux measurements if particles are sized at ambient humidity. The Compact Lightweight Aerosol Spectrometer Probe (CLASP) was developed specifically to make high rate measurements of aerosol size distributions for use in eddy covariance measurements, and the instrument and data processing and analysis techniques have been refined over the course of several projects. Here we will review some of the issues and limitations related to making eddy covariance measurements of the sea spray source flux over the open ocean, summarise some key results from the last decade, and present new results from a 3-year long ship-based measurement campaign as part of the WAGES project. Finally we will consider requirements for future progress.
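For context, the eddy-covariance flux itself is simply the covariance of vertical wind fluctuations with concentration fluctuations; the toy sketch below shows that calculation on synthetic 10 Hz data and omits the despiking, detrending, lag and humidity corrections that a real sea-spray flux system requires.

```python
# Minimal sketch of an eddy-covariance flux estimate: F = <w'c'>, the covariance of
# vertical wind (w) and aerosol concentration (c) fluctuations. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n = 20 * 60 * 10                          # 20 minutes at 10 Hz
w = rng.normal(0.0, 0.3, n)               # vertical wind (m s^-1)
c = 50 + 5.0 * w + rng.normal(0, 2, n)    # aerosol number concentration (cm^-3)

flux = np.mean((w - w.mean()) * (c - c.mean()))   # <w'c'>, in cm^-3 m s^-1
print(flux)
```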
Clinical evaluation of a miniaturized desktop breath hydrogen analyzer.
Duan, L P; Braden, B; Clement, T; Caspary, W F; Lembcke, B
1994-10-01
A small desktop electrochemical H2 analyzer (EC-60-Hydrogen monitor) was compared with a stationary electrochemical H2 monitor (GMI-exhaled Hydrogen monitor). The EC-60-H2 monitor shows a high degree of precision for repetitive (n = 10) measurements of standard hydrogen mixtures (CV 1-8%). The response time for completion of measurement is shorter than that of the GMI-exhaled H2 monitor (37 sec vs 53 sec; p < 0.0001), while reset times are almost identical (54 sec vs 51 sec, n.s.). In a clinical setting, breath H2 concentrations measured with the EC-60-H2 monitor and the GMI-exhaled H2 monitor were in excellent agreement with a linear correlation (Y = 1.12X + 1.022, r2 = 0.9617, n = 115). With increasing H2 concentrations the EC-60-H2 monitor required larger sample volumes to maintain sufficient precision, and sample volumes greater than 200 ml were required at H2 concentrations > 30 ppm. For routine gastrointestinal function testing, the EC-60-H2 monitor is a satisfactory, reliable, easy-to-use and inexpensive desktop breath hydrogen analyzer, whereas in patients with difficulty cooperating (children, people with severe pulmonary insufficiency), special care has to be taken to obtain sufficiently large breath samples.
How many stakes are required to measure the mass balance of a glacier?
Fountain, A.G.; Vecchia, A.
1999-01-01
Glacier mass balance is estimated for South Cascade Glacier and Maclure Glacier using a one-dimensional regression of mass balance with altitude as an alternative to the traditional approach of contouring mass balance values. One attractive feature of regression is that it can be applied to sparse data sets where contouring is not possible and can provide an objective error of the resulting estimate. Regression methods yielded mass balance values equivalent to contouring methods. The effect of the number of mass balance measurements on the final value for the glacier showed that sample sizes as small as five stakes provided reasonable estimates, although the error estimates were greater than for larger sample sizes. Different spatial patterns of measurement locations showed no appreciable influence on the final value as long as different surface altitudes were intermittently sampled over the altitude range of the glacier. Two different regression equations were examined, a quadratic, and a piecewise linear spline, and comparison of results showed little sensitivity to the type of equation. These results point to the dominant effect of the gradient of mass balance with altitude of alpine glaciers compared to transverse variations. The number of mass balance measurements required to determine the glacier balance appears to be scale invariant for small glaciers and five to ten stakes are sufficient.
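A minimal sketch of the regression approach, assuming a quadratic balance-altitude relation and an invented set of five stakes plus a hypothetical area-altitude distribution, is shown below; it is not the authors' code or data.

```python
# Sketch: fit point balances against altitude (quadratic here), then integrate the
# fitted profile over the glacier's area-altitude distribution. All values invented.
import numpy as np

stake_altitude_m = np.array([1650, 1750, 1850, 1950, 2050])     # five stakes
stake_balance_mwe = np.array([-2.1, -1.2, -0.3, 0.4, 0.9])      # metres water equivalent

coeffs = np.polyfit(stake_altitude_m, stake_balance_mwe, 2)      # quadratic fit

band_altitude_m = np.arange(1600, 2101, 50)                      # 50-m altitude bands
band_area_km2 = np.array([0.2, 0.4, 0.7, 0.9, 1.0, 0.8, 0.6, 0.4, 0.2, 0.1, 0.05])
band_balance = np.polyval(coeffs, band_altitude_m)

glacier_balance = np.sum(band_balance * band_area_km2) / band_area_km2.sum()
print(round(glacier_balance, 2), "m w.e.")
```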
NASA Astrophysics Data System (ADS)
Tiret, O.; Salucci, P.; Bernardi, M.; Maraston, C.; Pforr, J.
2011-03-01
We analyse a sample of 23 supermassive elliptical galaxies (central velocity dispersion larger than 330 km s^-1) drawn from the Sloan Digital Sky Survey. For each object, we estimate the dynamical mass from the light profile and central velocity dispersion, and compare it with the stellar mass derived from stellar population models. We show that these galaxies are dominated by luminous matter within the radius for which the velocity dispersion is measured. We find that the sizes and stellar masses are tightly correlated, with Re ∝ M*^1.1, making the mean density within the de Vaucouleurs radius a steeply declining function of M*: ρe ∝ M*^-2.2. These scalings are easily derived from the virial theorem if one recalls that this sample has essentially fixed (but large) σ0. In contrast, the mean density within 1 kpc is almost independent of M*, at a value that is in good agreement with recent studies of z ~ 2 galaxies. The fact that the mass within 1 kpc has remained approximately unchanged suggests assembly histories that were dominated by minor mergers, but we discuss why this is not the unique way to achieve this. Moreover, the total stellar mass of the objects in our sample is typically a factor of ~5 larger than that in the high-redshift (z ~ 2) sample, an amount which seems difficult to achieve. If our galaxies are the evolved objects of the recent high-redshift studies, then we suggest that major mergers are required at z ≳ 1.5 and that minor mergers become the dominant growth mechanism for massive galaxies at z ≲ 1.5.
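The virial argument invoked above can be written out in one line using standard relations (not quoted from the paper's text):

```latex
M_{\mathrm{dyn}} \propto \sigma_0^{2} R_e
\;\Rightarrow\; R_e \propto M_* \quad (\sigma_0 \simeq \text{const},\; M_{\mathrm{dyn}} \simeq M_*),
\qquad
\rho_e \propto \frac{M_*}{R_e^{3}} \propto M_*^{\,1-3(1.1)} = M_*^{-2.3} \approx M_*^{-2.2}.
```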
Dielectric studies on PEG-LTMS based polymer composites
NASA Astrophysics Data System (ADS)
Patil, Ravikumar V.; Praveen, D.; Damle, R.
2018-02-01
PEG-LTMS based polymer composites were prepared and studied for dielectric constant variation with frequency and temperature as a potential candidate with better dielectric properties. The solution cast technique was used for the preparation of polymer composites with five different compositions. Samples show variation in dielectric constant with frequency and temperature. The dielectric constant is large at low frequencies and higher temperatures. Samples with larger space charges showed larger dielectric constants. The highest dielectric constant observed was about 29244, for the PEG25LTMS sample at 100 Hz and 312 K.
Krempa, Heather M.
2015-10-29
Relative percent differences between methods were greater than 10 percent for most analyzed trace elements. Barium, cobalt, manganese, and boron had concentrations that were significantly different between sampling methods. Barium, molybdenum, boron, and uranium concentrations indicate a close association between pump and grab samples based on bivariate plots and simple linear regressions. Grab sample concentrations were generally larger than pump concentrations for these elements, which may be due to the use of a larger pore-size filter for grab samples. Analysis of zinc blank samples suggests zinc contamination in filtered grab samples. Variations of analyzed trace elements between pump and grab samples could reduce the ability to monitor temporal changes and potential groundwater contamination threats. The degree of precision necessary for monitoring potential groundwater threats and the application objectives need to be considered when determining acceptable amounts of variation.
Health sciences librarians' attitudes toward the Academy of Health Information Professionals
Baker, Lynda M.; Kars, Marge; Petty, Janet
2004-01-01
Objectives: The purpose of the study was to ascertain health sciences librarians' attitudes toward the Academy of Health Information Professionals (AHIP). Sample: Systematic sampling was used to select 210 names from the list of members of the Midwest Chapter of the Medical Library Association. Methods: A questionnaire containing open- and closed-ended questions was used to collect the data. Results: A total of 135 usable questionnaires were returned. Of the respondents, 34.8% are members of the academy and most are at the senior or distinguished member levels. The academy gives them a sense of professionalism and helps them to keep current with new trends. The majority of participants (65.2%) are not members of the academy. Among the various reasons proffered are that neither institutions nor employers require it and that there is no obvious benefit to belonging to the academy. Conclusions: More research needs to be done with a larger sample size to determine the attitudes of health sciences librarians, nationwide, toward the academy. PMID:15243638
Sadri, Shalane K; McEvoy, Peter M; Egan, Sarah J; Kane, Robert T; Rees, Clare S; Anderson, Rebecca A
2017-09-01
The evidence regarding whether co-morbid obsessive compulsive personality disorder (OCPD) is associated with treatment outcomes in obsessive compulsive disorder (OCD) is mixed, with some research indicating that OCPD is associated with poorer response, and some showing that it is associated with improved response. We sought to explore the role of OCPD diagnosis and the personality domain of conscientiousness on treatment outcomes for exposure and response prevention for OCD. The impact of co-morbid OCPD and conscientiousness on treatment outcomes was examined in a clinical sample of 46 participants with OCD. OCPD diagnosis and scores on conscientiousness were not associated with poorer post-treatment OCD severity, as indexed by Yale-Brown Obsessive Compulsive Scale (YBOCS) scores, although the relative sample size of OCPD was small and thus generalizability is limited. This study found no evidence that OCPD or conscientiousness were associated with treatment outcomes for OCD. Further research with larger clinical samples is required.
Class III dento-skeletal anomalies: rotational growth and treatment timing.
Mosca, G; Grippaudo, C; Marchionni, P; Deli, R
2006-03-01
The interception of a Class III malocclusion requires a long-term growth prediction in order to estimate the subject's evolution from the prepubertal phase to adulthood. The aim of this retrospective longitudinal study was to highlight the differences in facial morphology in relation to the direction of mandibular growth in a sample of subjects with Class III skeletal anomalies divided on the basis of their Petrovic's auxological categories and rotational types. The study involved 20 patients (11 females and 9 males) who started therapy before reaching their pubertal peak and were followed up for a mean of 4.3 years (range: 3.9-5.5 years). Despite the small sample size, the definition of the rotational type of growth was the main diagnostic element for setting the correct individualised therapy. We therefore believe that observation of a larger sample would reinforce the diagnostic-therapeutic validity of Petrovic's auxological categories, allow an evaluation of all rotational types, and improve the statistical significance of the results obtained.
The distribution of galaxies within the 'Great Wall'
NASA Technical Reports Server (NTRS)
Ramella, Massimo; Geller, Margaret J.; Huchra, John P.
1992-01-01
The galaxy distribution within the 'Great Wall', the most striking feature in the first three 'slices' of the CfA redshift survey extension, is examined. The Great Wall is extracted from the sample and is analyzed by counting galaxies in cells. The 'local' two-point correlation function within the Great Wall is computed, and the local correlation length is estimated to be about 15/h Mpc, about 3 times larger than the correlation length for the entire sample. The redshift distribution of galaxies in the pencil-beam survey by Broadhurst et al. (1990) shows peaks separated by large 'voids', at least to a redshift of about 0.3. The peaks might represent the intersections of their ~5/h Mpc pencil beams with structures similar to the Great Wall. Under this hypothesis, sampling of the Great Wall shows that l ≈ 12/h Mpc is the minimum projected beam size required to detect all the 'walls' at redshifts between the peak of the selection function and the effective depth of the survey.
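As a sketch of what a two-point correlation measurement involves, the toy example below computes the natural estimator DD/RR − 1 from pair counts in a uniform 2-D box; the real analysis works in redshift space with the survey selection function and geometry taken into account.

```python
# Toy sketch of a two-point correlation estimate via pair counts (natural estimator
# DD/RR - 1). For a uniform random "galaxy" sample, xi should scatter around zero.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
data = rng.random((500, 2)) * 100.0      # toy galaxy positions (Mpc/h)
rand = rng.random((2000, 2)) * 100.0     # random catalogue, same volume

bins = np.linspace(1, 20, 11)
dd, _ = np.histogram(pdist(data), bins=bins)
rr, _ = np.histogram(pdist(rand), bins=bins)

nd, nr = len(data), len(rand)
xi = (dd / rr) * (nr * (nr - 1)) / (nd * (nd - 1)) - 1.0
print(np.round(xi, 3))
```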
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Fei; Daymond, Mark R., E-mail: mark.daymond@queensu.ca; Yao, Zhongwen
Thin foil dog bone samples prepared from a hot rolled Zr-2.5Nb alloy have been deformed by tensile deformation to different plastic strains. The development of slip traces during loading was observed in situ through SEM, revealing that deformation starts preferentially in certain sets of grains during the elastic-plastic transition region. TEM characterization showed that sub-grain boundaries formed during hot rolling consisted of screw 〈a〉 dislocations or screw 〈c〉 and 〈a〉 dislocations. Prismatic 〈a〉 dislocations with large screw or edge components have been identified from the sample with 0.5% plastic strain. Basal 〈a〉 and pyramidal 〈c + a〉 dislocations were found in the sample that had been deformed to 1.5% plastic strain, implying that these dislocations require larger stresses to be activated.
Oxygen-induced high diffusion rate of magnesium dopants in GaN/AlGaN based UV LED heterostructures.
Michałowski, Paweł Piotr; Złotnik, Sebastian; Sitek, Jakub; Rosiński, Krzysztof; Rudziński, Mariusz
2018-05-23
Further development of GaN/AlGaN based optoelectronic devices requires optimization of the p-type material growth process. In particular, uncontrolled diffusion of Mg dopants may decrease the performance of a device. Thus it is meaningful to study the behavior of Mg and the origins of its diffusion in detail. In this work we have employed secondary ion mass spectrometry to study the diffusion of magnesium in GaN/AlGaN structures. We show that magnesium has a strong tendency to form Mg-H complexes which immobilize Mg atoms and restrain their diffusion. However, these complexes are not present in samples post-growth annealed in an oxygen atmosphere or Al-rich AlGaN structures which naturally have a high oxygen concentration. In these samples, more Mg atoms are free to diffuse and thus the average diffusion length is considerably larger than for a sample annealed in an inert atmosphere.
Shape accuracy requirements on starshades for large and small apertures
NASA Astrophysics Data System (ADS)
Shaklan, Stuart B.; Marchen, Luis; Cady, Eric
2017-09-01
Starshades have been designed to work with large and small telescopes alike. With smaller telescopes, the targets tend to be brighter and closer to the Solar System, and their putative planetary systems span angles that require starshades with radii of 10-30 m at distances of 10s of Mm. With larger apertures, the light-collecting power enables studies of more numerous, fainter systems, requiring larger, more distant starshades with radii >50 m at distances of 100s of Mm. Characterization using infrared wavelengths requires even larger starshades. A mitigating approach is to observe planets between the petals, where one can observe regions closer to the star but with reduced throughput and increased instrument scatter. We compare the starshade shape requirements, including petal shape, petal positioning, and other key terms, for the WFIRST 26m starshade and the HABEX 72 m starshade concepts, over a range of working angles and telescope sizes. We also compare starshades having rippled and smooth edges and show that their performance is nearly identical.
Code of Federal Regulations, 2010 CFR
2010-04-01
....259 Marks. (a) Required marks. Each container larger than four liters or each case used to remove wine... contents of each container larger than four liters or each case in wine gallons, or for containers larger than four liters or cases filled according to metric measure, the contents in liters. If wine is...
Teaching Self-Control to Small Groups of Dually Diagnosed Adults.
ERIC Educational Resources Information Center
Dixon, Mark R.; Holcomb, Sharon
2000-01-01
A study used a progressive delay procedure to teach self-control to six adults with mental retardation. At baseline, participants chose an immediate smaller reinforcer rather than a larger delayed reinforcer. Progressive increases in work requirements for gaining access to a larger reinforcer resulted in participants selecting larger delayed…
Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use
Arthur, Steve M.; Schwartz, Charles C.
1999-01-01
We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (mean = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (mean = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (mean = 224) for radiotracking data and 16-130 km2 (mean = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators who use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
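The dependence of the MCP estimate on the number of locations can be illustrated with a convex hull over synthetic relocations; the dispersion of the fixes below is arbitrary and the numbers do not correspond to the Kenai bears.

```python
# Sketch: minimum convex polygon (MCP) home-range area as a function of the number
# of relocations, using synthetic GPS fixes.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)
fixes_km = rng.normal(0.0, 5.0, size=(400, 2))   # synthetic relocations (km)

for n in (15, 60, 100, 400):
    subset = fixes_km[rng.choice(len(fixes_km), size=n, replace=False)]
    print(n, round(ConvexHull(subset).volume, 1), "km^2")  # .volume is the area in 2-D
```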
Horowitz, Arthur J.; Clarke, Robin T.; Merten, Gustavo Henrique
2015-01-01
Since the 1970s, there has been both continuing and growing interest in developing accurate estimates of the annual fluvial transport (fluxes and loads) of suspended sediment and sediment-associated chemical constituents. This study provides an evaluation of the effects of manual sample numbers (from 4 to 12 year−1) and sample scheduling (random-based, calendar-based and hydrology-based) on the precision, bias and accuracy of annual suspended sediment flux estimates. The evaluation is based on data from selected US Geological Survey daily suspended sediment stations in the USA and covers basins ranging in area from just over 900 km2 to nearly 2 million km2 and annual suspended sediment fluxes ranging from about 4 Kt year−1 to about 200 Mt year−1. The results appear to indicate that there is a scale effect for random-based and calendar-based sampling schemes, with larger sample numbers required as basin size decreases. All the sampling schemes evaluated display some level of positive (overestimates) or negative (underestimates) bias. The study further indicates that hydrology-based sampling schemes are likely to generate the most accurate annual suspended sediment flux estimates with the fewest number of samples, regardless of basin size. This type of scheme seems most appropriate when the determination of suspended sediment concentrations, sediment-associated chemical concentrations, annual suspended sediment and annual suspended sediment-associated chemical fluxes only represent a few of the parameters of interest in multidisciplinary, multiparameter monitoring programmes. The results are just as applicable to the calibration of autosamplers/suspended sediment surrogates currently used to measure/estimate suspended sediment concentrations and ultimately, annual suspended sediment fluxes, because manual samples are required to adjust the sample data/measurements generated by these techniques so that they provide depth-integrated and cross-sectionally representative data.
Evaluating noninvasive genetic sampling techniques to estimate large carnivore abundance.
Mumma, Matthew A; Zieminski, Chris; Fuller, Todd K; Mahoney, Shane P; Waits, Lisette P
2015-09-01
Monitoring large carnivores is difficult because of intrinsically low densities and can be dangerous if physical capture is required. Noninvasive genetic sampling (NGS) is a safe and cost-effective alternative to physical capture. We evaluated the utility of two NGS methods (scat detection dogs and hair sampling) to obtain genetic samples for abundance estimation of coyotes, black bears and Canada lynx in three areas of Newfoundland, Canada. We calculated abundance estimates using program capwire, compared sampling costs, and the cost/sample for each method relative to species and study site, and performed simulations to determine the sampling intensity necessary to achieve abundance estimates with coefficients of variation (CV) of <10%. Scat sampling was effective for both coyotes and bears and hair snags effectively sampled bears in two of three study sites. Rub pads were ineffective in sampling coyotes and lynx. The precision of abundance estimates was dependent upon the number of captures/individual. Our simulations suggested that ~3.4 captures/individual will result in a < 10% CV for abundance estimates when populations are small (23-39), but fewer captures/individual may be sufficient for larger populations. We found scat sampling was more cost-effective for sampling multiple species, but suggest that hair sampling may be less expensive at study sites with limited road access for bears. Given the dependence of sampling scheme on species and study site, the optimal sampling scheme is likely to be study-specific warranting pilot studies in most circumstances. © 2015 John Wiley & Sons Ltd.
Chen, I-Jen; Foloppe, Nicolas
2013-12-15
Computational conformational sampling underpins much of molecular modeling and design in pharmaceutical work. The sampling of smaller drug-like compounds has been an active area of research. However, few studies have tested in detail the sampling of larger, more flexible compounds, which are also relevant to drug discovery, including therapeutic peptides, macrocycles, and inhibitors of protein-protein interactions. Here, we extensively investigate mainstream conformational sampling methods on three carefully curated compound sets, namely the 'Drug-like', larger 'Flexible', and 'Macrocycle' compounds. These test molecules are chemically diverse with reliable X-ray protein-bound bioactive structures. The compared sampling methods include Stochastic Search and the recent LowModeMD from MOE, all the low-mode based approaches from MacroModel, and MD/LLMOD recently developed for macrocycles. In addition to default settings, key parameters of the sampling protocols were explored. The performance of the computational protocols was assessed via (i) the reproduction of the X-ray bioactive structures, (ii) the size, coverage and diversity of the output conformational ensembles, (iii) the compactness/extendedness of the conformers, and (iv) the ability to locate the global energy minimum. The influence of the stochastic nature of the searches on the results was also examined. Much better results were obtained by adopting search parameters enhanced over the default settings, while maintaining computational tractability. In MOE, the recent LowModeMD emerged as the method of choice. Mixed torsional/low-mode sampling from MacroModel performed as well as LowModeMD, and MD/LLMOD performed well for macrocycles. The low-mode based approaches yielded very encouraging results with the flexible and macrocycle sets. Thus, one can productively tackle the computational conformational search of larger flexible compounds for drug discovery, including macrocycles. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Proctor, B.; Mitchell, T. M.; Hirth, G.; Goldsby, D. L.; Di Toro, G.; Zorzi, F.
2013-12-01
High-velocity friction (HVF) experiments on bare rock surfaces have revealed various dynamic weakening processes (e.g., flash weakening, gel weakening, melt lubrication) that likely play a fundamental role in coseismic fault weakening. However, faults generally contain a thin layer of gouge separating the solid wallrocks, thus it is important to understand how the presence of gouge modifies the efficiency of these weakening processes at seismic slip rates. We explored the frictional behavior of bare surfaces and powdered samples of an antigorite-rich serpentinite (ARS) and a lizardite-rich serpentinite (LRS) at earthquake slip rates. HVF experiments were conducted with slip displacements ranging from ~0.5 to 2 m, at velocities ranging from 0.002 m/s to 6.5 m/s, and with normal stresses ranging from 2-22 MPa for gouge and 5-100 MPa for bare surfaces. Our results demonstrate that the friction coefficient (μ) of powdered serpentine is significantly larger than that of bare surfaces under otherwise identical conditions. Bare surface friction decreases over a weakening distance of a few centimeters to a nominally steady-state value of ~0.1 at velocities greater than 0.1 m/s. The nominal steady-state friction decreases non-linearly with increasing normal stress, from 0.14 to 0.045 at 5 and ~100 MPa, respectively, at a slip velocity of 1 m/s. Additionally, the recovery of frictional strength during deceleration depends on total displacement; samples slipped for ~50 mm recover faster than samples slipped for ~0.5 m. Microstructural analysis of bare surfaces deformed at the highest normal stresses revealed translucent glass-like material on the slip surfaces, and XRD analysis of wear material revealed an increasing presence of olivine and enstatite with increasing normal stress. In contrast, gouge requires an order of magnitude higher velocity than bare surfaces to induce frictional weakening, has a larger weakening distance, and has higher steady-state friction values for equivalent deformation conditions. Furthermore, we observe a strong normal stress dependence of the nominal steady-state friction and the weakening distance of ARS and LRS gouge, which decrease from 0.51 to 0.39 and from 25 to 10 cm between 4 MPa and 22 MPa, respectively, at a slip velocity of 1 m/s. Strain was localized onto a shear surface in the range of 100-300 microns wide in all gouge samples deformed at >10 cm/s, and XRD analyses revealed the presence of olivine and enstatite in the samples with the most weakening and none in samples with no weakening. Our results indicate that dynamic weakening occurs in gouge at low normal stress in response to strain localization and shear heating of the slip surface. However, because more initial displacement is required to localize strain, weakening initiates at higher velocities and after larger weakening distances than for bare surfaces. At higher normal stress, localization occurs after less displacement and the differences between gouge and bare-surface friction diminish; extrapolation of our data suggests that the behavior of serpentine gouge will approach that of bare surfaces at normal stresses ≥60 MPa.
Rabideau, Dustin J; Pei, Pamela P; Walensky, Rochelle P; Zheng, Amy; Parker, Robert A
2018-02-01
The expected value of sample information (EVSI) can help prioritize research, but its application is hampered by computational infeasibility, especially for complex models. We investigated an approach by Strong and colleagues to estimate EVSI by applying generalized additive models (GAM) to results generated from a probabilistic sensitivity analysis (PSA). For 3 potential HIV prevention and treatment strategies, we estimated life expectancy and lifetime costs using the Cost-effectiveness of Preventing AIDS Complications (CEPAC) model, a complex patient-level microsimulation model of HIV progression. We fitted a GAM, a flexible regression model that estimates the functional form as part of the model fitting process, to the incremental net monetary benefits obtained from the CEPAC PSA. For each case study, we calculated the expected value of partial perfect information (EVPPI) using both the conventional nested Monte Carlo approach and the GAM approach. EVSI was calculated using the GAM approach. For all 3 case studies, the GAM approach consistently gave similar estimates of EVPPI compared with the conventional approach. The EVSI behaved as expected: it increased and converged to EVPPI for larger sample sizes. For each case study, generating the PSA results for the GAM approach required 3 to 4 days on a shared cluster, after which EVPPI and EVSI across a range of sample sizes were evaluated in minutes. The conventional approach required approximately 5 weeks for the EVPPI calculation alone. Estimating EVSI using the GAM approach with results from a PSA dramatically reduced the time required to conduct a computationally intense project, which would otherwise have been impractical. Using the GAM approach, we can efficiently provide policy makers with EVSI estimates, even for complex patient-level microsimulation models.
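A minimal sketch of the regression-based EVPPI estimator of Strong and colleagues is given below, with a cubic polynomial standing in for the GAM smoother and fully synthetic PSA output; it is illustrative only and is not the CEPAC analysis (in practice a GAM library such as pygam or mgcv would be used).

```python
# Sketch of regression-based EVPPI from PSA output: regress the incremental net
# monetary benefit (INB) of each strategy on the parameter of interest, then
# EVPPI = E[max_d fitted_d] - max_d E[fitted_d]. Synthetic data; a polynomial
# stands in for the GAM smoother.
import numpy as np

rng = np.random.default_rng(3)
n_psa = 5000
theta = rng.normal(0.0, 1.0, n_psa)                        # PSA draws of the parameter
inb = np.column_stack([
    np.zeros(n_psa),                                        # reference strategy
    200 * theta + rng.normal(0, 500, n_psa),                # strategy A (hypothetical)
    150 * theta + 100 + rng.normal(0, 500, n_psa),          # strategy B (hypothetical)
])

fitted = np.column_stack([
    np.polyval(np.polyfit(theta, inb[:, d], 3), theta) for d in range(inb.shape[1])
])

evppi = np.mean(fitted.max(axis=1)) - fitted.mean(axis=0).max()
print(round(evppi, 1))
```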
Iglesias, Alejandra; Nebot, Carolina; Vázquez, Beatriz I.; Coronel-Olivares, Claudia; Franco Abuín, Carlos M.; Cepeda, Alberto
2014-01-01
Drug residues are considered environmental contaminants, and their occurrence has recently become a matter of concern. Analytical methods and monitoring systems are therefore required to control the continuous input of these drug residues into the environment. This article presents a suitable HPLC-ESI-MS/MS method for the simultaneous extraction, detection and quantification of residues of 13 drugs (antimicrobials, glucocorticosteroids, anti-inflammatories, anti-hypertensives, anti-cancer drugs and triphenylmethane dyes) in surface water. A monitoring study with 549 water samples was carried out in northwestern Spain to detect the presence of drug residues over two sampling periods during 2010, 2011 and 2012. Samples were collected from rural areas with and without farming activity and from urban areas. The 13 analytes were detected, and 18% of the samples collected showed positive results for the presence of at least one analyte. More collection sites were located in rural areas than in urban areas. However, more positive samples with higher concentrations and a larger number of analytes were detected in samples collected from sites located after the discharge of a WWTP. Results indicated that the WWTPs seems to act as a concentration point. Positive samples were also detected at a site located near a drinking water treatment plant. PMID:24837665
Iglesias, Alejandra; Nebot, Carolina; Vázquez, Beatriz I; Coronel-Olivares, Claudia; Abuín, Carlos M Franco; Cepeda, Alberto
2014-05-15
Drug residues are considered environmental contaminants, and their occurrence has recently become a matter of concern. Analytical methods and monitoring systems are therefore required to control the continuous input of these drug residues into the environment. This article presents a suitable HPLC-ESI-MS/MS method for the simultaneous extraction, detection and quantification of residues of 13 drugs (antimicrobials, glucocorticosteroids, anti-inflammatories, anti-hypertensives, anti-cancer drugs and triphenylmethane dyes) in surface water. A monitoring study with 549 water samples was carried out in northwestern Spain to detect the presence of drug residues over two sampling periods during 2010, 2011 and 2012. Samples were collected from rural areas with and without farming activity and from urban areas. The 13 analytes were detected, and 18% of the samples collected showed positive results for the presence of at least one analyte. More collection sites were located in rural areas than in urban areas. However, more positive samples with higher concentrations and a larger number of analytes were detected in samples collected from sites located after the discharge of a WWTP. Results indicated that the WWTPs seems to act as a concentration point. Positive samples were also detected at a site located near a drinking water treatment plant.
Gaissmaier, Wolfgang; Giese, Helge; Galesic, Mirta; Garcia-Retamero, Rocio; Kasper, Juergen; Kleiter, Ingo; Meuth, Sven G; Köpke, Sascha; Heesen, Christoph
2018-01-01
A shared decision-making approach is suggested for multiple sclerosis (MS) patients. To properly evaluate benefits and risks of different treatment options accordingly, MS patients require sufficient numeracy - the ability to understand quantitative information. It is unknown whether MS affects numeracy. Therefore, we investigated whether patients' numeracy was impaired compared to a probabilistic national sample. As part of the larger prospective, observational, multicenter study PERCEPT, we assessed numeracy for a clinical study sample of German MS patients (N=725) with a standard test and compared them to a German probabilistic sample (N=1001), controlling for age, sex, and education. Within patients, we assessed whether disease variables (disease duration, disability, annual relapse rate, cognitive impairment) predicted numeracy beyond these demographics. MS patients showed a comparable level of numeracy as the probabilistic national sample (68.9% vs. 68.5% correct answers, P=0.831). In both samples, numeracy was higher for men and the highly educated. Disease variables did not predict numeracy beyond demographics within patients, and predictability was generally low. This sample of MS patients understood quantitative information on the same level as the general population. There is no reason to withhold quantitative information from MS patients. Copyright © 2017 Elsevier B.V. All rights reserved.
Functional Analysis of Metabolomics Data.
Chagoyen, Mónica; López-Ibáñez, Javier; Pazos, Florencio
2016-01-01
Metabolomics aims at characterizing the repertoire of small chemical compounds in a biological sample. As the technique becomes more high-throughput and larger sets of compounds are detected, a functional analysis is required to convert these raw lists of compounds into biological knowledge. The most common way of performing such an analysis is "annotation enrichment analysis," also used in transcriptomics and proteomics. This approach extracts the annotations that are overrepresented in the set of chemical compounds detected in a given experiment. Here, we describe the protocols for performing such an analysis as well as for visualizing a set of compounds in different representations of the metabolic networks, in both cases using freely accessible web tools.
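A minimal sketch of the computation behind such an enrichment analysis, assuming a hypergeometric (Fisher-type) over-representation test; the pathway counts below are invented for illustration and do not come from any specific tool.

```python
# Hedged sketch of annotation enrichment for a compound list.
from scipy.stats import hypergeom

background = 2000          # annotated compounds in the reference database
in_pathway = 40            # background compounds annotated to one pathway
detected = 150             # compounds detected in the experiment
overlap = 12               # detected compounds annotated to that pathway

# P(X >= overlap) under sampling without replacement.
p_value = hypergeom.sf(overlap - 1, background, in_pathway, detected)
print(f"enrichment p-value: {p_value:.3g}")
```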
NASA Astrophysics Data System (ADS)
Hayes, Brian
1994-12-01
Gleaning further clues to the structure of the universe will require larger data samples. To that end, a major new survey of the skies, called the Sloan Digital Sky Survey (SDSS), is in preparation. It will catalog some 50 million galaxies and about 70 million stars. A new 2.5-meter telescope to be erected at Apache Point Observatory in New Mexico will be dedicated to the survey. The telescope is not the key innovation that will make the survey possible. The crucial factor is the technology for digitally recording large numbers of images and spectra and for automating the analysis, recognition, and classification of those images and spectra. The methods to be used are discussed.
Natural fracture systems on planetary surfaces: Genetic classification and pattern randomness
NASA Technical Reports Server (NTRS)
Rossbacher, Lisa A.
1987-01-01
One method for classifying natural fracture systems is by fracture genesis. This approach involves the physics of the formation process, and it has been used most frequently in attempts to predict subsurface fractures and petroleum reservoir productivity. This classification system can also be applied to larger fracture systems on any planetary surface. One problem in applying this classification system to planetary surfaces is that it was developed for relatively small-scale fractures that would influence porosity, particularly as observed in a core sample. Planetary studies also require consideration of large-scale fractures. Nevertheless, this system offers some valuable perspectives on fracture systems of any size.
Structure prediction of nanoclusters; a direct or a pre-screened search on the DFT energy landscape?
Farrow, M R; Chow, Y; Woodley, S M
2014-10-21
The atomic structure of inorganic nanoclusters obtained via a search for low lying minima on energy landscapes, or hypersurfaces, is reported for inorganic binary compounds: zinc oxide (ZnO)n, magnesium oxide (MgO)n, cadmium selenide (CdSe)n, and potassium fluoride (KF)n, where n = 1-12 formula units. The computational cost of each search is dominated by the effort to evaluate each sample point on the energy landscape and the number of required sample points. The effect of changing the balance between these two factors on the success of the search is investigated. The choice of sample points will also affect the number of required data points and therefore the efficiency of the search. Monte Carlo based global optimisation routines (evolutionary and stochastic quenching algorithms) within a new software package, viz. Knowledge Led Master Code (KLMC), are employed to search both directly and after pre-screening on the DFT energy landscape. Pre-screening includes structural relaxation to minimise a cheaper energy function - based on interatomic potentials - and is found to significantly improve the search efficiency, typically reducing the number of DFT calculations required to locate the local minima by more than an order of magnitude. Although the choice of functional form is important, the approach is robust to small changes to the interatomic potential parameters. The computational cost of initial DFT calculations of each structure is reduced by applying Gaussian smearing to the electronic energy levels. Larger (KF)n nanoclusters are predicted to form cuboid cuts from the rock-salt phase, but also share many structural motifs with (MgO)n for smaller clusters. The transition from 2D rings to 3D (bubble, or fullerene-like) structures occurs at a larger cluster size for (ZnO)n and (CdSe)n. Differences between the HOMO and LUMO energies, for all the compounds apart from KF, are in the visible region of the optical spectrum (2-3 eV); KF lies deep in the UV region at 5 eV and shows little variation. Extrapolating the electron affinities found for the clusters with respect to size results in the qualitatively correct work functions for the respective bulk materials.
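A minimal sketch of the pre-screening idea, with placeholder energy functions: many random candidates are ranked with a cheap surrogate and only a shortlist is passed to the expensive evaluation. None of this is KLMC code; the functions, dimensions and shortlist size are illustrative assumptions.

```python
# Hedged sketch of pre-screened search: cheap ranking before expensive refinement.
import numpy as np

rng = np.random.default_rng(0)

def cheap_energy(x):        # stand-in for an interatomic-potential relaxation
    return np.sum((x - 1.0) ** 2)

def expensive_energy(x):    # stand-in for a DFT evaluation of the same structure
    return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.sin(5 * x))

candidates = [rng.uniform(-2, 2, size=6) for _ in range(500)]
ranked = sorted(candidates, key=cheap_energy)          # pre-screen with cheap model
shortlist = ranked[:20]                                # order-of-magnitude fewer costly calls
best = min(shortlist, key=expensive_energy)            # refine with the expensive model
print("best pre-screened energy:", expensive_energy(best))
```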
Wang, Wendy Y; Foster, William A
2015-08-01
Beta diversity - the variation in species composition among spatially discrete communities - and sampling grain - the size of samples being compared - may alter our perspectives of diversity within and between landscapes before and after agricultural conversion. Such assumptions are usually based on point comparisons, which do not accurately capture actual differences in total diversity. Beta diversity is often not rigorously examined. We investigated the beta diversity of ground-foraging ant communities in fragmented oil palm and forest landscapes in Sabah, Malaysia, using diversity metrics transformed from Hill number equivalents to remove dependences on alpha diversity. We compared the beta diversities of oil palm and forest, across three hierarchically nested sampling grains. We found that oil palm and forest communities had a greater percentage of total shared species when larger samples were compared. Across all grains and disregarding relative abundances, there was higher beta diversity of all species among forest communities. However, there were higher beta diversities of common and very abundant (dominant) species in oil palm as compared to forests. Differences in beta diversities between oil palm and forest were greatest at the largest sampling grain. Larger sampling grains in oil palm may generate bigger species pools, increasing the probability of shared species with forest samples. Greater beta diversity of all species in forest may be attributed to rare species. Oil palm communities may be more heterogeneous in common and dominant species because of variable community assembly events. Rare and also common species are better captured at larger grains, boosting differences in beta diversity between larger samples of forest and oil palm communities. Although agricultural landscapes support a lower total diversity than natural forests, diversity especially of abundant species is still important for maintaining ecosystem stability. Diversity in agricultural landscapes may be greater than expected when beta diversity is accounted for at large spatial scales.
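One common way to remove the alpha-diversity dependence mentioned above is a Hill-number decomposition with multiplicative beta = gamma / alpha. The sketch below does this for order q = 1 (Shannon) with equal sample weights; the two abundance vectors are invented, and the exact transformation used in the study may differ.

```python
# Hedged sketch of an order-1 Hill-number alpha/gamma/beta decomposition.
import numpy as np

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

communities = np.array([[30, 10, 5, 0, 1],
                        [ 2, 20, 8, 6, 0]], dtype=float)
rel = communities / communities.sum(axis=1, keepdims=True)   # relative abundances

alpha = np.exp(np.mean([shannon(p) for p in rel]))   # within-sample diversity
gamma = np.exp(shannon(rel.mean(axis=0)))            # pooled, equal-weight diversity
print("beta =", gamma / alpha)                       # effective number of distinct communities
```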
Sul, Woo Jun; Cole, James R.; Jesus, Ederson da C.; Wang, Qiong; Farris, Ryan J.; Fish, Jordan A.; Tiedje, James M.
2011-01-01
High-throughput sequencing of 16S rRNA genes has increased our understanding of microbial community structure, but now even higher-throughput methods to the Illumina scale allow the creation of much larger datasets with more samples and orders-of-magnitude more sequences that swamp current analytic methods. We developed a method capable of handling these larger datasets on the basis of assignment of sequences into an existing taxonomy using a supervised learning approach (taxonomy-supervised analysis). We compared this method with a commonly used clustering approach based on sequence similarity (taxonomy-unsupervised analysis). We sampled 211 different bacterial communities from various habitats and obtained ∼1.3 million 16S rRNA sequences spanning the V4 hypervariable region by pyrosequencing. Both methodologies gave similar ecological conclusions in that β-diversity measures calculated by using these two types of matrices were significantly correlated to each other, as were the ordination configurations and hierarchical clustering dendrograms. In addition, our taxonomy-supervised analyses were also highly correlated with phylogenetic methods, such as UniFrac. The taxonomy-supervised analysis has the advantages that it is not limited by the exhaustive computation required for the alignment and clustering necessary for the taxonomy-unsupervised analysis, is more tolerant of sequencing errors, and allows comparisons when sequences are from different regions of the 16S rRNA gene. With the tremendous expansion in 16S rRNA data acquisition underway, the taxonomy-supervised approach offers the potential to provide more rapid and extensive community comparisons across habitats and samples. PMID:21873204
NASA Astrophysics Data System (ADS)
Yano, Hajime; McKay, Christopher P.; Anbar, Ariel; Tsou, Peter
The recent report of possible water vapor plumes at Europa and Ceres, together with the well-known Enceladus plume containing water vapor, salt, ammonia, and organic molecules, suggests that sample return missions could evolve into a generic approach for outer Solar System exploration in the near future, especially for the benefit of astrobiology research. Sampling such plumes can be accomplished via fly-through mission designs, modeled after the successful Stardust mission to capture and return material from Comet Wild-2 and the multiple, precise trajectory controls of the Cassini mission to fly through Enceladus’ plume. The proposed LIFE (Life Investigation For Enceladus) mission to Enceladus, which would sample organic molecules from the plume of that apparently habitable world, provides one example of the appealing scientific return of such missions. Beyond plumes, the upper atmosphere of Titan could also be sampled in this manner. The SCIM mission to Mars, also inspired by Stardust, would sample and return aerosol dust in the upper atmosphere of Mars and thus extends this concept even to other planetary bodies. Such missions share common design needs. In particular, they require large exposed sampler areas (or sampler arrays) that can be contained to the standards called for by the international planetary protection protocols that the COSPAR Planetary Protection Policy (PPP) recommends. Containment is also needed because these missions are driven by astrobiologically relevant science - including interest in organic molecules - which argues against heat sterilization that could destroy the scientific value of the samples. Sample containment is a daunting engineering challenge. Containment systems must be carefully designed to appropriate levels to satisfy the two top requirements: planetary protection policy and preserving the scientific value of samples. Planning for Mars sample return tends to center on a hermetic seal specification (i.e., gas-tight against helium escape). While this is an ideal specification, it far exceeds the current PPP requirements for Category-V “restricted Earth return”, which typically center on a probability of escape of a biologically active particle (e.g., < 1 in 10^6 chance of escape of particles > 50 nm diameter). Particles of this size (orders of magnitude larger than a helium atom) are not volatile and generally “sticky” toward surfaces; the mobility of viruses and biomolecules requires aerosolization. Thus, meeting the planetary protection challenge does not require a hermetic seal. So far, only a handful of robotic missions have accomplished deep space sample returns, i.e., Genesis, Stardust and Hayabusa. This year, Hayabusa-2 will be launched and OSIRIS-REx will follow in a few years. All of these missions are classified as “unrestricted Earth return” by the COSPAR PPP recommendation. Nevertheless, scientific requirements for organic contamination control have been implemented across the entire WBS for the sampling mechanism and Earth return capsule of Hayabusa-2. While the Genesis, Stardust and OSIRIS-REx capsules “breathe” terrestrial air as they re-enter Earth’s atmosphere, a temporary “air-tight” design was already achieved by the Hayabusa-1 sample container using a double O-ring seal, and that for Hayabusa-2 will retain noble gases and other gases released from the returned solid samples using metal seal technology. After return, these gases can be collected through a filtered needle interface without opening the entire container lid.
This expertise can be extended to meeting planetary protection requirements for sample return from “restricted return” targets. There are still some areas requiring new innovations, especially to assure contingency robustness in every phase of a return mission. These must be achieved by meeting both PPP and scientific requirements during the initial design and work breakdown structure (WBS) of the integrated sampling system, including the Earth return capsule. It is also important to note that the international communities in planetary protection, sample return science, and deep space engineering must meet to enable this game-changing opportunity for Outer Solar System exploration.
Smith, Allan Ben; King, Madeleine; Butow, Phyllis; Olver, Ian
2013-01-01
We aimed to compare data quality from online and postal questionnaires and to evaluate the practicality of these different questionnaire modes in a cancer sample. Participants in a study investigating the psychosocial sequelae of testicular cancer could choose to complete a postal or online version of the study questionnaire. Data quality was evaluated by assessing sources of nonobservational errors such as participant nonresponse, item nonresponse and sampling bias. Time taken and number of reminders required for questionnaire return were used as indicators of practicality. Participant nonresponse was significantly higher among participants who chose the postal questionnaire. The proportion of questionnaires with missing items and the mean number of missing items did not differ significantly by mode. A significantly larger proportion of tertiary-educated participants and managers/professionals completed the online questionnaire. There were no significant differences in age, relationship status, employment status, country of birth or language spoken by completion mode. Compared with postal questionnaires, online questionnaires were returned significantly more quickly and required significantly fewer reminders. These results demonstrate that online questionnaire completion can be offered in a cancer sample without compromising data quality. In fact, data quality from online questionnaires may be superior due to lower rates of participant nonresponse. Investigators should be aware of potential sampling bias created by more highly educated participants and managers/professionals choosing to complete online questionnaires. Besides this issue, online questionnaires offer an efficient method for collecting high-quality data, with faster return and fewer reminders. Copyright © 2011 John Wiley & Sons, Ltd.
Bramley, Kyle; Pisani, Margaret A.; Murphy, Terrence E.; Araujo, Katy; Homer, Robert; Puchalski, Jonathan
2016-01-01
Background: EBUS-guided transbronchial needle aspiration (TBNA) is important in the evaluation of thoracic lymphadenopathy. Reliably providing excellent diagnostic yield for malignancy, its diagnosis of sarcoidosis is inconsistent. Furthermore, when larger “core” biopsy samples of malignant tissue are required, TBNA may not suffice. The primary objective of this study was to determine if the sequential use of TBNA and a novel technique called cautery-assisted transbronchial forceps biopsies (ca-TBFB) was safe. Secondary outcomes included sensitivity and successful acquisition of tissue. Methods: Fifty unselected patients undergoing convex probe EBUS were prospectively enrolled. Under EBUS guidance, all lymph nodes ≥ 1 cm were sequentially biopsied using TBNA and ca-TBFB. Safety and sensitivity were assessed at the nodal level for 111 nodes. Results of each technique were also reported on a per-patient basis. Results: There were no significant adverse events. In nodes determined to be malignant, TBNA provided higher sensitivity (100%) than ca-TBFB (78%). However, among nodes with granulomatous inflammation, ca-TBFB exhibited higher sensitivity (90%) than TBNA (33%). For analysis based on patients rather than nodes, 6 of the 31 patients with malignancy would have been missed or understaged if the diagnosis was based on samples obtained by ca-TBFB. On the other hand, 3 of 8 patients with sarcoidosis would have been missed if analysis was based only on TBNA samples. In some cases only ca-TBFB acquired sufficient tissue for the core samples needed in clinical trials of malignancy. Conclusions: The sequential use of TBNA and ca-TBFB appears to be safe. The larger samples obtained from ca-TBFB increased its sensitivity to detect granulomatous disease and provided specimens for clinical trials of malignancy when needle biopsies were insufficient. For thoracic surgeons and advanced bronchoscopists, we advocate ca-TBFB as an alternative to TBNA in select clinical scenarios. PMID:26912301
Bramley, Kyle; Pisani, Margaret A; Murphy, Terrence E; Araujo, Katy L; Homer, Robert J; Puchalski, Jonathan T
2016-05-01
Endobronchial ultrasound (EBUS)-guided transbronchial needle aspiration (TBNA) is important in the evaluation of thoracic lymphadenopathy. Reliably providing excellent diagnostic yield for malignancy, its diagnosis of sarcoidosis is inconsistent. Furthermore, TBNA may not suffice when larger "core biopsy" samples of malignant tissue are required. The primary objective of this study was to determine if the sequential use of TBNA and a novel technique called cautery-assisted transbronchial forceps biopsy (ca-TBFB) was safe. Secondary outcomes included sensitivity and successful acquisition of tissue. The study prospectively enrolled 50 unselected patients undergoing convex-probe EBUS. All lymph nodes exceeding 1 cm were sequentially biopsied under EBUS guidance using TBNA and ca-TBFB. Safety and sensitivity were assessed at the nodal level for 111 nodes. Results of each technique were also reported for each patient. There were no significant adverse events. In nodes determined to be malignant, TBNA provided higher sensitivity (100%) than ca-TBFB (78%). However, among nodes with granulomatous inflammation, ca-TBFB exhibited higher sensitivity (90%) than TBNA (33%). On the one hand, for analysis based on patients rather than nodes, 6 of the 31 patients with malignancy would have been missed or understaged if the diagnosis were based on samples obtained by ca-TBFB. On the other hand, 3 of 8 patients with sarcoidosis would have been missed if analysis were based only on TBNA samples. In some patients, only ca-TBFB acquired sufficient tissue for the core samples needed in clinical trials of malignancy. The sequential use of TBNA and ca-TBFB appears to be safe. The larger samples obtained from ca-TBFB increased its sensitivity to detect granulomatous disease and provided adequate specimens for clinical trials of malignancy when specimens from needle biopsies were insufficient. For thoracic surgeons and advanced bronchoscopists, we advocate ca-TBFB as an alternative to TBNA in select clinical scenarios. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Sarsour, Khaled; Kalsekar, Anupama; Swindle, Ralph; Foley, Kathleen; Walsh, James K.
2011-01-01
Study Objectives: Insomnia is a chronic condition with a significant burden on health care and productivity costs. Despite this recognized burden, very few studies have examined associations between insomnia severity and healthcare and productivity costs. Design: A retrospective study linking health claims data with a telephone survey of members of a health plan in the Midwestern region of the United States. Participants: The total healthcare costs study sample consisted of 2086 health plan members who completed the survey and who had complete health claims data. The productivity costs sample consisted of 1329 health plan members who worked for pay, a subset of the total healthcare costs sample. Measurements: Subjects' age, gender, demographic variables, comorbidities, and total health care costs were ascertained using health claims. Insomnia severity and lost-productivity-related variables were assessed using telephone interview. Results: Compared with the no insomnia group, mean total healthcare costs were 75% larger in the group with moderate and severe insomnia ($1323 vs. $757, P < 0.05). Compared with the no insomnia group, mean lost productivity costs were 72% larger in the moderate and severe insomnia group ($1739 vs. $1013, P < 0.001). Chronic medical comorbidities and psychiatric comorbidities were positively associated with health care cost. In contrast, psychiatric comorbidities were associated with lost productivity, whereas medical comorbidities were not. Conclusions: Health care and lost productivity costs were consistently found to be greater in moderate and severe insomniacs compared with non-insomniacs. Factors associated with lost productivity and health care costs may be fundamentally different and may require different kinds of interventions. Future studies should focus on better understanding mechanisms linking insomnia to healthcare and productivity costs and on understanding whether developing targeted interventions will reduce these costs. Citation: Sarsour K; Kalsekar A; Swindle R; Foley K; Walsh JK. The association between insomnia severity and healthcare and productivity costs in a health plan sample. SLEEP 2011;34(4):443-450. PMID:21461322
Complex disease and phenotype mapping in the domestic dog
Hayward, Jessica J.; Castelhano, Marta G.; Oliveira, Kyle C.; Corey, Elizabeth; Balkman, Cheryl; Baxter, Tara L.; Casal, Margret L.; Center, Sharon A.; Fang, Meiying; Garrison, Susan J.; Kalla, Sara E.; Korniliev, Pavel; Kotlikoff, Michael I.; Moise, N. S.; Shannon, Laura M.; Simpson, Kenneth W.; Sutter, Nathan B.; Todhunter, Rory J.; Boyko, Adam R.
2016-01-01
The domestic dog is becoming an increasingly valuable model species in medical genetics, showing particular promise to advance our understanding of cancer and orthopaedic disease. Here we undertake the largest canine genome-wide association study to date, with a panel of over 4,200 dogs genotyped at 180,000 markers, to accelerate mapping efforts. For complex diseases, we identify loci significantly associated with hip dysplasia, elbow dysplasia, idiopathic epilepsy, lymphoma, mast cell tumour and granulomatous colitis; for morphological traits, we report three novel quantitative trait loci that influence body size and one that influences fur length and shedding. Using simulation studies, we show that modestly larger sample sizes and denser marker sets will be sufficient to identify most moderate- to large-effect complex disease loci. This proposed design will enable efficient mapping of canine complex diseases, most of which have human homologues, using far fewer samples than required in human studies. PMID:26795439
Annesley, T; Matz, K; Balogh, L; Clayton, L; Giacherio, D
1986-07-01
This liquid-chromatographic assay requires 0.2 to 0.5 mL of whole blood, avoids the use of diethyl ether, and consumes only 10 to 20% of the solvents used in prior methods. Sample preparation involves an acidic extraction with methyl-t-butyl ether, performed in a 13 X 100 mm disposable glass tube, then a short second extraction of the organic phase with sodium hydroxide. After evaporation of the methyl-t-butyl ether, chromatography is performed on an "Astec" 2.0-mm (i.d.) octyl column. We compared results by this procedure with those by use of earlier larger-scale extractions and their respective 4.6-mm (i.d.) columns; analytical recoveries of cyclosporins A and D were comparable with previous findings and results for patients' specimens were equivalent, but the microbore columns provided greatly increased resolution and sensitivity.
Markov chain sampling of the O(n) loop models on the infinite plane
NASA Astrophysics Data System (ADS)
Herdeiro, Victor
2017-07-01
A numerical method was recently proposed in Herdeiro and Doyon [Phys. Rev. E 94, 043322 (2016), 10.1103/PhysRevE.94.043322] showing a precise sampling of the infinite plane two-dimensional critical Ising model for finite lattice subsections. The present note extends the method to a larger class of models, namely the O(n) loop gas models for n ∈ (1, 2]. We argue that even though the Gibbs measure is nonlocal, it is factorizable on finite subsections when sufficient information on the loops touching the boundaries is stored. Our results attempt to show that, provided an efficient Markov chain mixing algorithm and an improved discrete lattice dilation procedure, the planar limit of the O(n) models can be numerically studied with efficiency similar to the Ising case. This confirms that scale invariance is the only requirement for the present numerical method to work.
Pearcy, Benjamin T D; McEvoy, Peter M; Roberts, Lynne D
2017-02-01
This study extends knowledge about the relationship of Internet Gaming Disorder (IGD) to other established mental disorders by exploring comorbidities with anxiety, depression, Attention Deficit Hyperactivity Disorder (ADHD), and obsessive compulsive disorder (OCD), and assessing whether IGD accounts for unique variance in distress and disability. An online survey was completed by a convenience sample that engages in Internet gaming (N = 404). Participants meeting criteria for IGD based on the Personal Internet Gaming Disorder Evaluation-9 (PIE-9) reported higher comorbidity with depression, OCD, ADHD, and anxiety compared with those who did not meet the IGD criteria. IGD explained a small proportion of unique variance in distress (1%) and disability (3%). IGD accounted for a larger proportion of unique variance in disability than anxiety and ADHD, and a similar proportion to depression. Replications with clinical samples using longitudinal designs and structured diagnostic interviews are required.
Thermal conductivity of graphene and graphite: collective excitations and mean free paths.
Fugallo, Giorgia; Cepellotti, Andrea; Paulatto, Lorenzo; Lazzeri, Michele; Marzari, Nicola; Mauri, Francesco
2014-11-12
We characterize the thermal conductivity of graphite, monolayer graphene, graphane, fluorographane, and bilayer graphene, solving exactly the Boltzmann transport equation for phonons, with phonon-phonon collision rates obtained from density functional perturbation theory. For graphite, the results are found to be in excellent agreement with experiments; notably, the thermal conductivity is 1 order of magnitude larger than that found by solving the Boltzmann equation in the single mode approximation, commonly used to describe heat transport. For graphene, we point out that a meaningful value of intrinsic thermal conductivity at room temperature can be obtained only for sample sizes of the order of 1 mm, something not considered previously. This unusual requirement is because collective phonon excitations, and not single phonons, are the main heat carriers in these materials; these excitations are characterized by mean free paths of the order of hundreds of micrometers. As a result, even Fourier's law becomes questionable for typical sample sizes, because its statistical nature makes it applicable only in the thermodynamic limit to systems larger than a few mean free paths. Finally, we discuss the effects of isotopic disorder, strain, and chemical functionalization on thermal performance. Only chemical functionalization is found to play an important role, decreasing the conductivity by a factor of 2 in hydrogenated graphene, and by 1 order of magnitude in fluorogenated graphene.
Rheological weakening due to phase mixing in olivine + orthopyroxene aggregates
NASA Astrophysics Data System (ADS)
Kohlstedt, D. L.; Tasaka, M.; Zimmerman, M. E.
2016-12-01
To understand the processes involved in rheological weakening due to phase mixing, we conducted torsion experiments on samples composed of iron-rich olivine + orthopyroxene. Samples with volume fractions of pyroxene of fpx = 0.1, 0.3, and 0.4 were deformed in torsion at a temperature of 1200°C and a confining pressure of 300 MPa using a gas-medium apparatus. The value of the stress exponent, n, decreases with increasing strain, γ, with the rate of decrease depending on fpx. In samples with larger amounts of pyroxene, fpx = 0.3 and 0.4, n decreases from n = 3.5 at lower strains of 1 ≤ γ ≤ 3 to n = 1.7 at higher strains of 24 ≤ γ ≤ 25. In contrast, for the sample with fpx = 0.1, n = 3.5 at lower strain decreases only to n = 3.0 at higher strains. In samples with larger fpx, the value of p changes from p = 1 at lower strains to p = 3 at higher strains. Furthermore, Hansen et al. (2012) observed that n = 4.1 and p = 0.7 in samples without pyroxene (fpx = 0) regardless of strain. For samples with larger fpx, these values of n and p indicate that the deformation mechanism changes with strain, whereas for samples with smaller fpx no change in mechanism occurs. The microstructures in our samples with larger amounts of pyroxene provide insight into the change in deformation mechanism identified from the experimental results. First, elongated olivine and pyroxene grains align sub-parallel to the shear direction with a strong crystallographic preferred orientation (CPO) in samples deformed to lower strains for which n = 3.5. Second, mixtures of small, rounded grains of both phases, with a nearly random CPO, develop in samples deformed to higher strains that exhibit a smaller stress exponent and strain weakening. The microstructural development forming well-mixed fine-grained olivine-pyroxene aggregates can be explained by the diffusivity difference between Si, Me (= Fe or Mg), and O, such that transport of MeO is significantly faster than that of SiO2. These mechanical and associated microstructural properties provide important constraints for understanding rheological weakening and strain localization in upper mantle rocks.
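For readers unfamiliar with the exponents quoted above, n and p are usually defined through a power-law flow law of the following generic form (our notation, not the authors'); n near 3.5 with weak grain-size dependence is characteristic of dislocation creep, while n near 1.7 with p near 3 points to grain-size-sensitive, diffusion-dominated creep.

```latex
% Generic power-law flow law assumed when quoting a stress exponent n and a
% grain-size exponent p (A and Q are material constants, d is grain size):
\dot{\gamma} = A \, \frac{\sigma^{n}}{d^{p}} \exp\!\left(-\frac{Q}{RT}\right)
```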
Kuroe, Kazuto; Rosas, Antonio; Molleson, Theya
2004-04-01
The aim of this study was to analyse the effects of cranial base orientation on the morphology of the craniofacial system in human populations. Three geographically distant populations from Europe (72), Africa (48) and Asia (24) were chosen. Five angular and two linear variables from the cranial base component and six angular and six linear variables from the facial component based on two reference lines of the vertical posterior maxillary and Frankfort horizontal planes were measured. The European sample presented dolichofacial individuals with a larger face height and a smaller face depth derived from a raised cranial base and facial cranium orientation which tended to be similar to the Asian sample. The African sample presented brachyfacial individuals with a reduced face height and a larger face depth as a result of a lowered cranial base and facial cranium orientation. The Asian sample presented dolichofacial individuals with a larger face height and depth due to a raised cranial base and facial cranium orientation. The findings of this study suggest that cranial base orientation and posterior cranial base length appear to be valid discriminating factors between different human populations.
Downlinks for DBS - Design and engineering considerations
NASA Astrophysics Data System (ADS)
Blecker, M.; Martin, E. R.
1985-01-01
The subsystem interrelationships and design parameter choice procedures for a DBS downlink design are discussed from a business decisions point of view. The image quality is determined by customer satisfaction, which is translated to a required carrier/noise (C/N) ratio. The C/N ratio defines acceptable levels of signal fading, a subjective value which is modified by the demographics of the service area. Increasing the satellite on-board transmitting power to meet acceptable broadcast reliability places burdens on the start-up capitalization of the business. Larger receiving antennas in rural areas ameliorate some of the power requirements. The dish size, however, affects the labor costs of installation, but must be kept small enough to be used in heavily populated areas. The satellites must be built, as far as is possible, from off-the-shelf components to keep costs down. Design selections for a sample complete system are listed.
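The trade-offs sketched above (transmit power, dish size, fade margin) are typically iterated through a downlink budget of the following generic form, written here in decibels; the symbols are standard textbook quantities, not values taken from the article.

```latex
% Generic satellite downlink budget (dB): EIRP and the receive figure of merit G/T
% raise C/N; free-space loss, rain fade margin, Boltzmann's constant k and the
% noise bandwidth B lower it.
\left(\frac{C}{N}\right)_{\mathrm{dB}}
  = \mathrm{EIRP} + \left(\frac{G}{T}\right)_{\mathrm{dB/K}}
    - L_{\mathrm{fs}} - L_{\mathrm{rain}} - 10\log_{10}k - 10\log_{10}B
```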
Wade, Rachael; Cartwright, Colleen; Shaw, Kelly
2015-06-01
This paper aims to report carers' perceptions of the impact of home telehealth on the provision of care and the sustainability of home telehealth use. It reports on a sample of 15 carers who were involved in the telehealth arm of a larger controlled trial. Carers primarily believed that telehealth helped to provide better care. None of the carers had organised, or planned to organise, ongoing telehealth monitoring beyond the study. The main reason given for non-sustained usage was the belief that the person they cared for no longer required, or would no longer benefit from, the monitoring. As the person being cared for was a frail older person with multiple chronic diseases and a history of recent hospitalisation, the non-sustained usage of home telehealth by carers raises questions about what is needed to ensure sustainability of use; this requires further investigation. © 2014 AJA Inc.
The outcome of tactile touch on stress parameters in intensive care: a randomized controlled trial.
Henricson, Maria; Ersson, Anders; Määttä, Sylvia; Segesten, Kerstin; Berglund, Anna-Lena
2008-11-01
The study aimed to investigate the effects of a five-day tactile touch intervention in order to find new and unconventional measures to moderate the detrimental influence of patients' stressors during intensive care. The hypothesis was that tactile touch would decrease stress indicators such as anxiety, glucose metabolism, blood pressure, heart rate and requirements for sedative drugs and noradrenalin. A randomized controlled trial was undertaken with 44 patients, who were assigned either to tactile touch or standard treatment (a rest hour). Observations of the stress indicators were made before, during and after the intervention or standard treatment. The study showed that tactile touch led to significantly lower levels of anxiety. The circulatory parameters suggested increased circulatory stability, indicated by a reduction in noradrenalin requirement. The results need to be further validated through studies with larger sample sizes.
On the efficacy of a computer-based program to teach visual Braille reading.
Scheithauer, Mindy C; Tiger, Jeffrey H; Miller, Sarah J
2013-01-01
Scheithauer and Tiger (2012) created an efficient computerized program that taught 4 sighted college students to select text letters when presented with visual depictions of Braille alphabetic characters and resulted in the emergence of some braille reading. The current study extended these results to a larger sample (n = 81) and compared the efficacy and efficiency of the instructional program using 2 different response modalities. One variation of the program required a response in a multiple-choice format, and the other variation required a keyed response. Both instructional programs resulted in increased braille letter identification and braille reading. These skills were maintained at a follow-up session 7 to 14 days later. The mean time needed to complete the program was 22.8 min across participants. Implications of these results for future research, as well as practical implications for teaching the braille alphabet, are discussed. © Society for the Experimental Analysis of Behavior.
Genotype Imputation with Millions of Reference Samples
Browning, Brian L.; Browning, Sharon R.
2016-01-01
We present a genotype imputation method that scales to millions of reference samples. The imputation method, based on the Li and Stephens model and implemented in Beagle v.4.1, is parallelized and memory efficient, making it well suited to multi-core computer processors. It achieves fast, accurate, and memory-efficient genotype imputation by restricting the probability model to markers that are genotyped in the target samples and by performing linear interpolation to impute ungenotyped variants. We compare Beagle v.4.1 with Impute2 and Minimac3 by using 1000 Genomes Project data, UK10K Project data, and simulated data. All three methods have similar accuracy but different memory requirements and different computation times. When imputing 10 Mb of sequence data from 50,000 reference samples, Beagle’s throughput was more than 100× greater than Impute2’s throughput on our computer servers. When imputing 10 Mb of sequence data from 200,000 reference samples in VCF format, Minimac3 consumed 26× more memory per computational thread and 15× more CPU time than Beagle. We demonstrate that Beagle v.4.1 scales to much larger reference panels by performing imputation from a simulated reference panel having 5 million samples and a mean marker density of one marker per four base pairs. PMID:26748515
Whale sharks target dense prey patches of sergestid shrimp off Tanzania
Rohner, Christoph A.; Armstrong, Amelia J.; Pierce, Simon J.; Prebble, Clare E. M.; Cagua, E. Fernando; Cochran, Jesse E. M.; Berumen, Michael L.; Richardson, Anthony J.
2015-01-01
Large planktivores require high-density prey patches to make feeding energetically viable. This is a major challenge for species living in tropical and subtropical seas, such as whale sharks Rhincodon typus. Here, we characterize zooplankton biomass, size structure and taxonomic composition from whale shark feeding events and background samples at Mafia Island, Tanzania. The majority of whale sharks were feeding (73%, 380 of 524 observations), with the most common behaviour being active surface feeding (87%). We used 20 samples collected from immediately adjacent to feeding sharks and an additional 202 background samples for comparison to show that plankton biomass was ∼10 times higher in patches where whale sharks were feeding (25 vs. 2.6 mg m−3). Taxonomic analyses of samples showed that the large sergestid Lucifer hanseni (∼10 mm) dominated while sharks were feeding, accounting for ∼50% of identified items, while copepods (<2 mm) dominated background samples. The size structure was skewed towards larger animals representative of L. hanseni in feeding samples. Thus, whale sharks at Mafia Island target patches of dense, large zooplankton dominated by sergestids. Large planktivores, such as whale sharks, which generally inhabit warm oligotrophic waters, aggregate in areas where they can feed on dense prey to obtain sufficient energy. PMID:25814777
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amblard, A.; Riguccini, L.; Temi, P.
We compute the properties of a sample of 221 local, early-type galaxies with a spectral energy distribution (SED) modeling software, CIGALEMC. Concentrating on the star-forming (SF) activity and dust contents, we derive parameters such as the specific star formation rate (sSFR), the dust luminosity, dust mass, and temperature. In our sample, 52% is composed of elliptical (E) galaxies and 48% of lenticular (S0) galaxies. We find a larger proportion of S0 galaxies among galaxies with a large sSFR and large specific dust emission. The stronger activity of S0 galaxies is confirmed by larger dust masses. We investigate the relative proportion of active galactic nuclei (AGNs) and SF galaxies in our sample using spectroscopic Sloan Digital Sky Survey data and near-infrared selection techniques, and find a larger proportion of AGN-dominated galaxies in the S0 sample than the E one. This could corroborate a scenario where blue galaxies evolve into red ellipticals by passing through an S0 AGN active period while quenching its star formation. Finally, we find a good agreement comparing our estimates with color indicators.
NASA Astrophysics Data System (ADS)
Han, Qi-Gang; Yang, Wen-Ke; Zhu, Pin-Wen; Ban, Qing-Chu; Yan, Ni; Zhang, Qiang
2013-07-01
In order to increase the maximum cell pressure of the cubic high-pressure apparatus, we have developed a new tungsten carbide cubic anvil structure (the tapered cubic anvil), based on the principles of massive support and lateral support. Our results indicated that the tapered cubic anvil has several advantages. First, the tapered cubic anvil raises the pressure transfer rate to above 36.37%, compared with the conventional anvil. Second, the failure-crack rate decreases by about 11.20% after modification of the conventional anvil. Third, the limit of static high pressure in the sample cell can be extended to 13 GPa, which increases the maximum cell pressure by about 73.3% over that of the conventional anvil. Fourth, the volume of the sample cell compressed by tapered cubic anvils can reach 14.13 mm3 (3 mm diameter × 2 mm long), which is three and six orders of magnitude larger than those of the double-stage apparatus and the diamond anvil cell, respectively. This work represents a relatively simple method for achieving higher pressures and a larger sample cell.
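As a quick consistency check (our arithmetic, not a statement from the paper), the quoted cell volume follows directly from the cylindrical cell dimensions:

```latex
V = \pi r^{2} h = \pi \,(1.5\ \mathrm{mm})^{2} \times (2\ \mathrm{mm}) \approx 14.1\ \mathrm{mm}^{3}
```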
NASA Technical Reports Server (NTRS)
Panzarella, Charles
2004-01-01
As humans prepare for the exploration of our solar system, there is a growing need for miniaturized medical and environmental diagnostic devices for use on spacecrafts, especially during long-duration space missions where size and power requirements are critical. In recent years, the biochip (or Lab-on-a-Chip) has emerged as a technology that might be able to satisfy this need. In generic terms, a biochip is a miniaturized microfluidic device analogous to the electronic microchip that ushered in the digital age. It consists of tiny microfluidic channels, pumps and valves that transport small amounts of sample fluids to biosensors that can perform a variety of tests on those fluids in near real time. It has the obvious advantages of being small, lightweight, requiring less sample fluids and reagents and being more sensitive and efficient than larger devices currently in use. Some of the desired space-based applications would be to provide smaller, more robust devices for analyzing blood, saliva and urine and for testing water and food supplies for the presence of harmful contaminants and microorganisms. Our group has undertaken the goal of adapting as well as improving upon current biochip technology for use in long-duration microgravity environments.
Semiautomated TaqMan PCR screening of GMO labelled samples for (unauthorised) GMOs.
Scholtens, Ingrid M J; Molenaar, Bonnie; van Hoof, Richard A; Zaaijer, Stephanie; Prins, Theo W; Kok, Esther J
2017-06-01
In most countries, systems are in place to analyse food products for the potential presence of genetically modified organisms (GMOs), to enforce labelling requirements and to screen for the potential presence of unauthorised GMOs. With the growing number of GMOs on the world market, a larger diversity of methods is required for informative analyses. In this paper, the specificity of an extended screening set consisting of 32 screening methods to identify different crop species (endogenous genes) and GMO elements was verified against 59 different GMO reference materials. In addition, a cost- and time-efficient strategy for DNA isolation, screening and identification is presented. A module for semiautomated analysis of the screening results and planning of subsequent event-specific tests for identification has been developed. The Excel-based module contains information on the experimentally verified specificity of the element methods and of the EU authorisation status of the GMO events. If a detected GMO element cannot be explained by any of the events as identified in the same sample, this may indicate the presence of an unknown unauthorised GMO that may not yet have been assessed for its safety for humans, animals or the environment.
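The decision step that the module automates can be sketched as a simple set comparison: any detected screening element that is not covered by an event already identified in the sample flags a possible unauthorised GMO. The element-to-event table below is a made-up illustration, not the module's actual data.

```python
# Hedged sketch of the screening/identification consistency check.
elements_of_events = {              # element -> events known to contain it (illustrative)
    "P-35S":  {"MON810", "GT73"},
    "T-nos":  {"MON810"},
    "bar":    {"T25"},
}
detected_elements = {"P-35S", "T-nos", "bar"}
identified_events = {"MON810"}      # events confirmed by event-specific PCR

unexplained = [el for el in detected_elements
               if not (elements_of_events.get(el, set()) & identified_events)]
print("elements not explained by identified events:", unexplained)
# A non-empty list would trigger follow-up for a possible unauthorised GMO.
```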
NASA Technical Reports Server (NTRS)
Itoh, T.; Kubo, H.; Honda, H.; Tominaga, T.; Makide, Y.; Yakohata, A.; Sakai, H.
1985-01-01
Measurements of concentrations of chlorofluoromethanes (CFMs), carbon dioxide and carbon isotope ratio in stratospheric and tropospheric air by grab-sampling systems are reported. The balloon-borne grab-sampling system has been launched from Sanriku Balloon Center three times since 1981. It consists of: (1) six sampling cylinders, (2) eight motor-driven valves, (3) control and monitor circuits, and (4) a pressurized housing. Particular consideration is paid to the problem of contamination. Strict requirements are placed on the choice of materials and components, construction methods, cleaning techniques, vacuum integrity, and sampling procedures. An aluminum pressurized housing and a 4-m long inlet line are employed to prevent the sampled air from contamination by outgassing of sampling and control devices. The sampling is performed during the descent of the system. Vertical profiles of mixing ratios of CF2Cl2, CFCl3 and CH4 are given. Mixing ratios of CF2Cl2 and CFCl3 in the stratosphere do not show a discernible effect of the increase in the ground-level background, and decrease with altitude. The rate of decrease of CFCl3 is larger than that of CF2Cl2. The CH4 mixing ratio, on the other hand, shows diffusive equilibrium, as the photodissociation cross section of CH4 is small and the concentrations of the OH radical and O(1D) are low.
Wilson, Jordan L.; Schumacher, John G.; Burken, Joel G.
2014-01-01
In the past several years, the Missouri Department of Natural Resources has closed two popular public beaches, Grand Glaize Beach and Public Beach 1, at Lake of the Ozarks State Park in Osage Beach, Missouri, when monitoring results exceeded the established Escherichia coli (E. coli) standard. As a result of the beach closures, the U.S. Geological Survey and Missouri University of Science and Technology, in cooperation with the Missouri Department of Natural Resources, led an investigation into the occurrence and origins of E. coli at Grand Glaize Beach and Public Beach 1. The study included the collection of more than 1,300 water, sediment, and fecal source samples between August 2011 and February 2013 from the two beaches and vicinity. Spatial and temporal patterns of E. coli concentrations in water and sediments combined with measurements of environmental variables, beach-use patterns, and Missouri Department of Natural Resources water-tracing results were used to identify possible sources of E. coli contamination at the two beaches and to corroborate microbial source tracking (MST) sampling efforts. Results from a 2011 reconnaissance sampling indicate that water samples from Grand Glaize Beach cove contained significantly larger E. coli concentrations than adjacent coves and were largest at sites at the upper end of Grand Glaize Beach cove, indicating a probable local source of E. coli contamination within the upper end of the cove. Results from an intensive sampling effort during 2012 indicated that E. coli concentrations in water samples at Grand Glaize Beach cove were significantly larger in ankle-deep water than waist-deep water, trended downward during the recreational season, significantly increased with an increase in the total number of bathers at the beach, and were largest during the middle of the day. Concentrations of E. coli in nearshore sediment (sediment near the shoreline) at Grand Glaize Beach were significantly larger in foreshore samples (samples collected above the shoreline) than in samples collected in ankle-deep water below the shoreline, significantly larger in the left and middle areas of the beach than the right area, and substantially larger than those reported in similar studies at E. coli-contaminated beaches on Lake Erie in Ohio. Concentrations of E. coli in the water column also were significantly larger after resuspension of sediments. Results of MST indicate a predominance of waterfowl-associated markers in nearshore sediments at Grand Glaize Beach, consistent with frequent observations of goose and vulture fecal matter in sediment, especially on the left and middle areas of the beach. The combination of spatial and temporal sampling and MST indicates that an important source of E. coli contamination at Grand Glaize Beach during 2012 was E. coli released into the water column by bathers resuspending E. coli-contaminated sediments, especially during high-use days early in the recreational season.
NASA Technical Reports Server (NTRS)
Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.
2012-01-01
The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100m, 250m, 500m, and 1km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited, because it reduced forest AGB sampling errors by 15-38%. Furthermore, spaceborne global scale accuracy requirements were achieved. At least 80% of the grid cells at 100m, 250m, 500m, and 1km grid levels met AGB density accuracy requirements using a combination of passive optical and SAR along with machine learning methods to predict vegetation structure metrics for forested areas without LiDAR samples. Finally, using either passive optical or SAR, accuracy requirements were met at the 500m and 250m grid level, respectively.
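A minimal sketch of the fusion step, assuming synthetic per-cell SAR and optical predictors: a regressor is calibrated on the minority of grid cells that contain LiDAR-derived AGB and then used to predict AGB for cells without LiDAR samples. The variable names, model choice and numbers are illustrative, not the study's configuration.

```python
# Hedged sketch of LiDAR + SAR/optical data fusion for gridded AGB mapping.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n_cells = 2000
sar = rng.normal(size=n_cells)            # SAR backscatter summary per grid cell
optical = rng.normal(size=n_cells)        # Landsat-derived index per grid cell
agb = 80 + 25 * sar + 15 * optical + rng.normal(0, 10, n_cells)   # toy AGB (Mg/ha)

has_lidar = rng.random(n_cells) < 0.2     # only ~20% of cells hold LiDAR samples
X = np.column_stack([sar, optical])

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[has_lidar], agb[has_lidar])   # calibrate on LiDAR-sampled cells
agb_pred = model.predict(X[~has_lidar])   # extend AGB to unsampled cells
print("predicted AGB range (Mg/ha):", agb_pred.min().round(1), "-", agb_pred.max().round(1))
```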
Schultz, Christopher J.; Carey, Lawrence D.; Schultz, Elise V.; Blakeslee, Richard J.
2017-01-01
Thirty-nine thunderstorms are examined using multiple-Doppler, polarimetric and total lightning observations to understand the role of mixed phase kinematics and microphysics in the development of lightning jumps. This sample size is larger than those of previous studies on this topic. The principal result of this study is that lightning jumps are a result of mixed phase updraft intensification. Larger increases in intense updraft volume (≥ 10 m s−1) and larger changes in peak updraft speed are observed prior to lightning jump occurrence when compared to other non-jump increases in total flash rate. Wilcoxon-Mann-Whitney Rank Sum testing yields p-values ≤0.05, indicating statistical independence between lightning jump and non-jump distributions for these two parameters. Similar changes in mixed phase graupel mass magnitude are observed prior to lightning jumps and non-jump increases in total flash rate. The p-value for graupel mass change is p=0.096, so jump and non-jump distributions for graupel mass change are not found statistically independent using the p=0.05 significance level. Timing of updraft volume, speed and graupel mass increases are found to be 4 to 13 minutes in advance of lightning jump occurrence. Also, severe storms without lightning jumps lack robust mixed phase updrafts, demonstrating that mixed phase updrafts are not always a requirement for severe weather occurrence. Therefore, the results of this study show that lightning jump occurrences are coincident with larger increases in intense mixed phase updraft volume and peak updraft speed than smaller non-jump increases in total flash rate. PMID:29158622
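The rank-sum comparison described above can be reproduced in outline as follows; the two synthetic arrays stand in for the jump and non-jump distributions of updraft-volume change and are not the study's data.

```python
# Hedged sketch of the Wilcoxon-Mann-Whitney comparison of jump vs non-jump cases.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
jump_dV = rng.normal(40, 15, 39)        # change in intense updraft volume, jump cases (toy units)
nonjump_dV = rng.normal(20, 15, 60)     # same quantity for non-jump flash-rate increases

stat, p = mannwhitneyu(jump_dV, nonjump_dV, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.3g}")   # p <= 0.05 would indicate distinct distributions
```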
Schultz, Christopher J; Carey, Lawrence D; Schultz, Elise V; Blakeslee, Richard J
2017-02-01
Thirty-nine thunderstorms are examined using multiple-Doppler, polarimetric and total lightning observations to understand the role of mixed phase kinematics and microphysics in the development of lightning jumps. This sample size is larger than those of previous studies on this topic. The principal result of this study is that lightning jumps are a result of mixed phase updraft intensification. Larger increases in intense updraft volume (≥ 10 m s−1) and larger changes in peak updraft speed are observed prior to lightning jump occurrence when compared to other non-jump increases in total flash rate. Wilcoxon-Mann-Whitney Rank Sum testing yields p-values ≤0.05, indicating statistical independence between lightning jump and non-jump distributions for these two parameters. Similar changes in mixed phase graupel mass magnitude are observed prior to lightning jumps and non-jump increases in total flash rate. The p-value for graupel mass change is p=0.096, so jump and non-jump distributions for graupel mass change are not found statistically independent using the p=0.05 significance level. Timing of updraft volume, speed and graupel mass increases are found to be 4 to 13 minutes in advance of lightning jump occurrence. Also, severe storms without lightning jumps lack robust mixed phase updrafts, demonstrating that mixed phase updrafts are not always a requirement for severe weather occurrence. Therefore, the results of this study show that lightning jump occurrences are coincident with larger increases in intense mixed phase updraft volume and peak updraft speed than smaller non-jump increases in total flash rate.
NASA Technical Reports Server (NTRS)
Schultz, Christopher J.; Carey, Lawrence D.; Schultz, Elise V.; Blakeslee, Richard J.
2017-01-01
Thirty-nine thunderstorms are examined using multiple-Doppler, polarimetric and total lightning observations to understand the role of mixed phase kinematics and microphysics in the development of lightning jumps. This sample size is larger than those of previous studies on this topic. The principal result of this study is that lightning jumps are a result of mixed phase updraft intensification. Larger increases in intense updraft volume (greater than or equal to 10 m s−1) and larger changes in peak updraft speed are observed prior to lightning jump occurrence when compared to other non-jump increases in total flash rate. Wilcoxon-Mann-Whitney Rank Sum testing yields p-values ≤ 0.05, indicating statistical independence between lightning jump and non-jump distributions for these two parameters. Similar changes in mixed phase graupel mass magnitude are observed prior to lightning jumps and non-jump increases in total flash rate. The p-value for graupel mass change is p=0.096, so jump and non-jump distributions for graupel mass change are not found statistically independent using the p=0.05 significance level. Timing of updraft volume, speed and graupel mass increases are found to be 4 to 13 minutes in advance of lightning jump occurrence. Also, severe storms without lightning jumps lack robust mixed phase updrafts, demonstrating that mixed phase updrafts are not always a requirement for severe weather occurrence. Therefore, the results of this study show that lightning jump occurrences are coincident with larger increases in intense mixed phase updraft volume and peak updraft speed than smaller non-jump increases in total flash rate.
Feedback Augmented Sub-Ranging (FASR) Quantizer
NASA Technical Reports Server (NTRS)
Guilligan, Gerard
2012-01-01
This innovation is intended to reduce the size, power, and complexity of pipeline analog-to-digital converters (ADCs) that require high resolution and speed along with low power. Digitizers are important components in any application where analog signals (such as light, sound, temperature, etc.) need to be digitally processed. The innovation implements amplification of a sampled residual voltage in a switched capacitor amplifier stage that does not depend on charge redistribution. The result is less sensitive to capacitor mismatches that cause gain errors, which are the main limitation of such amplifiers in pipeline ADCs. The residual errors due to mismatch are reduced by at least a factor of 16, which is equivalent to at least 4 bits of improvement. The settling time is also faster because of a higher feedback factor. In traditional switched capacitor residue amplifiers, closed-loop amplification of a sampled and held residue signal is achieved by redistributing sampled charge onto a feedback capacitor around a high-gain transconductance amplifier. The residual charge that was sampled during the acquisition or sampling phase is stored on two or more capacitors, often equal in value or integral multiples of each other. During the hold or amplification phase, all of the charge is redistributed onto one capacitor in the feedback loop of the amplifier to produce an amplified voltage. The key error source is the non-ideal ratios of feedback and input capacitors caused by manufacturing tolerances, called mismatches. The mismatches cause non-ideal closed-loop gain, leading to higher differential non-linearity. Traditional solutions to the mismatch errors are to use larger capacitor values (than dictated by thermal noise requirements) and/or complex calibration schemes, both of which increase the die size and power dissipation. The key features of this innovation are (1) the elimination of the need for charge redistribution to achieve an accurate closed-loop gain of two, (2) a higher feedback factor in the amplifier stage giving a higher closed-loop bandwidth compared to the prior art, and (3) reduced requirement for calibration. The accuracy of the new amplifier is mainly limited by the sampling network's parasitic capacitances, which should be minimized in relation to the sampling capacitors.
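A back-of-envelope sketch of why capacitor mismatch limits a conventional charge-redistribution gain-of-2 stage, and what the quoted 16x (about 4-bit) reduction implies; the mismatch value is illustrative and this is not the FASR circuit itself.

```python
# Back-of-envelope sketch (not the FASR design itself): in a conventional
# charge-redistribution stage the closed-loop gain is set by a capacitor
# ratio, so a fractional mismatch shows up directly as a gain error.
import math

C_f = 1.0          # feedback capacitor (normalized)
mismatch = 0.001   # 0.1% capacitor mismatch, an illustrative value
C_s = C_f * (1.0 + mismatch)   # sampling capacitor with mismatch

ideal_gain = 2.0
actual_gain = 1.0 + C_s / C_f          # nominal charge-redistribution gain of 2
gain_error = abs(actual_gain - ideal_gain) / ideal_gain

print(f"relative gain error from mismatch: {gain_error:.2e}")
# A 16x reduction of this residual error corresponds to log2(16) = 4 bits,
# which is how the abstract expresses the improvement.
print(f"bits of improvement for a 16x reduction: {math.log2(16):.0f}")
```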
Deployable Aeroshell Flexible Thermal Protection System Testing
NASA Technical Reports Server (NTRS)
Hughes, Stephen J.; Ware, Joanne S.; DelCorso, Joseph A.; Lugo, Rafael A.
2009-01-01
Deployable aeroshells offer the promise of achieving larger aeroshell surface areas for entry vehicles than otherwise attainable without deployment. With the larger surface area comes the ability to decelerate high-mass entry vehicles at relatively low ballistic coefficients. However, for an aeroshell to perform even at the low ballistic coefficients attainable with deployable aeroshells, a flexible thermal protection system (TPS) is required that is capable of surviving reasonably high heat flux and durable enough to survive the rigors of construction handling, high density packing, deployment, aerodynamic loading and aerothermal heating. The Program for the Advancement of Inflatable Decelerators for Atmospheric Entry (PAIDAE) is tasked with developing the technologies required to increase the technology readiness level (TRL) of inflatable deployable aeroshells, and one of several technologies PAIDAE is developing for use on inflatable aeroshells is flexible TPS. Several flexible TPS layups were designed, based on commercially available materials, and tested in NASA Langley Research Center's 8 Foot High Temperature Tunnel (8ft HTT). The TPS layups were designed for, and tested at, three different conditions that are representative of conditions seen in entry simulation analyses of inflatable aeroshell concepts. Two conditions were produced in a single run with a sting-mounted dual wedge test fixture. The dual wedge test fixture had one row of sample mounting locations (forward) at about half the running length of the top surface of the wedge. At about two thirds of the running length of the wedge, a second test surface drafted up at five degrees relative to the first test surface established the remaining running length of the wedge test fixture. A second row of sample mounting locations (aft) was positioned in the middle of the running length of the second test surface. Once the desired flow conditions were established in the test section, the dual wedge test fixture, oriented at 5 degrees angle of attack down, was injected into the flow. In this configuration the aft sample mounting location was subjected to roughly twice the heat flux and surface pressure of the forward mounting location. The tunnel was run at two different conditions for the test series: 1) 'Low Pressure', and 2) 'High Pressure'. At 'Low Pressure' conditions the TPS layups were tested at 6 W/cm2 and 11 W/cm2, while at 'High Pressure' conditions the TPS layups were tested at 11 W/cm2 and 20 W/cm2. This paper details the test configuration of the TPS samples in the 8ft HTT, the sample holder assembly, TPS sample layup construction, sample instrumentation, results from this testing, as well as lessons learned.
Measuring ammonia concentrations and emissions from agricultural land and liquid surfaces: a review.
Shah, Sanjay B; Westerman, Philip W; Arogo, Jactone
2006-07-01
Aerial ammonia concentrations (Cg) are measured using acid scrubbers, filter packs, denuders, or optical methods. Using Cg and wind speed or airflow rate, ammonia emission rate or flux can be directly estimated using enclosures or micrometeorological methods. Using nitrogen (N) recovery is not recommended, mainly because the different gaseous N components cannot be separated. Although low cost and replicable, chambers modify environmental conditions and are suitable only for comparing treatments. Wind tunnels do not modify environmental conditions as much as chambers, but they may not be appropriate for determining ammonia fluxes; however, they can be used to compare emissions and test models. Larger wind tunnels that also simulate natural wind profiles may be more useful for comparing treatments than micrometeorological methods because the latter require larger plots and are, thus, difficult to replicate. For determining absolute ammonia flux, the micrometeorological methods are the most suitable because they are nonintrusive. For use with micrometeorological methods, both the passive denuders and optical methods give comparable accuracies, although the latter give real-time Cg but at a higher cost. The passive denuder is wind weighted and also costs less than forced-air Cg measurement methods, but it requires calibration. When ammonia contamination during sample preparation and handling is a concern and separating the gas-phase ammonia and aerosol ammonium is not required, the scrubber is preferred over the passive denuder. The photothermal interferometer, because of its low detection limit and robustness, may hold potential for use in agriculture, but it requires evaluation. With its simpler theoretical basis and fewer restrictions, the integrated horizontal flux (IHF) method is preferable over other micrometeorological methods, particularly for lagoons, where berms and land-lagoon boundaries modify wind flow and flux gradients. With uniform wind flow, the ZINST method requiring measurement at one predetermined height may perform comparably to the IHF method but at a lower cost.
A real-space approach to the X-ray phase problem
NASA Astrophysics Data System (ADS)
Liu, Xiangan
Over the past few decades, the phase problem of X-ray crystallography has been explored in reciprocal space in the so-called direct methods. Here we investigate the problem using a real-space approach that bypasses the laborious procedure of frequent Fourier synthesis and peak picking. Starting from a completely random structure, we move the atoms around in real space to minimize a cost function. A Monte Carlo method named simulated annealing (SA) is employed to search the global minimum of the cost function which could be constructed in either real space or reciprocal space. In the hybrid minimal principle, we combine the dual space costs together. One part of the cost function monitors the probability distribution of the phase triplets, while the other is a real space cost function which represents the discrepancy between measured and calculated intensities. Compared to the single space cost functions, the dual space cost function has a greatly improved landscape and therefore could prevent the system from being trapped in metastable states. Thus, the structures of large molecules such as virginiamycin (C43H49N7O10 · 3CH3OH), isoleucinomycin (C60H102N6O18) and hexadecaisoleucinomycin (HEXIL) (C80H136N8O24) can now be solved, whereas it would not be possible using the single cost function. When a molecule gets larger, the configurational space becomes larger, and the requirement of CPU time increases exponentially. The method of improved Monte Carlo sampling has demonstrated its capability to solve large molecular structures. The atoms are encouraged to sample the high density regions in space determined by an approximate density map which in turn is updated and modified by averaging and Fourier synthesis. This type of biased sampling has led to considerable reduction of the configurational space. It greatly improves the algorithm compared to the previous uniform sampling. Hence, for instance, 90% of computer run time could be cut in solving the complex structure of isoleucinomycin. Successful trial calculations include larger molecular structures such as HEXIL and a collagen-like peptide (PPG). Moving chemical fragments is proposed to reduce the degrees of freedom. Furthermore, stereochemical parameters are considered for geometric constraints and for a cost function related to chemical energy.
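A minimal, generic sketch of the simulated-annealing loop with a weighted dual cost that the abstract describes. The two toy cost terms stand in for the real-space intensity discrepancy and the reciprocal-space triplet-phase term; they are placeholders and not the actual crystallographic cost functions used in the work.

```python
# Generic simulated-annealing skeleton minimizing a weighted dual cost.
# Both cost terms are toy placeholders, not the real crystallographic costs.
import math
import random

def cost_real(x):        # placeholder for the intensity-discrepancy term
    return sum((xi - 1.0) ** 2 for xi in x)

def cost_reciprocal(x):  # placeholder for the phase-triplet term
    return sum(math.sin(3.0 * xi) ** 2 for xi in x)

def dual_cost(x, w=0.5):
    return w * cost_real(x) + (1.0 - w) * cost_reciprocal(x)

random.seed(0)
x = [random.uniform(-5, 5) for _ in range(10)]   # random "structure"
temperature, cooling = 1.0, 0.999
current = dual_cost(x)

for step in range(20000):
    i = random.randrange(len(x))
    trial = list(x)
    trial[i] += random.gauss(0.0, 0.1)           # move one "atom"
    candidate = dual_cost(trial)
    # Metropolis criterion: always accept improvements, sometimes accept
    # uphill moves so the search can escape metastable states.
    if candidate < current or random.random() < math.exp((current - candidate) / temperature):
        x, current = trial, candidate
    temperature *= cooling

print(f"final dual-space cost: {current:.4f}")
```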
NASA Astrophysics Data System (ADS)
Zeeshan, M. A.; Esqué-de Los Ojos, D.; Castro-Hartmann, P.; Guerrero, M.; Nogués, J.; Suriñach, S.; Baró, M. D.; Nelson, B. J.; Pané, S.; Pellicer, E.; Sort, J.
2016-01-01
The effects of constrained sample dimensions on the mechanical behavior of crystalline materials have been extensively investigated. However, there is no clear understanding of these effects in nano-sized amorphous samples. Herein, nanoindentation together with finite element simulations are used to compare the properties of crystalline and glassy CoNi(Re)P electrodeposited nanowires (φ ~ 100 nm) with films (3 μm thick) of analogous composition and structure. The results reveal that amorphous nanowires exhibit a larger hardness, lower Young's modulus and higher plasticity index than glassy films. Conversely, the very large hardness and higher Young's modulus of crystalline nanowires are accompanied by a decrease in plasticity with respect to the homologous crystalline films. Remarkably, proper interpretation of the mechanical properties of the nanowires requires taking the curved geometry of the indented surface and sink-in effects into account. These findings are of high relevance for optimizing the performance of new, mechanically-robust, nanoscale materials for increasingly complex miniaturized devices. Electronic supplementary information (ESI) available: Additional details on experimental and analysis methods, additional results on crystalline CoNi(Re)P alloys and two movies to illustrate the stress distribution during deformation of the amorphous and crystalline nanowires. See DOI: 10.1039/c5nr04398k
The fluid dynamics of microjet explosions caused by extremely intense X-ray pulses
NASA Astrophysics Data System (ADS)
Stan, Claudiu; Laksmono, Hartawan; Sierra, Raymond; Milathianaki, Despina; Koglin, Jason; Messerschmidt, Marc; Williams, Garth; Demirci, Hasan; Botha, Sabine; Nass, Karol; Stone, Howard; Schlichting, Ilme; Shoeman, Robert; Boutet, Sebastien
2014-11-01
Femtosecond X-ray scattering experiments at free-electron laser facilities typically require liquid jet delivery methods to bring samples to the region of interaction with X-rays. We have optically imaged the damage process in water microjets due to intense hard X-ray pulses at the Linac Coherent Light Source (LCLS), using time-resolved imaging techniques to record movies at rates up to half a billion frames per second. For pulse energies larger than a few percent of the maximum pulse energy available at LCLS, the X-rays deposit energies much larger than the latent heat of vaporization in water, and induce a phase explosion that opens a gap in the jet. The LCLS pulses last a few tens of femtoseconds, but the full evolution of the broken jet is orders of magnitude slower - typically in the microsecond range - due to complex fluid dynamics processes triggered by the phase explosion. Although the explosion results in a complex sequence of phenomena, they lead to an approximately self-similar flow of the liquid in the jet.
Ultrasonic imaging of textured alumina
NASA Technical Reports Server (NTRS)
Stang, David B.; Salem, Jonathan A.; Generazio, Edward R.
1989-01-01
Ultrasonic images representing the bulk attenuation and velocity of a set of alumina samples were obtained by a pulse-echo contact scanning technique. The samples were taken from larger bodies that were chemically similar but were processed by extrusion or isostatic processing. The crack growth resistance and fracture toughness of the larger bodies were found to vary with processing method and test orientation. The results presented here demonstrate that differences in texture that contribute to variations in structural performance can be revealed by analytic ultrasonic techniques.
NASA Astrophysics Data System (ADS)
Banerji, Manda; McMahon, Richard G.; Hewett, Paul C.; Alaghband-Zadeh, Susannah; Gonzalez-Solares, Eduardo; Venemans, Bram P.; Hawthorn, Melanie J.
2012-12-01
We present a new sample of purely near-infrared-selected KVega < 16.5 [KAB < 18.4] extremely red [(J - K)Vega > 2.5] quasar candidates at z ˜ 2 from ≃900 deg2 of data in the UKIDSS Large Area Survey (LAS). Five of these are spectroscopically confirmed to be heavily reddened type 1 active galactic nuclei (AGN) with broad emission lines bringing our total sample of reddened quasars from the UKIDSS-LAS to 12 at z = 1.4-2.7. At these redshifts, Hα (6563 Å) is in the K band. However, the mean Hα equivalent width of the reddened quasars is only 10 per cent larger than that of the optically selected population and cannot explain the extreme colours. Instead, dust extinction of AV ˜ 2-6 mag is required to reproduce the continuum colours of our sources. This is comparable to the dust extinctions seen in submillimetre galaxies at similar redshifts. We argue that the AGN are likely being observed in a relatively short-lived breakout phase when they are expelling gas and dust following a massive starburst, subsequently turning into UV-luminous quasars. Some of our quasars show direct evidence for strong outflows (v ˜ 800-1000 km s-1) affecting the Hα line consistent with this scenario. We predict that a larger fraction of reddened quasar hosts are likely to be submillimetre bright compared to the UV-luminous quasar population. We use our sample to place new constraints on the fraction of obscured type 1 AGN likely to be missed in optical surveys. Taken at face value our findings suggest that the obscured fraction depends on quasar luminosity. The space density of obscured quasars is approximately five times that inferred for UV-bright quasars from the Sloan Digital Sky Survey (SDSS) luminosity function at Mi < -30 but seems to drop at lower luminosities even accounting for various sources of incompleteness in our sample. We find that at Mi ˜ -28 for example, this fraction is unlikely to be larger than ˜20 per cent although these fractions are highly uncertain at present due to the small size of our sample. A deeper K-band survey for highly obscured quasars is clearly needed to test this hypothesis fully and is now becoming possible with new sensitive all-sky infrared surveys such as the VISTA Hemisphere Survey and the Wide Infrared Survey Explorer (WISE) All Sky Survey.
Stenhouse, Gordon B.; Janz, David M.; Kapronczai, Luciene; Anne Erlenbach, Joy; Jansen, Heiko T.; Nelson, O. Lynne; Robbins, Charles T.; Boulanger, John
2017-01-01
Abstract Recognizing the potential value of steroid hormone measurements to augment non-invasive genetic sampling, we developed procedures based on enzyme-linked immunoassays to quantify reproductive steroid hormone concentrations in brown bear (Ursus arctos) hair. Then, using 94 hair samples collected from eight captive adult bears over a 2-year period, we evaluated (i) associations between hair concentrations of testosterone, progesterone, estradiol and cortisol; (ii) the effect of collecting by shaving vs. plucking; and (iii) the utility of reproductive hormone profiles to differentiate sex and reproductive state. Sample requirements (125 mg of guard hair) to assay all hormones exceeded amounts typically obtained by non-invasive sampling. Thus, broad application of this approach will require modification of non-invasive techniques to collect larger samples, use of mixed (guard and undercoat) hair samples and/or application of more sensitive laboratory procedures. Concentrations of hormones were highly correlated suggesting their sequestration in hair reflects underlying physiological processes. Marked changes in hair hormone levels during the quiescent phase of the hair cycle, coupled with the finding that progesterone concentrations, and their association with testosterone levels, differed markedly between plucked and shaved hair samples, suggests steroids sequestered in hair were likely derived from various sources, including skin. Changes in hair hormone concentrations over time, and in conjunction with key reproductive events, were similar to what has been reported concerning hormonal changes in the blood serum of brown bears. Thus, potential for the measurement of hair reproductive hormone levels to augment non-invasive genetic sampling appears compelling. Nonetheless, we are conducting additional validation studies on hair collected from free-ranging bears, representative of all sex, age and reproductive classes, to fully evaluate the utility of this approach for brown bear conservation and research. PMID:28580147
Environmental Validation of Legionella Control in a VHA Facility Water System.
Jinadatha, Chetan; Stock, Eileen M; Miller, Steve E; McCoy, William F
2018-03-01
OBJECTIVES We conducted this study to determine what sample volume, concentration, and limit of detection (LOD) are adequate for environmental validation of Legionella control. We also sought to determine whether the time required to obtain culture results can be reduced compared to the spread-plate culture method. We also assessed whether polymerase chain reaction (PCR) and in-field total heterotrophic aerobic bacteria (THAB) counts are reliable indicators of Legionella in water samples from buildings. DESIGN Comparative Legionella screening and diagnostics study for environmental validation of a healthcare building water system. SETTING Veterans Health Administration (VHA) facility water system in central Texas. METHODS We analyzed 50 water samples (26 hot, 24 cold) from 40 sinks and 10 showers using spread-plate cultures (International Standards Organization [ISO] 11731) on samples shipped overnight to the analytical lab. In-field, on-site cultures were obtained using the PVT (Phigenics Validation Test) culture dipslide-format sampler. A PCR assay for genus-level Legionella was performed on every sample. RESULTS No practical differences were observed regardless of the sample volume filtered. Larger sample volumes yielded more detections of Legionella. No statistically significant differences at the 1 colony-forming unit (CFU)/mL or 10 CFU/mL LOD were observed. Approximately 75% less time was required when cultures were started in the field. The PCR results provided an early warning, which was confirmed by spread-plate cultures. The THAB results did not correlate with Legionella status. CONCLUSIONS For environmental validation at this facility, we confirmed that (1) 100 mL sample volumes were adequate, (2) 10× concentrations were adequate, (3) 10 CFU/mL LOD was adequate, (4) in-field cultures reliably reduced time to get results by 75%, (5) PCR provided a reliable early warning, and (6) THAB was not predictive of Legionella results. Infect Control Hosp Epidemiol 2018;39:259-266.
7 CFR 201.52 - Noxious-weed seeds.
Code of Federal Regulations, 2013 CFR
2013-01-01
... the bulk examined for noxious-weed seeds need not be noted: 1/2-gram purity working sample, 16 or more seeds; 1-gram purity working sample, 23 or more seeds; 2-gram purity working sample or larger, 30 or...
Devices for SRF material characterization
Goudket, Philippe; Xiao, B.; Junginger, T.
2016-10-07
The surface resistance Rs of superconducting materials can be obtained by measuring the quality factor of an elliptical cavity excited in a transverse magnetic mode (TM010). The value obtained has however to be taken as averaged over the whole surface. A more convenient way to obtain Rs, especially of materials which are not yet technologically ready for cavity production, is to measure small samples instead. These can be easily manufactured at low cost, duplicated and placed in film deposition and surface analytical tools. A commonly used design for a device to measure Rs consists of a cylindrical cavity excited in a transverse electric (TE110) mode with the sample under test serving as one replaceable endplate. Such a cavity has two drawbacks. For reasonably small samples the resonant frequency will be larger than frequencies of interest concerning SRF application and it requires a reference sample of known Rs. In this article we review several devices which have been designed to overcome these limitations, reaching sub-nΩ resolution in some cases. Some of these devices also comprise a parameter space in frequency and temperature which is inaccessible to standard cavity tests, making them ideal tools to test theoretical surface resistance models.
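A minimal sketch of how an averaged surface resistance is backed out of a cavity quality-factor measurement, assuming the standard relation Rs = G/Q0 with G the geometry factor of the cavity; both numbers below are illustrative assumptions, not values from the article.

```python
# Minimal sketch: averaged surface resistance from an unloaded quality factor,
# assuming the standard relation Rs = G / Q0. G depends on the cavity shape;
# the value below is an illustrative assumption, not a number from the article.
G = 270.0          # geometry factor in ohms (typical order for elliptical cavities)
Q0 = 2.0e10        # measured unloaded quality factor (hypothetical)

Rs = G / Q0        # averaged surface resistance in ohms
print(f"Rs = {Rs * 1e9:.1f} nOhm")   # -> 13.5 nOhm for these numbers
```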
Özbek, Emel; Bongers, Ilja L; Lobbestael, Jill; van Nieuwenhuizen, Chijs
2015-12-01
This study investigated the relationship between acculturation and psychological problems in Turkish and Moroccan young adults living in the Netherlands. A sample of 131 healthy young adults aged between 18 and 24 years old, with a Turkish or Moroccan background was recruited using snowball sampling. Data on acculturation, internalizing and externalizing problems, beliefs about psychological problems, attributions of psychological problems and barriers to care were collected and analyzed using Latent Class Analysis and multinomial logistic regression. Three acculturation classes were identified in moderately to highly educated, healthy Turkish or Moroccan young adults: integration, separation and diffusion. None of the participants in the sample were marginalized or assimilated. Young adults reporting diffuse acculturation reported more internalizing and externalizing problems than those who were integrated or separated. Separated young adults reported experiencing more practical barriers to care than integrated young adults. Further research with a larger sample, including young adult migrants using mental health services, is required to improve our understanding of acculturation, psychological problems and barriers to care in this population. Including experiences of discrimination in the model might improve our understanding of the relationship between different forms of acculturation and psychological problems.
Graf, Alexandra C; Bauer, Peter
2011-06-30
We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and the allocation rate to the treatment arms are modified in an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
Nanoengineered capsules for selective SERS analysis of biological samples
NASA Astrophysics Data System (ADS)
You, Yil-Hwan; Schechinger, Monika; Locke, Andrea; Coté, Gerard; McShane, Mike
2018-02-01
Metal nanoparticles conjugated with DNA oligomers have been intensively studied for a variety of applications, including optical diagnostics. Assays based on aggregation of DNA-coated particles in proportion to the concentration of target analyte have not been widely adopted for clinical analysis, however, largely due to the nonspecific responses observed in complex biofluids. While sample pre-preparation such as dialysis is helpful to enable selective sensing, here we sought to prove that assay encapsulation in hollow microcapsules could remove this requirement and thereby facilitate more rapid analysis on complex samples. Gold nanoparticle-based assays were incorporated into capsules comprising polyelectrolyte multilayers (PEMs), and the responses to small-molecule targets and larger proteins were compared. Gold nanoparticles were able to selectively sense small Raman dyes (Rhodamine 6G) in the presence of large protein molecules (BSA) when encapsulated. A ratiometric-based microRNA-17 sensing assay exhibited drastic reduction in response after encapsulation, with statistically-significant relative Raman intensity changes only at a microRNA-17 concentration of 10 nM compared to a range of 0-500 nM for the corresponding solution-phase response.
Aziz, Fahad
2012-09-01
Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) offers a minimally invasive alternative to mediastinoscopy with additional access to the hilar nodes, a better safety profile, and it removes the costs and hazards of theatre time and general anesthesia with comparable sensitivity, although the negative predictive value of mediastinoscopy (and sample size) is greater. EBUS-TBNA also obtains larger samples than conventional TBNA, has superior performance and theoretically is safer, allowing real-time sampling under direct vision. It can also have predictive value both in sonographic appearance of the nodes and histological characteristics. EBUS-TBNA is therefore indicated for NSCLC staging, diagnosis of lung cancer when there is no endobronchial lesion, and diagnosis of both benign (especially tuberculosis and sarcoidosis) and malignant mediastinal lesions. The procedure is different than for flexible bronchoscopy, takes longer, and requires more training. EBUS-TBNA is more expensive than conventional TBNA but can save costs by reducing the number of more costly mediastinoscopies. In the future, endobronchial ultrasound may have applications in airways disease and pulmonary vascular disease.
Patient satisfaction with nursing staff in bone marrow transplantation and hematology units.
Piras, A; Poddigue, M; Angelucci, E
2010-01-01
Several validated questionnaires for assessment of hospitalized patient satisfaction have been reported in the literature. Many have been designed specifically for patients with cancer. User satisfaction is one indicator of service quality and benefits. Thus, we conducted a small qualitative survey managed by nursing staff in our Bone Marrow Transplantation Unit and Acute Leukemia Unit, with the objectives of assessing patient satisfaction, determining critical existing problems, and developing required interventions. The sample was not probabilistic. A questionnaire was developed using the Delphi method in a pilot study with 30 patients. Analysis of the data suggested a good level of patient satisfaction with medical and nursing staffs (100%), but poor satisfaction with food (48%), services (38%), and amenities (31%). Limitations of the study were that the questionnaire was unvalidated and the sample was small. However, for the first time, patient satisfaction was directly measured at our hospital. Another qualitative study will be conducted after correction of the critical points that emerged during this initial study, in a larger sample of patients. Copyright 2010 Elsevier Inc. All rights reserved.
Sampling strategies for radio-tracking coyotes
Smith, G.J.; Cary, J.R.; Rongstad, O.J.
1981-01-01
Ten coyotes radio-tracked for 24 h periods were most active at night and moved little during daylight hours. Home-range size determined from radio-locations of 3 adult coyotes increased with the number of locations until an asymptote was reached at about 35-40 independent day locations or 3-6 nights of hourly radio-locations. Activity of the coyote did not affect the asymptotic nature of the home-range calculations, but home-range sizes determined from more than 3 nights of hourly locations were considerably larger than home-range sizes determined from daylight locations. Coyote home-range sizes were calculated from daylight locations, full-night tracking periods, and half-night tracking periods. Full- and half-night sampling strategies involved obtaining hourly radio-locations during 12 and 6 h periods, respectively. The half-night sampling strategy was the best compromise for our needs, as it adequately indexed the home-range size, reduced time and energy spent, and standardized the area calculation without requiring the researcher to become completely nocturnal. Sight tracking also provided information about coyote activity and sociability.
LC-MS based analysis of endogenous steroid hormones in human hair.
Gao, Wei; Kirschbaum, Clemens; Grass, Juliane; Stalder, Tobias
2016-09-01
The quantification of endogenous steroid hormone concentrations in hair is increasingly used as a method for obtaining retrospective information on long-term integrated hormone exposure. Several different analytical procedures have been employed for hair steroid analysis, with liquid chromatography-mass spectrometry (LC-MS) being recognized as a particularly powerful analytical tool. Several methodological aspects affect the performance of LC-MS systems for hair steroid analysis, including sample preparation and pretreatment, steroid extraction, post-incubation purification, LC methodology, ionization techniques and MS specifications. Here, we critically review the differential value of such protocol variants for hair steroid hormones analysis, focusing on both analytical quality and practical feasibility issues. Our results show that, when methodological challenges are adequately addressed, LC-MS protocols can not only yield excellent sensitivity and specificity but are also characterized by relatively simple sample processing and short run times. This makes LC-MS based hair steroid protocols particularly suitable as a high-quality option for routine application in research contexts requiring the processing of larger numbers of samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gao, Feng; Cai, Chengzheng; Yang, Yugui
2018-06-01
As liquid nitrogen is injected into a wellbore as fracturing fluid, it can rapidly absorb heat from warmer rock and generate cryogenic conditions in the downhole region. This will alter the physical conditions of reservoir rocks and further affect rock failure characteristics. To investigate rock fracture failure characteristics under liquid nitrogen cooling conditions, the fracture features of four types of sandstones and one type of marble were tested on original samples (samples without any treatment) and cryogenic samples (samples tested immediately after removal from liquid nitrogen), respectively. The differences between original samples and cryogenic samples in load-displacement curves, fracture toughness, energy evolution and the crack density of ruptured samples were compared and analyzed. The results showed that at the elastic deformation stage, cryogenic samples presented less plastic deformation and more obvious brittle failure characteristics than original ones. The average fracture toughness of cryogenic samples was 10.47%-158.33% greater than that of original ones, indicating that the mechanical strength of the rocks used was enhanced under cooling conditions. When the samples ruptured, the cryogenic ones had to absorb more energy and store more elastic energy. In general, the fracture degree of cryogenic samples was higher than that of original ones. As the samples were entirely fractured, the crack density of cryogenic samples was at most about 536.67% larger than that of original ones. This indicated that under liquid nitrogen cooling conditions, the stimulated reservoir volume is expected to be improved during fracturing. This work could provide a reference to the research on the mechanical properties and fracture failure of rock during liquid nitrogen fracturing.
Iles, Ray K; Cole, Laurence A; Butler, Stephen A
2014-06-05
The analysis of human chorionic gonadotropin (hCG) in clinical chemistry laboratories by specific immunoassay is well established. However, changes in glycosylation are not as easily assayed and yet alterations in hCG glycosylation is associated with abnormal pregnancy. hCGβ-core fragment (hCGβcf) was isolated from the urine of women, pregnant with normal, molar and hyperemesis gravidarum pregnancies. Each sample was subjected to matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI TOF MS) analysis following dithiothreitol (DTT) reduction and fingerprint spectra of peptide hCGβ 6-40 were analyzed. Samples were variably glycosylated, where most structures were small, core and largely mono-antennary. Larger single bi-antennary and mixtures of larger mono-antennary and bi-antennary moieties were also observed in some samples. Larger glycoforms were more abundant in the abnormal pregnancies and tri-antennary carbohydrate moieties were only observed in the samples from molar and hyperemesis gravidarum pregnancies. Given that such spectral profiling differences may be characteristic, development of small sample preparation for mass spectral analysis of hCG may lead to a simpler and faster approach to glycostructural analysis and potentially a novel clinical diagnostic test.
Basavanhally, Ajay; Viswanath, Satish; Madabhushi, Anant
2015-01-01
Clinical trials increasingly employ medical imaging data in conjunction with supervised classifiers, where the latter require large amounts of training data to accurately model the system. Yet, a classifier selected at the start of the trial based on smaller and more accessible datasets may yield inaccurate and unstable classification performance. In this paper, we aim to address two common concerns in classifier selection for clinical trials: (1) predicting expected classifier performance for large datasets based on error rates calculated from smaller datasets and (2) the selection of appropriate classifiers based on expected performance for larger datasets. We present a framework for comparative evaluation of classifiers using only limited amounts of training data by using random repeated sampling (RRS) in conjunction with a cross-validation sampling strategy. Extrapolated error rates are subsequently validated via comparison with leave-one-out cross-validation performed on a larger dataset. The ability to predict error rates as dataset size increases is demonstrated on both synthetic data as well as three different computational imaging tasks: detecting cancerous image regions in prostate histopathology, differentiating high and low grade cancer in breast histopathology, and detecting cancerous metavoxels in prostate magnetic resonance spectroscopy. For each task, the relationships between 3 distinct classifiers (k-nearest neighbor, naive Bayes, Support Vector Machine) are explored. Further quantitative evaluation in terms of interquartile range (IQR) suggests that our approach consistently yields error rates with lower variability (mean IQRs of 0.0070, 0.0127, and 0.0140) than a traditional RRS approach (mean IQRs of 0.0297, 0.0779, and 0.305) that does not employ cross-validation sampling for all three datasets. PMID:25993029
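A minimal sketch of the idea described above: estimate error rates at several small training-set sizes via repeated random subsampling with held-out data, then fit an inverse power law to extrapolate the error expected at a larger dataset size. This is a generic illustration on synthetic data, not the authors' exact RRS/cross-validation framework, datasets, or classifiers.

```python
# Repeated random subsampling at several training sizes, then learning-curve
# extrapolation with an inverse power law err(n) = a * n^(-b) + c.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

sizes = [50, 100, 200, 400]
mean_err = []
for n in sizes:
    errs = []
    for rep in range(20):                       # repeated random sampling
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=n, test_size=500, random_state=rep)
        clf = KNeighborsClassifier().fit(X_tr, y_tr)
        errs.append(1.0 - clf.score(X_te, y_te))
    mean_err.append(np.mean(errs))

def power_law(n, a, b, c):                      # err(n) = a * n^(-b) + c
    return a * np.power(n, -b) + c

params, _ = curve_fit(power_law, sizes, mean_err, p0=[1.0, 0.5, 0.05],
                      maxfev=10000)
print("extrapolated error at n=1500:", power_law(1500, *params))
```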
Variable size computer-aided detection prompts and mammography film reader decisions
Gilbert, Fiona J; Astley, Susan M; Boggis, Caroline RM; McGee, Magnus A; Griffiths, Pamela M; Duffy, Stephen W; Agbaje, Olorunsola F; Gillan, Maureen GC; Wilson, Mary; Jain, Anil K; Barr, Nicola; Beetles, Ursula M; Griffiths, Miriam A; Johnson, Jill; Roberts, Rita M; Deans, Heather E; Duncan, Karen A; Iyengar, Geeta
2008-01-01
Introduction The purpose of the present study was to investigate the effect of computer-aided detection (CAD) prompts on reader behaviour in a large sample of breast screening mammograms by analysing the relationship of the presence and size of prompts to the recall decision. Methods Local research ethics committee approval was obtained; informed consent was not required. Mammograms were obtained from women attending routine mammography at two breast screening centres in 1996. Films, previously double read, were re-read by a different reader using CAD. The study material included 315 cancer cases comprising all screen-detected cancer cases, all subsequent interval cancers and 861 normal cases randomly selected from 10,267 cases. Ground truth data were used to assess the efficacy of CAD prompting. Associations between prompt attributes and tumour features or reader recall decisions were assessed by chi-squared tests. Results There was a highly significant relationship between prompting and a decision to recall for cancer cases and for a random sample of normal cases (P < 0.001). Sixty-four per cent of all cases contained at least one CAD prompt. In cancer cases, larger prompts were more likely to be recalled (P = 0.02) for masses but there was no such association for calcifications (P = 0.9). In a random sample of 861 normal cases, larger prompts were more likely to be recalled (P = 0.02) for both mass and calcification prompts. Significant associations were observed with prompting and breast density (p = 0.009) for cancer cases but not for normal cases (P = 0.05). Conclusions For both normal cases and cancer cases, prompted mammograms were more likely to be recalled and the prompt size was also associated with a recall decision. PMID:18724867
Variable size computer-aided detection prompts and mammography film reader decisions.
Gilbert, Fiona J; Astley, Susan M; Boggis, Caroline Rm; McGee, Magnus A; Griffiths, Pamela M; Duffy, Stephen W; Agbaje, Olorunsola F; Gillan, Maureen Gc; Wilson, Mary; Jain, Anil K; Barr, Nicola; Beetles, Ursula M; Griffiths, Miriam A; Johnson, Jill; Roberts, Rita M; Deans, Heather E; Duncan, Karen A; Iyengar, Geeta
2008-01-01
The purpose of the present study was to investigate the effect of computer-aided detection (CAD) prompts on reader behaviour in a large sample of breast screening mammograms by analysing the relationship of the presence and size of prompts to the recall decision. Local research ethics committee approval was obtained; informed consent was not required. Mammograms were obtained from women attending routine mammography at two breast screening centres in 1996. Films, previously double read, were re-read by a different reader using CAD. The study material included 315 cancer cases comprising all screen-detected cancer cases, all subsequent interval cancers and 861 normal cases randomly selected from 10,267 cases. Ground truth data were used to assess the efficacy of CAD prompting. Associations between prompt attributes and tumour features or reader recall decisions were assessed by chi-squared tests. There was a highly significant relationship between prompting and a decision to recall for cancer cases and for a random sample of normal cases (P < 0.001). Sixty-four per cent of all cases contained at least one CAD prompt. In cancer cases, larger prompts were more likely to be recalled (P = 0.02) for masses but there was no such association for calcifications (P = 0.9). In a random sample of 861 normal cases, larger prompts were more likely to be recalled (P = 0.02) for both mass and calcification prompts. Significant associations were observed with prompting and breast density (p = 0.009) for cancer cases but not for normal cases (P = 0.05). For both normal cases and cancer cases, prompted mammograms were more likely to be recalled and the prompt size was also associated with a recall decision.
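A minimal sketch of the kind of association test reported in this abstract: a chi-squared test on a 2x2 table of prompt presence against recall decision. The counts are hypothetical placeholders, not data from the study.

```python
# Chi-squared test of association between CAD prompting and recall decision,
# on a hypothetical 2x2 contingency table.
from scipy.stats import chi2_contingency

#                 recalled   not recalled
table = [[120,  80],    # CAD prompt present
         [ 40, 160]]    # CAD prompt absent

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```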
The use of mini-samples in palaeomagnetism
NASA Astrophysics Data System (ADS)
Böhnel, Harald; Michalk, Daniel; Nowaczyk, Norbert; Naranjo, Gildardo Gonzalez
2009-10-01
Rock cores of ~25 mm diameter are widely used in palaeomagnetism. Occasionally smaller diameters have been used as well, which presents distinct advantages in terms of throughput, weight of equipment and core collections. How their orientation precision compares to 25 mm cores, however, has not been evaluated in detail before. Here we compare the site mean directions and their statistical parameters for 12 lava flows sampled with 25 mm cores (standard samples, typically 8 cores per site) and with 12 mm drill cores (mini-samples, typically 14 cores per site). The site-mean directions for both sample sizes appear to be indistinguishable in most cases. For the mini-samples, site dispersion parameters k on average are slightly lower than for the standard samples, reflecting their larger orienting and measurement errors. Applying the Wilcoxon signed-rank test, the probability that k or α95 have the same distribution for both sizes is acceptable only at the 17.4 or 66.3 per cent level, respectively. The larger mini-core numbers per site appear to outweigh the lower k values, also yielding slightly smaller confidence limits α95. Further, both k and α95 are less variable for mini-samples than for standard size samples. This is interpreted also to result from the larger number of mini-samples per site, which better averages out the detrimental effect of undetected abnormal remanence directions. Sampling of volcanic rocks with mini-samples therefore does not present a disadvantage in terms of the overall obtainable uncertainty of site mean directions. Apart from this, mini-samples do present clear advantages during the field work, as about twice the number of drill cores can be recovered compared to 25 mm cores, and the sampled rock unit is then more widely covered, which reduces the contribution of natural random errors produced, for example, by fractures, cooling joints, and palaeofield inhomogeneities. Mini-samples may be processed faster in the laboratory, which is of particular advantage when carrying out palaeointensity experiments.
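A minimal sketch of the Fisher (1953) site statistics referred to above (the dispersion parameter k and the 95% confidence angle α95), computed from unit vectors of remanence directions; the directions below are hypothetical, not study data.

```python
# Fisher precision parameter k and alpha95 from declination/inclination pairs.
import numpy as np

def fisher_stats(decs_deg, incs_deg, p=0.05):
    d, i = np.radians(decs_deg), np.radians(incs_deg)
    # direction cosines of each remanence direction
    xyz = np.column_stack((np.cos(i) * np.cos(d),
                           np.cos(i) * np.sin(d),
                           np.sin(i)))
    n = len(xyz)
    r = np.linalg.norm(xyz.sum(axis=0))          # resultant vector length
    k = (n - 1) / (n - r)                        # precision parameter
    a95 = np.degrees(np.arccos(
        1.0 - (n - r) / r * ((1.0 / p) ** (1.0 / (n - 1)) - 1.0)))
    return k, a95

decs = [351.0, 355.0, 2.0, 358.0, 5.0, 349.0, 0.0, 357.0]   # hypothetical site
incs = [48.0, 52.0, 45.0, 50.0, 47.0, 53.0, 49.0, 51.0]
k, a95 = fisher_stats(decs, incs)
print(f"k = {k:.1f}, alpha95 = {a95:.1f} deg")
```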
Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique
2017-01-01
Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task and demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities. As such, SDF models have recently been introduced for freeways and ramps in the HSM addendum. However, since these functions or models are fitted and validated using data from a few selected states, they are required to be calibrated to the local conditions when applied to a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor. However, the proposed methodology to calibrate SDFs was never validated through research. Furthermore, there are no concrete guidelines to select a reliable sample size. Using extensive simulation, this paper documents an analysis that examined the bias between the 'true' and 'estimated' calibration factors. It was indicated that as the value of the true calibration factor deviates further away from '1', more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that, as the average of the coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of crash severities that are used for the calibration process. Copyright © 2016 Elsevier Ltd. All rights reserved.
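A rough sketch of the kind of simulation study described above: repeatedly sample a set of calibration sites, compute a calibration factor, and watch its spread shrink as the number of sites grows. This assumes the simple ratio definition of a calibration factor (observed over model-predicted severe crashes) and Poisson-distributed counts; the actual HSM SDF calibration procedure and its sample-size guidance are more involved, so treat this only as an illustration.

```python
# Sampling variability of a ratio-style calibration factor versus the number
# of calibration sites, under simple (assumed) Poisson counts.
import numpy as np

rng = np.random.default_rng(1)
true_calibration = 1.3                  # assumed "true" local adjustment factor

def simulate_calibration(n_sites, reps=2000):
    estimates = []
    for _ in range(reps):
        predicted = rng.gamma(shape=2.0, scale=1.5, size=n_sites)   # model predictions
        observed = rng.poisson(true_calibration * predicted)         # simulated local counts
        estimates.append(observed.sum() / predicted.sum())
    estimates = np.asarray(estimates)
    return estimates.mean(), estimates.std()

for n_sites in (25, 50, 100, 200):
    mean_c, sd_c = simulate_calibration(n_sites)
    print(f"{n_sites:4d} sites: mean C = {mean_c:.3f}, sd = {sd_c:.3f}")
```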
NASA Astrophysics Data System (ADS)
Kántor, T.; Maestre, S.; de Loos-Vollebregt, M. T. C.
2005-10-01
In the present work electrothermal vaporization (ETV) was used in both inductively coupled plasma mass spectrometry (ICP-MS) and optical emission spectrometry (OES) for sample introduction of solution samples. The effect of (Pd + Mg)-nitrate modifier and CaCl2 matrix/modifier of variable amounts were studied on ETV-ICP-MS signals of Cr, Cu, Fe, Mn and Pb and on ETV-ICP-OES signals of Ag, Cd, Co, Cu, Fe, Ga, Mn and Zn. With the use of matrix-free standard solutions the analytical curves were bent to the signal axes (as expected from earlier studies), which was observed in the 20-800 pg mass range by ICP-MS and in the 1-50 ng mass range by ICP-OES detection. The degree of curvature was, however, different with the use of single element and multi-element standards. When applying the noted chemical modifiers (aerosol carriers) in microgram amounts, linear analytical curves were found in the nearly two orders of magnitude mass ranges. Changes of the CaCl2 matrix concentration (loaded amount of 2-10 μg Ca) resulted in less than 5% changes in MS signals of 5 elements (each below 1 ng) and OES signals of 22 analytes (each below 15 ng). Exceptions were Pb (ICP-MS) and Cd (ICP-OES), where the sensitivity increase by Pd + Mg modifier was much larger compared to other elements studied. The general conclusions suggest that quantitative analysis with the use of ETV sample introduction requires matrix matching or matrix replacement by appropriate chemical modifier to the specific concentration ranges of analytes. This is a similar requirement to that claimed also by the most commonly used pneumatic nebulization of solutions, if samples with high matrix concentration are concerned.
Coutelot, F; Sappin-Didier, V; Keller, C; Atteia, O
2014-12-01
The unsaturated zone plays a major role in elemental fluxes in terrestrial ecosystems. A representative chemical analysis of soil pore water is required for the interpretation of soil chemical phenomena and particularly to assess Trace Elements (TEs) mobility. This requires an optimal sampling system to avoid modification of the extracted soil water chemistry and allow for an accurate estimation of solute fluxes. In this paper, the chemical composition of soil solutions sampled by Rhizon® samplers connected to a standard syringe was compared to two other types of suction probes (Rhizon® + vacuum tube and Rhizon® + diverted flow system). We investigated the effects of different vacuum application procedures on concentrations of spiked elements (Cr, As, Zn) mixed as powder into the first 20 cm of 100-cm columns and non-spiked elements (Ca, Na, Mg) concentrations in two types of columns (SiO2 sand and a mixture of kaolinite + SiO2 sand substrates). Rhizon® was installed at different depths. The metal concentrations showed that (i) in sand, peak concentrations cannot be correctly sampled, thus the flux cannot be estimated, and the errors can easily reach a factor of 2; (ii) in sand + clay columns, peak concentrations were larger, indicating that they could be sampled but, due to sorption on clay, it was not possible to compare fluxes at different depths. The different samplers tested were not able to reflect the elemental flux to groundwater and, although the Rhizon® + syringe device was more accurate, the best solution remains the use of a lysimeter, whose bottom is kept continuously at a suction close to the one existing in the soil.
Apparatus for Measuring Total Emissivity of Small, Low-Emissivity Samples
NASA Technical Reports Server (NTRS)
Tuttle, James; DiPirro, Michael J.
2011-01-01
An apparatus was developed for measuring total emissivity of small, lightweight, low-emissivity samples at low temperatures. The entire apparatus fits inside a small laboratory cryostat. Sample installation and removal are relatively quick, allowing for faster testing. The small chamber surrounding the sample is lined with black-painted aluminum honeycomb, which simplifies data analysis. This results in the sample viewing a very high-emissivity surface on all sides, an effect which would normally require a much larger chamber volume. The sample and chamber temperatures are individually controlled using off-the-shelf PID (proportional integral derivative) controllers, allowing flexibility in the test conditions. The chamber can be controlled at a higher temperature than the sample, allowing a direct absorptivity measurement. The lightweight sample is suspended by its heater and thermometer leads from an isothermal bar external to the chamber. The wires run out of the chamber through small holes in its corners, and the wires do not contact the chamber itself. During a steady-state measurement, the thermometer and bar are individually controlled at the same temperature, so there is zero heat flow through the wires. Thus, all of the sample-temperature-control heater power is radiated to the chamber. Double-aluminized Kapton (DAK) emissivity was studied down to 10 K, which was about 25 K colder than any previously reported measurements. This verified a minimum in the emissivity at about 35 K and a rise as the temperature dropped to lower values.
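A minimal sketch of the steady-state balance implied above: with the wire heat leak nulled, all sample heater power is radiated to the black-lined chamber, so a total emissivity can be backed out of a gray-body balance. All numbers are hypothetical, and the real data analysis may include geometry and lead-conduction corrections not shown here.

```python
# Total emissivity from the steady-state balance P = eps * sigma * A * (Ts^4 - Tc^4),
# assuming an effectively black chamber and no conduction through the leads.
SIGMA = 5.670374419e-8      # Stefan-Boltzmann constant, W m^-2 K^-4

P_heater = 2.0e-6           # measured sample heater power at steady state (W), hypothetical
area = 0.01                 # total radiating area of the sample (m^2), hypothetical
T_sample = 40.0             # controlled sample temperature (K)
T_chamber = 35.0            # controlled chamber temperature (K)

emissivity = P_heater / (SIGMA * area * (T_sample**4 - T_chamber**4))
print(f"total emissivity ~ {emissivity:.4f}")   # ~0.003 for these numbers
```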
Maintaining and Enhancing Diversity of Sampled Protein Conformations in Robotics-Inspired Methods.
Abella, Jayvee R; Moll, Mark; Kavraki, Lydia E
2018-01-01
The ability to efficiently sample structurally diverse protein conformations allows one to gain a high-level view of a protein's energy landscape. Algorithms from robot motion planning have been used for conformational sampling, and several of these algorithms promote diversity by keeping track of "coverage" in conformational space based on the local sampling density. However, large proteins present special challenges. In particular, larger systems require running many concurrent instances of these algorithms, but these algorithms can quickly become memory intensive because they typically keep previously sampled conformations in memory to maintain coverage estimates. In addition, robotics-inspired algorithms depend on defining useful perturbation strategies for exploring the conformational space, which is a difficult task for large proteins because such systems are typically more constrained and exhibit complex motions. In this article, we introduce two methodologies for maintaining and enhancing diversity in robotics-inspired conformational sampling. The first method addresses algorithms based on coverage estimates and leverages the use of a low-dimensional projection to define a global coverage grid that maintains coverage across concurrent runs of sampling. The second method is an automatic definition of a perturbation strategy through readily available flexibility information derived from B-factors, secondary structure, and rigidity analysis. Our results show a significant increase in the diversity of the conformations sampled for proteins consisting of up to 500 residues when applied to a specific robotics-inspired algorithm for conformational sampling. The methodologies presented in this article may be vital components for the scalability of robotics-inspired approaches.
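A toy illustration of the first idea above: conformations are projected to a low-dimensional space, and a shared grid of cell counts tracks coverage so that expansion can be biased toward sparsely sampled cells. This is a generic sketch, not the authors' implementation (which runs across concurrent sampling instances and uses protein-specific projections); the projection and grid size below are placeholder assumptions.

```python
# Shared low-dimensional coverage grid: count samples per projected cell and
# pick the least-covered sample as the next expansion seed.
import random
from collections import defaultdict

GRID_SIZE = 0.5                      # cell width in the projected space (assumed)

def project(conformation):
    # Placeholder projection: keep the first two coordinates. A real system
    # would use e.g. distances to reference structures or principal components.
    return conformation[0], conformation[1]

def cell_of(point):
    return tuple(int(c // GRID_SIZE) for c in point)

coverage = defaultdict(int)          # shared coverage grid (cell -> sample count)

def record(conformation):
    coverage[cell_of(project(conformation))] += 1

def select_for_expansion(samples):
    # Prefer samples whose projected cell has been visited least often.
    return min(samples, key=lambda s: coverage[cell_of(project(s))])

random.seed(0)
samples = [[random.uniform(-2, 2) for _ in range(6)] for _ in range(50)]
for s in samples:
    record(s)

seed = select_for_expansion(samples)
cell = cell_of(project(seed))
print("least-covered cell:", cell, "count:", coverage[cell])
```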
Wicks, Shawna; Taylor, Christopher M.; Luo, Meng; Blanchard, Eugene IV; Ribnicky, David; Cefalu, William T.; Mynatt, Randall L.; Welsh, David A.
2014-01-01
Objective The gut microbiome has been implicated in obesity and metabolic syndrome; however, most studies have focused on fecal or colonic samples. Several species of Artemisia have been reported to ameliorate insulin signaling both in vitro and in vivo. The aim of this study was to characterize the mucosal and luminal bacterial populations in the terminal ileum with or without supplementation with Artemisia extracts. Materials/Methods Following 4 weeks of supplementation with different Artemisia extracts (PMI 5011, Santa or Scopa), diet-induced obese mice were sacrificed and luminal and mucosal samples of terminal ileum were used to evaluate microbial community composition by pyrosequencing of 16S rDNA hypervariable regions. Results Significant differences in community structure and membership were observed between luminal and mucosal samples, irrespective of diet group. All Artemisia extracts increased the Bacteroidetes:Firmicutes ratio in mucosal samples. This effect was not observed in the luminal compartment. There was high inter-individual variability in the phylogenetic assessments of the ileal microbiota, limiting the statistical power of this pilot investigation. Conclusions Marked differences in bacterial communities exist dependent upon the biogeographic compartment in the terminal ileum. Future studies testing the effects of Artemisia or other botanical supplements require larger sample sizes for adequate statistical power. PMID:24985102
NASA Astrophysics Data System (ADS)
Xie, Xing; Bahnemann, Janina; Wang, Siwen; Yang, Yang; Hoffmann, Michael R.
2016-02-01
Detection and quantification of pathogens in water is critical for the protection of human health and for drinking water safety and security. When the pathogen concentrations are low, large sample volumes (several liters) are needed to achieve reliable quantitative results. However, most microbial identification methods utilize relatively small sample volumes. As a consequence, a concentration step is often required to detect pathogens in natural waters. Herein, we introduce a novel water sample concentration method based on superabsorbent polymer (SAP) beads. When SAP beads swell with water, small molecules can be sorbed within the beads, but larger particles are excluded and, thus, concentrated in the residual non-sorbed water. To illustrate this approach, millimeter-sized poly(acrylamide-co-itaconic acid) (P(AM-co-IA)) beads are synthesized and successfully applied to concentrate water samples containing two model microorganisms: Escherichia coli and bacteriophage MS2. Experimental results indicate that the size of the water channel within water-swollen P(AM-co-IA) hydrogel beads is on the order of several nanometers. The millimeter size coupled with a negative surface charge of the beads is shown to be critical in order to achieve high levels of concentration. This new concentration procedure is very fast, effective, scalable, and low-cost, with no need for complex instrumentation.
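A back-of-the-envelope mass balance illustrating how bead swelling concentrates excluded particles; the volumes and recovery used below are illustrative assumptions, not values measured in the study.

```python
def concentration_factor(initial_volume_ml, sorbed_volume_ml, recovery=1.0):
    """Mass-balance sketch: particles too large to enter the swollen beads
    remain in the residual (non-sorbed) water, so their concentration rises
    by initial_volume / residual_volume, scaled by any recovery loss."""
    residual = initial_volume_ml - sorbed_volume_ml
    if residual <= 0:
        raise ValueError("Beads cannot sorb the entire sample volume.")
    return recovery * initial_volume_ml / residual

# Example (hypothetical numbers): a 1 L sample, beads swelling with 900 mL,
# and 90% of the target cells recovered in the residual water.
print(concentration_factor(1000.0, 900.0, recovery=0.9))  # -> 9.0
```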
Apell, Jennifer N; Gschwend, Philip M
2016-11-01
Superfund sites with sediments contaminated by hydrophobic organic compounds (HOCs) can be difficult to characterize because of the complex nature of sorption to sediments. Porewater concentrations, which are often used to model transport of HOCs from the sediment bed into overlying water, benthic organisms, and the larger food web, are traditionally estimated using sediment concentrations and sorption coefficients derived from equilibrium partitioning (EqP) theory. However, researchers have begun using polymeric samplers to determine porewater concentrations since this method does not require knowledge of the sediment's sorption properties. In this work, polyethylene passive samplers were deployed into sediments in the field (in situ passive sampling) and mixed with sediments in the laboratory (ex situ active sampling) that were contaminated with polychlorinated biphenyls (PCBs). The results show that porewater concentrations based on in situ and ex situ sampling generally agreed within a factor of two, but in situ concentrations were consistently lower than ex situ porewater concentrations. Imprecision arising from in situ passive sampling procedures does not explain this bias, suggesting that field processes like bioirrigation may cause the differences observed between in situ and ex situ polymeric samplers. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that samples of size no larger than ten are not uncommon in biomedical research and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
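A hedged sketch of the kind of computer-simulated experiment described above, assuming normally distributed populations and a simple mean shift; the study itself varied the distributions and the effectiveness of exposure more broadly.

```python
import numpy as np
from scipy import stats

def simulated_error_rates(n, effect=0.0, sd=1.0, alpha=0.05, trials=20000, seed=0):
    """Monte Carlo sketch: draw control and exposed samples of size n from
    normal populations and count how often a two-sample t-test at level
    alpha declares a difference. With effect=0 this estimates the Type I
    error; with effect>0 it estimates power (1 - Type II error)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(trials):
        control = rng.normal(0.0, sd, n)
        exposed = rng.normal(effect, sd, n)
        _, p = stats.ttest_ind(control, exposed)
        rejections += p < alpha
    return rejections / trials

print(simulated_error_rates(9, effect=0.0))   # approximate Type I error at n = 9
print(simulated_error_rates(9, effect=1.5))   # approximate power for a strong effect
```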
Across-cohort QC analyses of GWAS summary statistics from complex traits
Chen, Guo-Bo; Lee, Sang Hong; Robinson, Matthew R; Trzaskowski, Maciej; Zhu, Zhi-Xiang; Winkler, Thomas W; Day, Felix R; Croteau-Chonka, Damien C; Wood, Andrew R; Locke, Adam E; Kutalik, Zoltán; Loos, Ruth J F; Frayling, Timothy M; Hirschhorn, Joel N; Yang, Jian; Wray, Naomi R; Visscher, Peter M
2017-01-01
Genome-wide association studies (GWASs) have been successful in discovering SNP trait associations for many quantitative traits and common diseases. Typically, the effect sizes of SNP alleles are very small and this requires large genome-wide association meta-analyses (GWAMAs) to maximize statistical power. A trend towards ever-larger GWAMA is likely to continue, yet dealing with summary statistics from hundreds of cohorts increases logistical and quality control problems, including unknown sample overlap, and these can lead to both false positive and false negative findings. In this study, we propose four metrics and visualization tools for GWAMA, using summary statistics from cohort-level GWASs. We propose methods to examine the concordance between demographic information, and summary statistics and methods to investigate sample overlap. (I) We use the population genetics Fst statistic to verify the genetic origin of each cohort and their geographic location, and demonstrate using GWAMA data from the GIANT Consortium that geographic locations of cohorts can be recovered and outlier cohorts can be detected. (II) We conduct principal component analysis based on reported allele frequencies, and are able to recover the ancestral information for each cohort. (III) We propose a new statistic that uses the reported allelic effect sizes and their standard errors to identify significant sample overlap or heterogeneity between pairs of cohorts. (IV) To quantify unknown sample overlap across all pairs of cohorts, we propose a method that uses randomly generated genetic predictors that does not require the sharing of individual-level genotype data and does not breach individual privacy. PMID:27552965
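As an illustration of metric (III), the sketch below contrasts z-score correlation (a signature of sample overlap) with standardized effect-size differences (a signature of heterogeneity) for a pair of cohorts. It conveys the idea using reported effect sizes and standard errors, but it is not the exact statistic defined in the paper.

```python
import numpy as np

def pairwise_overlap_signal(beta1, se1, beta2, se2):
    """For a set of approximately independent SNPs reported by two cohorts:
    with no shared samples and no heterogeneity, the z-scores should be
    uncorrelated and the standardized effect-size differences should behave
    like standard normal draws. Positive z-score correlation suggests sample
    overlap; inflated differences suggest heterogeneity. Illustrative only."""
    beta1, se1 = np.asarray(beta1, float), np.asarray(se1, float)
    beta2, se2 = np.asarray(beta2, float), np.asarray(se2, float)
    z_corr = np.corrcoef(beta1 / se1, beta2 / se2)[0, 1]
    d = (beta1 - beta2) / np.sqrt(se1**2 + se2**2)
    return z_corr, np.mean(d**2)  # roughly 0 and 1 expected under the null
```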
Electromagnetic Compatibility (EMC) Requirements for Military and Commercial Equipment
2009-09-01
Commercial off-the-shelf (COTS) use also provides access to what has become a much larger industrial base. With these goals in mind, the Secretary of Defense issued a directive in June 1994 requiring the ... (The record also includes a fragment of a frequency-band table: 110-300 GHz band, 10-1 mm wavelength; microwave data links, radio astronomy, amateur radio, remote sensing, advanced weapons systems; radiated EMI.)
Centennial increase in geomagnetic activity: Latitudinal differences and global estimates
NASA Astrophysics Data System (ADS)
Mursula, K.; Martini, D.
2006-08-01
We study here the centennial change in geomagnetic activity using the newly proposed Inter-Hour Variability (IHV) index. We correct the earlier estimates of the centennial increase by taking into account the effect of the change of the sampling of the magnetic field from one sample per hour to hourly means in the first years of the previous century. Since the IHV index is a variability index, the larger variability in the case of one sample per hour leads, without due correction, to excessively large values in the beginning of the century and an underestimated centennial increase. We discuss two ways to extract the necessary sampling calibration factors and show that they agree very well with each other. The effect of calibration is especially large at the midlatitude Cheltenham/Fredericksburg (CLH/FRD) station, where calibration changes the estimated centennial increase from only 6% to 24%. Sampling calibration also leads to a larger centennial increase of global geomagnetic activity based on the IHV index. The results verify a significant centennial increase in global geomagnetic activity, in qualitative agreement with the aa index, although a quantitative comparison is not warranted. We also find that the centennial increase has a rather strong and curious latitudinal dependence. It is largest at high latitudes. Quite unexpectedly, it is larger at low latitudes than at midlatitudes. These new findings indicate interesting long-term changes in near-Earth space. We also discuss possible internal and external causes for these observed differences. The centennial change of geomagnetic activity may be partly affected by changes in external conditions, partly by the secular decrease of the Earth's magnetic moment, whose effect in near-Earth space may be larger than estimated so far.
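A toy illustration of why one-sample-per-hour values inflate a variability index relative to hourly means, and how a sampling calibration factor can be estimated as a ratio of the two; the synthetic series and the simplified index below are assumptions, not the IHV definition or the calibration procedure used in the study.

```python
import numpy as np

def ihv_like(hourly_values):
    """Simplified variability index: sum of absolute differences between
    successive hourly values (the IHV index is defined over a fixed
    local-night window; all 24 hours are used here for illustration)."""
    h = np.asarray(hourly_values, dtype=float)
    return np.sum(np.abs(np.diff(h)))

rng = np.random.default_rng(1)
minutes = rng.normal(0.0, 5.0, 24 * 60).cumsum()      # synthetic 1-min field values
hourly_means = minutes.reshape(24, 60).mean(axis=1)   # later convention: hourly means
spot_samples = minutes.reshape(24, 60)[:, 0]          # early convention: one value per hour

# Spot sampling retains more short-period variance, inflating the index;
# the ratio plays the role of a sampling calibration factor.
print(ihv_like(spot_samples) / ihv_like(hourly_means))
```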
NASA Astrophysics Data System (ADS)
Trippetta, Fabio; Ruggieri, Roberta; Geremia, Davide; Brandano, Marco
2017-04-01
Understanding the hydraulic and mechanical processes that acted in reservoir rocks and their effect on rock properties is of great interest for both scientific and industrial fields. In this work we investigate the role of hydrocarbons in changing the petrophysical properties of rock by merging laboratory, outcrop, and subsurface data, focusing on the carbonate-bearing Majella reservoir (Bolognano formation). This reservoir represents an interesting analogue for subsurface carbonate reservoirs and is made of high-porosity (8 to 28%) ramp calcarenites saturated by hydrocarbon in the state of bitumen at the surface. Within this lithology, clean and bitumen-bearing samples were investigated. For both groups, density, porosity, and P- and S-wave velocities at increasing confining pressure were measured, and deformation tests were conducted on cylindrical specimens with the BRAVA apparatus at the HP-HT Laboratory of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) in Rome, Italy. The petrophysical characterization shows a very good correlation between Vp, Vs and porosity and a pressure-independent Vp/Vs ratio, while the presence of bitumen within samples increases both Vp and Vs. P-wave velocity hysteresis measured at ambient pressure after 100 MPa of applied confining pressure suggests an almost purely elastic behaviour for bitumen-bearing samples and a more inelastic behaviour for cleaner samples. The calculated dynamic Young's modulus is larger for bitumen-bearing samples, and these data are confirmed by cyclic deformation tests in which the same samples generally record larger strength, larger Young's modulus and smaller permanent strain with respect to clean samples. Starting from laboratory data, we also derived a synthetic acoustic model highlighting an increase in acoustic impedance for bitumen-bearing samples. Models were also run simulating saturation with hydrocarbons of decreasing API gravity, showing effects on the seismic properties of the reservoir opposite to those of bitumen. In order to compare our laboratory results at a larger scale, we selected 11 outcrops of the same lithofacies as the laboratory samples, both clean and bitumen-saturated. Fracture orientations, from the scan-line method, are similar for the two types of outcrops and follow the same trends as literature data collected on older rocks. On the other hand, spacing data show much lower fracture density for bitumen-saturated outcrops, confirming laboratory observations. In conclusion, laboratory experiments highlight a more elastic behaviour for bitumen-bearing samples, and saturated outcrops are less prone to fracture than clean outcrops. The presence of bitumen thus has a positive influence on the mechanical properties of the reservoir, while the acoustic model suggests that lighter oils should have the opposite effect. Geologically, this suggests that hydrocarbon migration in the study area predates the last stage of deformation, and also gives clues that the oil had a relatively high density when deformation began.
Murata, Tsuyoshi; Hieda, Junko; Saito, Nagahiro; Takai, Osamu
2012-05-01
SiO2-added MgF2 nanoparticle coatings with various surface roughness properties were formed on fused silica-glass substrates from autoclaved sols prepared at 100-180 °C. To impart hydrophobicity, we treated the samples with fluoro-alkyl silane (FAS) vapor to form self-assembled monolayers on the nanoparticle coating, and we examined the wettability of the samples. The samples preserved good transparency even after the FAS treatment. The wettability examination revealed that higher autoclave temperatures produced a larger average MgF2 nanoparticle size, a larger surface roughness, and a higher contact angle and roll-off angle.
Design and characterization of an irradiation facility with real-time monitoring
NASA Astrophysics Data System (ADS)
Braisted, Jonathan David
Radiation causes performance degradation in electronics by inducing atomic displacements and ionizations. While radiation hardened components are available, non-radiation hardened electronics can be preferable because they are generally more compact, require less power, and are less expensive than radiation-tolerant equivalents. It is therefore important to characterize the performance of electronics, both hardened and non-hardened, to prevent costly system or mission failures. Radiation effects tests for electronics generally involve a handful of step irradiations, leading to poorly resolved data. Step irradiations also introduce uncertainties in electrical measurements due to temperature annealing effects. This effect may be intensified if the time between exposure and measurement is significant. Induced activity also complicates data collection from step-irradiated test samples. The University of Texas at Austin operates a 1.1 MW Mark II TRIGA research reactor. An in-core irradiation facility for radiation effects testing with a real-time monitoring capability has been designed for the UT TRIGA reactor. The facility is larger than any currently available non-central location in a TRIGA, supporting testing of larger electronic components as well as other in-core irradiation applications requiring significant volume such as isotope production or neutron transmutation doping of silicon. This dissertation describes the design and testing of the large in-core irradiation facility and the experimental campaign developed to exercise the real-time monitoring capability at various reactor power levels. The device chosen for characterization was the 4N25 general-purpose optocoupler. The current transfer ratio, which is an important electrical parameter for optocouplers, was calculated as a function of neutron fluence and gamma dose from the real-time voltage measurements. The resultant radiation effects data were seen to be repeatable and exceptionally finely resolved. Therefore, the capability at UT TRIGA has been proven competitive with world-class effects characterization facilities.
Mauro, Francisco; Monleon, Vicente J; Temesgen, Hailemariam; Ford, Kevin R
2017-01-01
Forest inventories require estimates and measures of uncertainty for subpopulations such as management units. These units often have small sample sizes, so they should be regarded as small areas. When auxiliary information is available, different small area estimation methods have been proposed to obtain reliable estimates for small areas. Unit level empirical best linear unbiased predictors (EBLUP) based on plot or grid unit level models have been studied more thoroughly than area level EBLUPs, where the modelling occurs at the management unit scale. Area level EBLUPs do not require precise plot positioning and allow the use of variable radius plots, thus reducing fieldwork costs. However, their performance has not been examined thoroughly. We compared unit level and area level EBLUPs, using LiDAR auxiliary information collected for inventorying a 98,104 ha coastal coniferous forest. Unit level models were consistently more accurate than area level EBLUPs, and area level EBLUPs were consistently more accurate than field estimates except for large management units that held a large sample. For stand density, volume, basal area, quadratic mean diameter, mean height and Lorey's height, root mean squared errors (rmses) of estimates obtained using area level EBLUPs were, on average, 1.43, 2.83, 2.09, 1.40, 1.32 and 1.64 times larger than those based on unit level estimates, respectively. Similarly, direct field estimates had rmses that were, on average, 1.37, 1.45, 1.17, 1.17, 1.26, and 1.38 times larger than rmses of area level EBLUPs. Therefore, area level models can lead to substantial gains in accuracy compared to direct estimates, and unit level models lead to very important gains in accuracy compared to area level models, potentially justifying the additional costs of obtaining accurate field plot coordinates.
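For readers unfamiliar with area level estimation, the sketch below shows a basic Fay-Herriot-type EBLUP that shrinks direct survey estimates toward a regression on area level auxiliary data (e.g., mean LiDAR metrics per management unit). The variance estimator and fitting details are illustrative and not the authors' exact procedure.

```python
import numpy as np

def fay_herriot_eblup(direct_est, sampling_var, X):
    """Area-level EBLUP sketch: direct_est and sampling_var are the per-unit
    direct estimates and their sampling variances; X holds area-level
    auxiliary covariates (including an intercept column). Uses a simple
    Prasad-Rao-type moment estimate of the area-effect variance."""
    y = np.asarray(direct_est, float)
    D = np.asarray(sampling_var, float)
    X = np.asarray(X, float)
    m, p = X.shape

    # OLS fit and leverages feed the moment estimator of the area-effect variance.
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    resid = y - X @ beta_ols
    sigma2_u = max(0.0, (resid @ resid - np.sum(D * (1 - np.diag(H)))) / (m - p))

    # GLS fit with the estimated variance, then shrink each direct estimate
    # toward the regression prediction; gamma is the per-area shrinkage weight.
    w = 1.0 / (sigma2_u + D)
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    gamma = sigma2_u / (sigma2_u + D)
    return gamma * y + (1.0 - gamma) * (X @ beta)
```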
Liley, James; Wallace, Chris
2015-02-01
Genome-wide association studies (GWAS) have been successful in identifying single nucleotide polymorphisms (SNPs) associated with many traits and diseases. However, at existing sample sizes, these variants explain only part of the estimated heritability. Leverage of GWAS results from related phenotypes may improve detection without the need for larger datasets. The Bayesian conditional false discovery rate (cFDR) constitutes an upper bound on the expected false discovery rate (FDR) across a set of SNPs whose p values for two diseases are both less than two disease-specific thresholds. Calculation of the cFDR requires only summary statistics and has several advantages over traditional GWAS analysis. However, existing methods require distinct control samples between studies. Here, we extend the technique to allow for some or all controls to be shared, increasing applicability. Several different SNP sets can be defined with the same cFDR value, and we show that the expected FDR across the union of these sets may exceed the expected FDR in any single set. We describe a procedure to establish an upper bound for the expected FDR among the union of such sets of SNPs. We apply our technique to pairwise analysis of p values from ten autoimmune diseases with variable sharing of controls, enabling discovery of 59 SNP-disease associations which do not reach GWAS significance after genomic control in individual datasets. Most of the SNPs we highlight have previously been confirmed using replication studies or larger GWAS, a useful validation of our technique; we report eight SNP-disease associations across five diseases not previously declared. Our technique extends and strengthens the previous algorithm, and establishes robust limits on the expected FDR. This approach can improve SNP detection in GWAS, and give insight into shared aetiology between phenotypically related conditions.
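The core empirical estimator behind the cFDR can be sketched as follows; the shared-control correction and the union-of-sets bound developed in the paper are not reproduced here.

```python
import numpy as np

def empirical_cfdr(p_primary, p_conditional, pi, pj):
    """Hedged sketch of the empirical conditional FDR for one SNP:
    cFDR(pi | pj) ~= pi * Pr(P_cond <= pj) / Pr(P_prim <= pi, P_cond <= pj),
    with both probabilities estimated from the observed p-value pairs.
    Illustrative only; the published method adds corrections for shared
    controls and for taking unions over thresholds."""
    p1 = np.asarray(p_primary, float)
    p2 = np.asarray(p_conditional, float)
    n = len(p1)
    frac_cond = max(np.mean(p2 <= pj), 1.0 / n)
    frac_both = max(np.mean((p1 <= pi) & (p2 <= pj)), 1.0 / n)
    return min(1.0, pi * frac_cond / frac_both)

# Usage idea: flag SNP k when empirical_cfdr(pA, pB, pA[k], pB[k]) < 0.01,
# where pA and pB are the p-value vectors for the two diseases.
```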
Use of Smoothed Measured Winds to Predict and Assess Launch Environments
NASA Technical Reports Server (NTRS)
Cordova, Henry S.; Leahy, Frank; Adelfang, Stanley; Roberts, Barry; Starr, Brett; Duffin, Paul; Pueri, Daniel
2011-01-01
Since many of the larger launch vehicles are operated near their design limits during the ascent phase of flight to optimize payload to orbit, it often becomes necessary to verify that the vehicle will remain within certification limits during the ascent phase as part of the go/no-go review made prior to launch. This paper describes the approach used to predict Ares I-X launch vehicle structural air loads and controllability prior to launch, which represents a distinct departure from the methodology of the Space Shuttle and Evolved Expendable Launch Vehicle (EELV) programs. Protection for uncertainty of key environment and trajectory parameters is added to the nominal assessment of launch capability to ensure that critical launch trajectory variables would be within the integrated vehicle certification envelopes. This process was applied by the launch team as a key element of the launch day go/no-go recommendation. Pre-launch assessments of vehicle launch capability for NASA's Space Shuttle and the EELV heavy lift versions require the use of high-resolution wind profile measurements, which have a relatively small sample size compared with low-resolution profile databases (which include low-resolution balloons and radar wind profilers). The approach described in this paper has the potential to allow the pre-launch assessment team to use larger samples of wind measurements from low-resolution wind profile databases, which will improve the accuracy of pre-launch assessments of launch availability with no degradation of mission assurance or launch safety.
Conservation triage or injurious neglect in endangered species recovery.
Gerber, Leah R
2016-03-29
Listing endangered and threatened species under the US Endangered Species Act is presumed to offer a defense against extinction and a solution to achieve recovery of imperiled populations, but only if effective conservation action ensues after listing occurs. The amount of government funding available for species protection and recovery is one of the best predictors of successful recovery; however, government spending is both insufficient and highly disproportionate among groups of species, and there is significant discrepancy between proposed and actualized budgets across species. In light of an increasing list of imperiled species requiring evaluation and protection, an explicit approach to allocating recovery funds is urgently needed. Here I provide a formal decision-theoretic approach focusing on return on investment as an objective and a transparent mechanism to achieve the desired recovery goals. I found that less than 25% of the $1.21 billion/year needed for implementing recovery plans for 1,125 species is actually allocated to recovery. Spending in excess of the recommended recovery budget does not necessarily translate into better conservation outcomes. Rather, elimination of only the budget surplus for "costly yet futile" recovery plans can provide sufficient funding to erase funding deficits for more than 180 species. Triage by budget compression provides better funding for a larger sample of species, and a larger sample of adequately funded recovery plans should produce better outcomes even if by chance. Sharpening our focus on deliberate decision making offers the potential to achieve desired outcomes in avoiding extinction for Endangered Species Act-listed species.
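The "triage by budget compression" idea can be illustrated with a simple greedy reallocation; the data structure, the surplus definition, and the cheapest-gap-first ordering below are assumptions for illustration rather than the paper's decision model.

```python
def reallocate_surplus(plans, surplus_cap_ratio=1.0):
    """Sketch: take spending that exceeds each recovery plan's recommended
    budget (the 'surplus') and use it to fill deficits, funding the smallest
    unmet deficits first. Returns the species whose deficits can be erased."""
    surplus = sum(max(0.0, p["actual"] - surplus_cap_ratio * p["recommended"])
                  for p in plans)
    deficits = sorted((p for p in plans if p["actual"] < p["recommended"]),
                      key=lambda p: p["recommended"] - p["actual"])
    funded = []
    for p in deficits:
        gap = p["recommended"] - p["actual"]
        if gap <= surplus:
            surplus -= gap
            funded.append(p["species"])
    return funded

# Hypothetical budgets (in $M/year) for three plans: one overfunded, two underfunded.
plans = [
    {"species": "A", "recommended": 2.0, "actual": 9.0},
    {"species": "B", "recommended": 1.0, "actual": 0.2},
    {"species": "C", "recommended": 3.0, "actual": 1.0},
]
print(reallocate_surplus(plans))  # -> ['B', 'C']
```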
Habitat fragmentation effects on birds in grasslands and wetlands: A critique of our knowledge
Johnson, D.H.
2001-01-01
Habitat fragmentation exacerbates the problem of habitat loss for grassland and wetland birds. Remaining patches of grasslands and wetlands may be too small, too isolated, and too influenced by edge effects to maintain viable populations of some breeding birds. Knowledge of the effects of fragmentation on bird populations is critically important for decisions about reserve design, grassland and wetland management, and implementation of cropland set-aside programs that benefit wildlife. In my review of research that has been conducted on habitat fragmentation, I found at least five common problems in the methodology used. The results of many studies are compromised by these problems: passive sampling (sampling larger areas in larger patches), confounding effects of habitat heterogeneity, consequences of inappropriate pooling of data from different species, artifacts associated with artificial nest data, and definition of actual habitat patches. As expected, some large-bodied birds with large territorial requirements, such as the northern harrier (Circus cyaneus), appear area sensitive. In addition, some small species of grassland birds favor patches of habitat far in excess of their territory size, including the Savannah (Passerculus sandwichensis), grasshopper (Ammodramus savannarum) and Henslow's (A. henslowii) sparrows, and the bobolink (Dolichonyx oryzivorus). Other species may be area sensitive as well, but the data are ambiguous. Area sensitivity among wetland birds remains unknown since virtually no studies have been based on solid methodologies. We need further research on grassland bird response to habitat that distinguishes supportable conclusions from those that may be artifactual.
Whole metagenome profiles of particulates collected from the International Space Station.
Be, Nicholas A; Avila-Herrera, Aram; Allen, Jonathan E; Singh, Nitin; Checinska Sielaff, Aleksandra; Jaing, Crystal; Venkateswaran, Kasthuri
2017-07-17
The built environment of the International Space Station (ISS) is a highly specialized space in terms of both physical characteristics and habitation requirements. It is unique with respect to conditions of microgravity, exposure to space radiation, and increased carbon dioxide concentrations. Additionally, astronauts inhabit a large proportion of this environment. The microbial composition of ISS particulates has been reported; however, its functional genomics, which are pertinent due to potential impact of its constituents on human health and operational mission success, are not yet characterized. This study examined the whole metagenome of ISS microbes at both species- and gene-level resolution. Air filter and dust samples from the ISS were analyzed and compared to samples collected in a terrestrial cleanroom environment. Furthermore, metagenome mining was carried out to characterize dominant, virulent, and novel microorganisms. The whole genome sequences of select cultivable strains isolated from these samples were extracted from the metagenome and compared. Species-level composition in the ISS was found to be largely dominated by Corynebacterium ihumii GD7, with overall microbial diversity being lower in the ISS relative to the cleanroom samples. When examining detection of microbial genes relevant to human health such as antimicrobial resistance and virulence genes, it was found that a larger number of relevant gene categories were observed in the ISS relative to the cleanroom. Strain-level cross-sample comparisons were made for Corynebacterium, Bacillus, and Aspergillus showing possible distinctions in the dominant strain between samples. Species-level analyses demonstrated distinct differences between the ISS and cleanroom samples, indicating that the cleanroom population is not necessarily reflective of space habitation environments. The overall population of viable microorganisms and the functional diversity inherent to this unique closed environment are of critical interest with respect to future space habitation. Observations and studies such as these will be important to evaluating the conditions required for long-term health of human occupants in such environments.
Periscope for noninvasive two-photon imaging of murine retina in vivo
Stremplewski, Patrycjusz; Komar, Katarzyna; Palczewski, Krzysztof; Wojtkowski, Maciej; Palczewska, Grazyna
2015-01-01
Two-photon microscopy allows visualization of subcellular structures in the living animal retina. In previously reported experiments it was necessary to apply a contact lens to each subject. Extending this technology to larger animals would require fitting a custom contact lens to each animal and cumbersome placement of the living animal's head on the microscope stage. Here we demonstrate a new device, a periscope, for coupling light energy into the mouse eye and capturing emitted fluorescence. Using this periscope we obtained images of the RPE and its subcellular organelles, retinosomes, with a larger field of view than previously reported. This periscope provides an interface with a commercial microscope, does not require a contact lens, and its design could be modified to image the retina in larger animals. PMID:26417507
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj; Gage, Peter; Wright, Michael J.
2017-01-01
Mars Sample Return is our Grand Challenge for the coming decade. TPS (Thermal Protection System) nominal performance is not the key challenge. The main difficulty for designers is the need to verify unprecedented reliability for the entry system: current guidelines for prevention of backward contamination require that the probability of spores larger than 1 micron in diameter escaping into the Earth environment be lower than 1 in 1,000,000 for the entire system, and the allocation to TPS would be more stringent than that. For reference, the reliability allocation for Orion TPS is closer to 1 in 1,000, and the demonstrated reliability for previous human Earth return systems was closer to 1 in 100. Improving reliability by more than 3 orders of magnitude is a grand challenge indeed. The TPS community must embrace the possibility of new architectures that are focused on reliability above thermal performance and mass efficiency. The MSR (Mars Sample Return) EEV (Earth Entry Vehicle) will be hit by MMOD (Micrometeoroid and Orbital Debris) prior to reentry. A chute-less aero-shell design that allows for a self-righting shape was baselined in prior MSR studies, with the assumption that a passive system will maximize EEV robustness. Hence the aero-shell, along with the TPS, has to take ground impact and not break apart. System verification will require not only testing to establish ablative performance and thermal failure, but also testing of damage from MMOD and of structural performance at ground impact. Mission requirements will demand analysis, testing and verification that are focused on establishing reliability of the design. In this proposed talk, we will focus on the grand challenge of MSR EEV TPS and the need for innovative approaches to address challenges in modeling, testing, manufacturing and verification.
McPhail, S M; O'Hara, M; Gane, E; Tonks, P; Bullock-Saxton, J; Kuys, S S
2016-06-01
The Nintendo Wii Fit integrates virtual gaming with body movement, and may be suitable as an adjunct to conventional physiotherapy following lower limb fractures. This study examined the feasibility and safety of using the Wii Fit as an adjunct to outpatient physiotherapy following lower limb fractures, and reports sample size considerations for an appropriately powered randomised trial. Ambulatory patients receiving physiotherapy following a lower limb fracture participated in this study (n=18). All participants received usual care (individual physiotherapy). The first nine participants also used the Wii Fit under the supervision of their treating clinician as an adjunct to usual care. Adverse events, fracture malunion or exacerbation of symptoms were recorded. Pain, balance and patient-reported function were assessed at baseline and discharge from physiotherapy. No adverse events were attributed to either the usual care physiotherapy or Wii Fit intervention for any patient. Overall, 15 (83%) participants completed both assessments and interventions as scheduled. For 80% power in a clinical trial, the number of complete datasets required in each group to detect a small, medium or large effect of the Wii Fit at a post-intervention assessment was calculated at 175, 63 and 25, respectively. The Nintendo Wii Fit was safe and feasible as an adjunct to ambulatory physiotherapy in this sample. When considering a likely small effect size and the 17% dropout rate observed in this study, 211 participants would be required in each clinical trial group. A larger effect size or multiple repeated measures design would require fewer participants. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
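The per-group figures quoted above can be reproduced with a standard normal-approximation sample-size formula, assuming Cohen's d values of 0.3, 0.5 and 0.8 for the small, medium and large effects; the authors' exact convention is not stated, so this is a hedged reconstruction.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * z**2 / d**2)

for label, d in [("small", 0.3), ("medium", 0.5), ("large", 0.8)]:
    print(label, d, n_per_group(d))          # -> 175, 63 and 25 per group

# Allowing for the 17% dropout observed in the pilot (small effect case):
print(ceil(n_per_group(0.3) / (1 - 0.17)))   # -> 211 recruited per group
```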
Occurrence of organic wastewater compounds in effluent-dominated streams in Northeastern Kansas
Lee, C.J.; Rasmussen, T.J.
2006-01-01
Fifty-nine stream-water samples and 14 municipal wastewater treatment facility (WWTF) discharge samples in Johnson County, northeastern Kansas, were analyzed for 55 compounds collectively described as organic wastewater compounds (OWCs). Stream-water samples were collected upstream, in, and downstream from WWTF discharges in urban and rural areas during base-flow conditions. The effect of secondary treatment processes on OWC occurrence was evaluated by collecting eight samples from WWTF discharges using activated sludge and six samples from WWTFs using trickling filter treatment processes. Samples collected directly from WWTF discharges contained the largest concentrations of most OWCs in this study. Samples from trickling filter discharges had significantly larger concentrations of many OWCs (p-value < 0.05) compared to samples collected from activated sludge discharges. OWC concentrations decreased significantly in samples from WWTF discharges compared to stream-water samples collected from sites greater than 2,000 m downstream. Upstream from WWTF discharges, base-flow samples collected in streams draining predominantly urban watersheds had significantly larger concentrations of cumulative OWCs (p-value = 0.03), caffeine (p-value = 0.01), and tris(2-butoxyethyl) phosphate (p-value < 0.01) than those collected downstream from more rural watersheds.
Phenotypic Association Analyses With Copy Number Variation in Recurrent Depressive Disorder.
Rucker, James J H; Tansey, Katherine E; Rivera, Margarita; Pinto, Dalila; Cohen-Woods, Sarah; Uher, Rudolf; Aitchison, Katherine J; Craddock, Nick; Owen, Michael J; Jones, Lisa; Jones, Ian; Korszun, Ania; Barnes, Michael R; Preisig, Martin; Mors, Ole; Maier, Wolfgang; Rice, John; Rietschel, Marcella; Holsboer, Florian; Farmer, Anne E; Craig, Ian W; Scherer, Stephen W; McGuffin, Peter; Breen, Gerome
2016-02-15
Defining the molecular genomic basis of the likelihood of developing depressive disorder is a considerable challenge. We previously associated rare, exonic deletion copy number variants (CNV) with recurrent depressive disorder (RDD). Sex chromosome abnormalities also have been observed to co-occur with RDD. In this reanalysis of our RDD dataset (N = 3106 cases; 459 screened control samples and 2699 population control samples), we further investigated the role of larger CNVs and chromosomal abnormalities in RDD and performed association analyses with clinical data derived from this dataset. We found an enrichment of Turner's syndrome among cases of depression compared with the frequency observed in a large population sample (N = 34,910) of live-born infants collected in Denmark (two-sided p = .023, odds ratio = 7.76 [95% confidence interval = 1.79-33.6]), a case of diploid/triploid mosaicism, and several cases of uniparental isodisomy. In contrast to our previous analysis, large deletion CNVs were no more frequent in cases than control samples, although deletion CNVs in cases contained more genes than control samples (two-sided p = .0002). After statistical correction for multiple comparisons, our data do not support a substantial role for CNVs in RDD, although (as has been observed in similar samples) occasional cases may harbor large variants with etiological significance. Genetic pleiotropy and sample heterogeneity suggest that very large sample sizes are required to study conclusively the role of genetic variation in mood disorders. Copyright © 2016 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
The small-scale treatability study sample exemption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coalgate, J.
1991-01-01
In 1981, the Environmental Protection Agency (EPA) issued an interim final rule that conditionally exempted waste samples "collected solely for the purpose of monitoring or testing to determine their characteristics or composition" from RCRA Subtitle C hazardous waste regulations. This exemption (40 CFR 261.4(d)) applies to the transportation of samples between the generator and testing laboratory, temporary storage of samples at the laboratory prior to and following testing, and storage at a laboratory for specific purposes such as an enforcement action. However, the exclusion did not include large-scale samples used in treatability studies or other testing at pilot plants or other experimental facilities. As a result of comments received by the EPA subsequent to the issuance of the interim final rule, the EPA reopened the comment period on the interim final rule on September 18, 1987, and specifically requested comments on whether or not the sample exclusion should be expanded to include waste samples used in small-scale treatability studies. Almost all responders commented favorably on such a proposal. As a result, the EPA issued a final rule (53 FR 27290, July 19, 1988) conditionally exempting waste samples used in small-scale treatability studies from full regulation under Subtitle C of RCRA. The question of whether or not to extend the exclusion to larger-scale studies, as proposed by the Hazardous Waste Treatment Council, was deferred until a later date. This Information Brief summarizes the requirements of the small-scale treatability exemption.
Complex organic molecules during low-mass star formation: Pilot survey results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Öberg, Karin I.; Graninger, Dawn; Lauck, Trish, E-mail: koberg@cfa.harvard.edu
Complex organic molecules (COMs) are known to be abundant toward some low-mass young stellar objects (YSOs), but how these detections relate to typical COM abundances is not yet understood. We aim to constrain the frequency distribution of COMs during low-mass star formation, beginning with this pilot survey of COM lines toward six embedded YSOs using the IRAM 30 m Telescope. The sample was selected from the Spitzer c2d ice sample and covers a range of ice abundances. We detect multiple COMs, including CH3CN, toward two of the YSOs, and tentatively toward a third. Abundances with respect to CH3OH vary between 0.7% and 10%. This sample is combined with previous COM observations and upper limits to obtain frequency distributions of CH3CN, HCOOCH3, CH3OCH3, and CH3CHO. We find that for all molecules more than 50% of the sample have detections or upper limits of 1%-10% with respect to CH3OH. Moderate abundances of COMs thus appear common during the early stages of low-mass star formation. A larger sample is required, however, to quantify the COM distributions, as well as to constrain the origins of observed variations across the sample.
Chaemfa, Chakra; Wild, Edward; Davison, Brian; Barber, Jonathan L; Jones, Kevin C
2009-06-01
Polyurethane foam disks are a cheap and versatile tool for sampling persistent organic pollutants (POPs) from the air in ambient, occupational and indoor settings. This study provides important background information on the ways in which the performance of these commonly used passive air samplers may be influenced by the key environmental variables of wind speed and aerosol entrapment. Studies were performed in the field, a wind tunnel and with microscopy techniques, to investigate deployment conditions and foam density influence on gas phase sampling rates (not obtained in this study) and aerosol trapping. The study showed: wind speed inside the sampler is greater on the upper side of the sampling disk than the lower side and tethered samplers have higher wind speeds across the upper and lower surfaces of the foam disk at a wind speed ≥ 4 m/s; particles are trapped on the foam surface and within the body of the foam disk; fine (<1 µm) particles can form clusters of larger size inside the foam matrix. Whilst primarily designed to sample gas phase POPs, entrapment of particles ensures some 'sampling' of particle bound POPs species, such as higher molecular weight PAHs and PCDD/Fs. Further work is required to investigate how quantitative such entrapment or 'sampling' is under different ambient conditions, and with different aerosol sizes and types.
Genotype Imputation with Millions of Reference Samples.
Browning, Brian L; Browning, Sharon R
2016-01-07
We present a genotype imputation method that scales to millions of reference samples. The imputation method, based on the Li and Stephens model and implemented in Beagle v.4.1, is parallelized and memory efficient, making it well suited to multi-core computer processors. It achieves fast, accurate, and memory-efficient genotype imputation by restricting the probability model to markers that are genotyped in the target samples and by performing linear interpolation to impute ungenotyped variants. We compare Beagle v.4.1 with Impute2 and Minimac3 by using 1000 Genomes Project data, UK10K Project data, and simulated data. All three methods have similar accuracy but different memory requirements and different computation times. When imputing 10 Mb of sequence data from 50,000 reference samples, Beagle's throughput was more than 100× greater than Impute2's throughput on our computer servers. When imputing 10 Mb of sequence data from 200,000 reference samples in VCF format, Minimac3 consumed 26× more memory per computational thread and 15× more CPU time than Beagle. We demonstrate that Beagle v.4.1 scales to much larger reference panels by performing imputation from a simulated reference panel having 5 million samples and a mean marker density of one marker per four base pairs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
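The linear-interpolation step can be sketched as follows; note that Beagle interpolates HMM state probabilities between genotyped markers, whereas this simplified example interpolates dosages directly to convey the mechanism.

```python
import numpy as np

def interpolate_dosages(genotyped_pos, genotyped_dosage, target_pos):
    """Sketch of the interpolation idea: estimate allele dosages at
    ungenotyped positions by linearly interpolating, along the chromosome,
    quantities computed only at the markers genotyped in the target samples."""
    return np.interp(target_pos, genotyped_pos, genotyped_dosage)

# Usage: dosages at genotyped markers (positions in cM), imputed in between.
print(interpolate_dosages([1.0, 2.0, 4.0], [0.1, 1.8, 0.9], [1.5, 3.0]))
```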
Asai, Atsushi; Ohnishi, Motoki; Nishigaki, Etsuyo; Sekimoto, Miho; Fukuhara, Shunichi; Fukui, Tsuguya
2002-01-09
The purpose of this study is to explore laypersons' attitudes toward the use of archived (existing) materials such as medical records and biological samples and to compare them with the attitudes of physicians who are involved in medical research. Three focus group interviews were conducted, in which seven Japanese male members of the general public, seven female members of the general public and seven physicians participated. It was revealed that the lay public expressed diverse attitudes towards the use of archived information and samples without informed consent. Protecting a subject's privacy, maintaining confidentiality, and communicating the outcomes of studies to research subjects were regarded as essential preconditions if researchers were to have access to archived information and samples used for research without the specific informed consent of the subjects who provided the material. Although participating physicians thought that some kind of prior permission from subjects was desirable, they pointed out the difficulties involved in obtaining individual informed consent in each case. The present preliminary study indicates that the lay public and medical professionals may have different attitudes towards the use of archived information and samples without specific informed consent. This hypothesis, however, is derived from our focus groups interviews, and requires validation through research using a larger sample.
A search for extraterrestrial amino acids in carbonaceous Antarctic micrometeorites
NASA Technical Reports Server (NTRS)
Brinton, K. L.; Engrand, C.; Glavin, D. P.; Bada, J. L.; Maurette, M.
1998-01-01
Antarctic micrometeorites (AMMs) in the 100-400 micron size range are the dominant mass fraction of extraterrestrial material accreted by the Earth today. A high performance liquid chromatography (HPLC) based technique exploited at the limits of sensitivity has been used to search for the extraterrestrial amino acids alpha-aminoisobutyric acid (AIB) and isovaline in AMMs. Five samples, each containing about 30 to 35 grains, were analyzed. All the samples possess a terrestrial amino acid component, indicated by the excess of the L-enantiomers of common protein amino acids. In only one sample (A91) was AIB found to be present at a level significantly above the background blanks. The concentration of AIB (approximately 280 ppm), and the AIB/isovaline ratio (≥ 10), in this sample are both much higher than in CM chondrites. The apparently large variation in the AIB concentrations of the samples suggests that AIB may be concentrated in a rare subset of micrometeorites. Because the AIB/isovaline ratio in sample A91 is much larger than in CM chondrites, the synthesis of amino acids in the micrometeorite parent bodies might have involved a different process requiring an HCN-rich environment, such as that found in comets. If the present-day characteristics of the meteorite and micrometeorite fluxes can be extrapolated back in time, then the flux of large carbonaceous micrometeorites could have contributed to the inventory of prebiotic molecules on the early Earth.
A search for extraterrestrial amino acids in carbonaceous Antarctic micrometeorites.
Brinton, K L; Engrand, C; Glavin, D P; Bada, J L; Maurette, M
1998-10-01
Antarctic micrometeorites (AMMs) in the 100-400 micron size range are the dominant mass fraction of extraterrestrial material accreted by the Earth today. A high performance liquid chromatography (HPLC) based technique exploited at the limits of sensitivity has been used to search for the extraterrestrial amino acids alpha-aminoisobutyric acid (AIB) and isovaline in AMMs. Five samples, each containing about 30 to 35 grains, were analyzed. All the samples possess a terrestrial amino acid component, indicated by the excess of the L-enantiomers of common protein amino acids. In only one sample (A91) was AIB found to be present at a level significantly above the background blanks. The concentration of AIB (approximately 280 ppm) and the AIB/isovaline ratio (≥10) in this sample are both much higher than in CM chondrites. The apparently large variation in the AIB concentrations of the samples suggests that AIB may be concentrated in a rare subset of micrometeorites. Because the AIB/isovaline ratio in sample A91 is much larger than in CM chondrites, the synthesis of amino acids in the micrometeorite parent bodies might have involved a different process requiring an HCN-rich environment, such as that found in comets. If the present day characteristics of the meteorite and micrometeorite fluxes can be extrapolated back in time, then the flux of large carbonaceous micrometeorites could have contributed to the inventory of prebiotic molecules on the early Earth.
A survey of physician efficacy requirements to plan clinical trials.
Oremus, Mark; Collet, Jean-Paul; Corcos, Jacques; Shapiro, Stanley H
2002-12-01
Eliciting physician efficacy requirements for utilizing medical treatments can be a useful means of helping plan a clinical trial. Efficacy requirements were studied for female stress urinary incontinence, where an experimental treatment (collagen injection) was compared to the standard therapy (surgery). A self-administered questionnaire was sent to 223 North American urologists, gynecologists, and urogynecologists. An interviewer also administered a similar questionnaire to 20 other clinician-specialists. The response rate for the self-administered questionnaire was 48.4% (108/223). All 20 clinician-specialists who were approached for an interview consented. On average, respondents to the self-administered questionnaire indicated they would consider using collagen as the first line treatment if the absolute reduction in efficacy of collagen versus surgery was no larger than 23%. The corresponding result for the interview-questionnaire was 22%. Efficacy was measured as patient satisfaction with treatment. In the opinion of the physicians, surgery would remain the standard therapy if the reduction was greater than 34% (self-administered questionnaire), or 37% (interviewer-administered questionnaire). The elicitation of physician efficacy requirements provides an idea of the treatment effect that would be needed for a clinical trial to have an impact on medical practice. These requirements can be used to calculate a relevant sample size.
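The abstract notes that the elicited efficacy requirements can feed a sample size calculation but does not give the formula. The sketch below shows one common way such a margin could be used, a normal-approximation per-arm sample size for a non-inferiority comparison of two proportions; the baseline satisfaction rate, the significance level, and the function name are assumptions, not values from the study.

```python
from math import ceil
from scipy.stats import norm

def non_inferiority_n(p_control, margin, alpha=0.025, power=0.80):
    """Per-arm sample size for a non-inferiority comparison of two proportions,
    using a simple normal approximation and assuming equal true proportions."""
    z_a = norm.ppf(1 - alpha)
    z_b = norm.ppf(power)
    var = 2 * p_control * (1 - p_control)   # variance of the difference in proportions
    return ceil(var * (z_a + z_b) ** 2 / margin ** 2)

# Hypothetical use of the elicited 23% acceptable reduction in patient satisfaction,
# assuming ~80% satisfaction with surgery (illustrative value only).
print(non_inferiority_n(p_control=0.80, margin=0.23))
```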
The large satellite program of ESA and its relevance for broadcast missions
NASA Astrophysics Data System (ADS)
Fromm, H.-H.; Herdan, B. L.
1981-03-01
In an investigation of the market prospects and payload requirements of future communications satellites, it was concluded that during the next 15 years many space missions will demand larger satellite platforms than those currently available. These platforms will be needed in connection with direct-broadcasting satellites, satellites required to enhance capacities in the case of traditional services, and satellites employed to introduce new types of satellite-based communications operating with small terminals. Most of the larger satellites would require the Ariane III capability, corresponding to about 1400 kg satellite mass in geostationary orbit. Attention is given to L-SAT platform capabilities and broadcast payload requirements, taking into account a European direct-broadcast satellite and Canadian direct-broadcast missions.
NASA Astrophysics Data System (ADS)
Dietrich, Volker; Hartmann, Peter; Kerz, Franca
2015-03-01
Digital cameras are present everywhere in our daily life; science, business, and private life cannot be imagined without digital images. The quality of an image is often rated by its color rendering. To obtain correct color recognition, a near-infrared cut (IRC) filter must be used to alter the sensitivity of the imaging sensor. Increasing requirements related to color balance and larger angles of incidence (AOI) have driven the use of new materials, such as the BG6X series, which substitutes for interference-coated filters on D263 thin glass. Although the optical properties are the major design criteria, devices have to withstand numerous environmental conditions during use and manufacturing, e.g., temperature change, humidity, mechanical shock, and mechanical stress. The new materials behave differently with respect to all these aspects and are usually more sensitive to these conditions, to a greater or lesser extent; mechanical strength differs especially. Reliable strength data are of major interest for mobile phone camera applications. Because the bending strength of a glass component depends not only on the material itself but mainly on the surface treatment and test conditions, a single number for the strength can be misleading if the test conditions and the samples are not described precisely. Therefore, Schott started investigating the bending strength of various IRC-filter materials, using different test methods to obtain statistically relevant data.
Comparison of complex effluent treatability in different bench scale microbial electrolysis cells.
Ullery, Mark L; Logan, Bruce E
2014-10-01
A range of wastewaters and substrates were examined using mini microbial electrolysis cells (mini MECs) to see if they could be used to predict the performance of larger-scale cube MECs. COD removals and coulombic efficiencies corresponded well between the two reactor designs for individual samples, with 66-92% of COD removed for all samples. Current generation was consistent between the reactor types for acetate (AC) and fermentation effluent (FE) samples, but less consistent with industrial (IW) and domestic wastewaters (DW). Hydrogen was recovered from all samples in cube MECs, but gas composition and volume varied significantly between samples. Evidence for direct conversion of substrate to methane was observed with two of the industrial wastewater samples (IW-1 and IW-3). Overall, mini MECs provided organic treatment data that corresponded well with larger scale reactor results, and therefore it was concluded that they can be a useful platform for screening wastewater sources. Copyright © 2014 Elsevier Ltd. All rights reserved.
Wild, David M; Fisher, John D; Kim, Soo G; Ferrick, Kevin J; Gross, Jay N; Palma, Eugen C
2004-11-01
The size of pacemakers and implantable cardioverter defibrillators (ICDs) has been diminishing progressively. If two devices are otherwise identical in components, features and technology, the one with a larger battery should have a longer service life. Therefore, patients who receive smaller devices may require more frequent surgery to replace the devices. It is uncertain whether this tradeoff for smaller size is desired by patients. We surveyed 156 patients to determine whether patients prefer a larger, longer-lasting device, or a smaller device that is less noticeable but requires more frequent surgery. The effects of subgroups were evaluated; these included body habitus, age, gender, and patients seen at time of pulse generator replacement (PGR), initial implant, or follow-up. Among 156 patients surveyed, 151 expressed a preference. Of these, 90.1% preferred the larger device and 9.9% the smaller device (P <0.0001). Among thin patients, 79.5% preferred a larger device. Ninety percent of males and 89.2% of females selected the larger device. Among younger patients (< or =72 years), 89.6% preferred the larger device, as did 90.5% of older patients (>72 years). Of patients undergoing PGR or initial implants, 95% favored the larger device, as did 86% of patients presenting for follow-up. The vast majority of patients prefer a larger device to reduce the number of potential replacement operations. This preference crosses the spectrum of those with a previously implanted device, those undergoing initial implants, those returning for routine follow-up, and patients of various ages, gender, and habitus.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Correction (FPC). The State agency must increase the resulting number by 30 percent to allow for attrition... 30 percent to allow for attrition, but the sample size must not be larger than the number of youth...
Code of Federal Regulations, 2013 CFR
2013-10-01
... Correction (FPC). The State agency must increase the resulting number by 30 percent to allow for attrition... 30 percent to allow for attrition, but the sample size must not be larger than the number of youth...
Code of Federal Regulations, 2014 CFR
2014-10-01
... Correction (FPC). The State agency must increase the resulting number by 30 percent to allow for attrition... 30 percent to allow for attrition, but the sample size must not be larger than the number of youth...
Simulation on Poisson and negative binomial models of count road accident modeling
NASA Astrophysics Data System (ADS)
Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.
2016-11-01
Accident count data have often been shown to be overdispersed and may also contain excess zeros. A simulation study was conducted to create scenarios in which accidents occur at a T-junction, with the dependent variable of the generated data assumed to follow either a Poisson or a negative binomial distribution, for sample sizes ranging from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression, and hurdle negative binomial models to the simulated data. Model validation showed that, for each sample size, not every model fits the data well even when the data were generated from that model's own distribution, especially when the sample size is large. Furthermore, larger sample sizes produced more zero accident counts in the dataset.
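A minimal sketch of the kind of simulate-and-fit exercise described above is given below, assuming Python with NumPy and statsmodels. The covariate, coefficients, and dispersion value are illustrative assumptions, and the hurdle model used in the study is omitted for brevity.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200                                    # illustrative sample size between n=30 and n=500
x = rng.uniform(0, 1, size=n)              # hypothetical covariate (e.g., traffic volume index)
mu = np.exp(0.5 + 1.2 * x)                 # log-linear mean; coefficients are assumptions

# Generate counts from a negative binomial with dispersion alpha = 0.8
# (variance = mu + alpha * mu^2), i.e. overdispersed relative to Poisson.
alpha = 0.8
y = rng.negative_binomial(n=1 / alpha, p=1 / (1 + alpha * mu))

X = sm.add_constant(x)
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=alpha)).fit()

# Compare fits; the negative binomial model should normally be preferred here.
print(poisson_fit.aic, nb_fit.aic)
```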
Nelson, Sarah C.; Stilp, Adrienne M.; Papanicolaou, George J.; Taylor, Kent D.; Rotter, Jerome I.; Thornton, Timothy A.; Laurie, Cathy C.
2016-01-01
Imputation is commonly used in genome-wide association studies to expand the set of genetic variants available for analysis. Larger and more diverse reference panels, such as the final Phase 3 of the 1000 Genomes Project, hold promise for improving imputation accuracy in genetically diverse populations such as Hispanics/Latinos in the USA. Here, we sought to empirically evaluate imputation accuracy when imputing to a 1000 Genomes Phase 3 versus a Phase 1 reference, using participants from the Hispanic Community Health Study/Study of Latinos. Our assessments included calculating the correlation between imputed and observed allelic dosage in a subset of samples genotyped on a supplemental array. We observed that the Phase 3 reference yielded higher accuracy at rare variants, but that the two reference panels were comparable at common variants. At a sample level, the Phase 3 reference improved imputation accuracy in Hispanic/Latino samples from the Caribbean more than for Mainland samples, which we attribute primarily to the additional reference panel samples available in Phase 3. We conclude that a 1000 Genomes Project Phase 3 reference panel can yield improved imputation accuracy compared with Phase 1, particularly for rare variants and for samples of certain genetic ancestry compositions. Our findings can inform imputation design for other genome-wide association studies of participants with diverse ancestries, especially as larger and more diverse reference panels continue to become available. PMID:27346520
Nonanalytic Laboratory Automation: A Quarter Century of Progress.
Hawker, Charles D
2017-06-01
Clinical laboratory automation has blossomed since the 1989 AACC meeting, at which Dr. Masahide Sasaki first showed a Western audience what his laboratory had implemented. Many diagnostics and other vendors are now offering a variety of automated options for laboratories of all sizes. Replacing manual processing and handling procedures with automation was embraced by the laboratory community because of the obvious benefits of labor savings and improvement in turnaround time and quality. Automation was also embraced by the diagnostics vendors, who saw automation as a means of incorporating the analyzers purchased by their customers into larger systems in which the benefits of automation were integrated with the analyzers. This report reviews the options that are available to laboratory customers. These options include so-called task-targeted automation: modules that range from single-function devices that automate single tasks (e.g., decapping or aliquoting) to multifunction workstations that incorporate several of the functions of a laboratory sample processing department. The options also include total laboratory automation systems that use conveyors to link sample processing functions to analyzers and often include postanalytical features such as refrigerated storage and sample retrieval. Most importantly, this report reviews a recommended process for evaluating the need for new automation and for identifying the specific requirements of a laboratory and developing solutions that can meet those requirements. The report also discusses some of the practical considerations facing a laboratory in a new implementation and reviews the concept of machine vision to replace human inspections. © 2017 American Association for Clinical Chemistry.
Percussive Augmenter of Rotary Drills (PARoD)
NASA Technical Reports Server (NTRS)
Badescu, Mircea; Hasenoehrl, Jennifer; Bar-Cohen, Yoseph; Sherrit, Stewart; Bao, Xiaoqi; Chang, Zensheu; Ostlund, Patrick; Aldrich, Jack
2013-01-01
Increasingly, NASA exploration mission objectives include sample acquisition tasks for in-situ analysis or for potential sample return to Earth. To address the requirements for samplers that could be operated at the conditions of the various bodies in the solar system, a piezoelectric actuated percussive sampling device was developed that requires low preload (as low as 10 N) which is important for operation at low gravity. This device can be made as light as 400 g, can be operated using low average power, and can drill rocks as hard as basalt. Significant improvement of the penetration rate was achieved by augmenting the hammering action by rotation and use of a fluted bit to provide effective cuttings removal. Generally, hammering is effective in fracturing drilled media while rotation of fluted bits is effective in cuttings removal. To benefit from these two actions, a novel configuration of a percussive mechanism was developed to produce an augmenter of rotary drills. The device was called Percussive Augmenter of Rotary Drills (PARoD). A breadboard PARoD was developed with a 6.4 mm (0.25 in) diameter bit and was demonstrated to increase the drilling rate of rotation alone by 1.5 to over 10 times. The test results of this configuration were published in a previous publication. Further, a larger PARoD breadboard with a 50.8 mm (2.0 in) diameter bit was developed and tested. This paper presents the design, analysis and test results of the large diameter bit percussive augmenter.
2011-01-01
To obtain approval for the use of vertebrate animals in research, an investigator must assure an ethics committee that the proposed number of animals is the minimum necessary to achieve a scientific goal. How does an investigator make that assurance? A power analysis is most accurate when the outcome is known before the study, which it rarely is. A ‘pilot study’ is appropriate only when the number of animals used is a tiny fraction of the numbers that will be invested in the main study because the data for the pilot animals cannot legitimately be used again in the main study without increasing the rate of type I errors (false discovery). Traditional significance testing requires the investigator to determine the final sample size before any data are collected and then to delay analysis of any of the data until all of the data are final. An investigator often learns at that point either that the sample size was larger than necessary or too small to achieve significance. Subjects cannot be added at this point in the study without increasing type I errors. In addition, journal reviewers may require more replications in quantitative studies than are truly necessary. Sequential stopping rules used with traditional significance tests allow incremental accumulation of data on a biomedical research problem so that significance, replicability, and use of a minimal number of animals can be assured without increasing type I errors. PMID:21838970
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required to detect more subtle reductions in malaria transmission, but these invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
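A simplified sketch of the simulation-based power calculation described above follows. It ignores seroreversion, assumes a known change point, and uses arbitrary SCR values and age range; it is not the authors' calculator, and every parameter value and function name is an assumption made for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)

def seroprev(age, lam_old, lam_new, tau):
    """Seroprevalence under a simple catalytic model (seroreversion ignored)
    with the SCR dropping from lam_old to lam_new tau years before sampling."""
    exposure = np.where(age <= tau, lam_new * age,
                        lam_new * tau + lam_old * (age - tau))
    return 1.0 - np.exp(-exposure)

def neg_loglik(log_lams, age, sero, tau, stable):
    lams = np.exp(log_lams)
    lam_old, lam_new = (lams[0], lams[0]) if stable else (lams[0], lams[1])
    p = np.clip(seroprev(age, lam_old, lam_new, tau), 1e-9, 1 - 1e-9)
    return -np.sum(sero * np.log(p) + (1 - sero) * np.log(1 - p))

def power(n, lam_old=0.1, lam_new=0.02, tau=10, n_sim=200, alpha=0.05):
    """Fraction of simulated surveys in which a likelihood-ratio test rejects
    the stable-SCR model in favour of the reduced-SCR model."""
    hits = 0
    for _ in range(n_sim):
        age = rng.uniform(1, 60, n)
        sero = rng.binomial(1, seroprev(age, lam_old, lam_new, tau))
        ll0 = minimize(neg_loglik, [np.log(0.05)], args=(age, sero, tau, True)).fun
        ll1 = minimize(neg_loglik, [np.log(0.05), np.log(0.05)],
                       args=(age, sero, tau, False)).fun
        hits += 2 * (ll0 - ll1) > chi2.ppf(1 - alpha, df=1)
    return hits / n_sim

print(power(n=300))   # fraction of simulated surveys detecting the SCR reduction
```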
A neural algorithm for the non-uniform and adaptive sampling of biomedical data.
Mesin, Luca
2016-04-01
Body sensors are finding increasing applications in self-monitoring for health-care and in the remote surveillance of sensitive people. The physiological data to be sampled can be non-stationary, with bursts of high amplitude and frequency content providing most of the information. Such data could be sampled efficiently with a non-uniform schedule that increases the sampling rate only during activity bursts. A real-time and adaptive algorithm is proposed to select the sampling rate, in order to reduce the number of measured samples while still recording the main information. The algorithm is based on a neural network which predicts the subsequent samples and their uncertainties, requiring a measurement only when the risk of the prediction is larger than a selectable threshold. Four examples of application to biomedical data are discussed: electromyogram, electrocardiogram, electroencephalogram, and body acceleration. Sampling rates are reduced below the Nyquist limit while still preserving an accurate representation of the data and of their power spectral densities (PSD). For example, sampling at 60% of the Nyquist frequency, the percentage average rectified errors in estimating the signals are on the order of 10% and the PSD is fairly represented up to the highest frequencies. The method outperforms both uniform sampling and compressive sensing applied to the same data. The discussed method makes it possible to go beyond the Nyquist limit while still preserving the information content of non-stationary biomedical signals. It could find applications in body sensor networks to lower the number of wireless communications (saving sensor power) and to reduce memory usage. Copyright © 2016 Elsevier Ltd. All rights reserved.
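The risk-driven sampling rule can be illustrated with a toy sketch. A two-point linear extrapolator stands in for the paper's neural predictor, and for clarity the rule is evaluated offline against a fully recorded signal, so the "risk" check uses the true sample value; the published method instead predicts its own uncertainty online. All names, thresholds, and the synthetic signal are assumptions.

```python
import numpy as np

def adaptive_sample(signal, threshold=3.0, window=20):
    """Toy risk-driven sampling: keep a running linear predictor of the next
    sample and store a real measurement only when the deviation from the
    prediction exceeds `threshold` standard deviations of recent errors
    (a crude stand-in for the network's uncertainty estimate)."""
    stored = {0: signal[0], 1: signal[1]}          # indices of samples actually measured
    recent_err = [0.0]
    prev, last = signal[0], signal[1]
    for t in range(2, len(signal)):
        pred = 2 * last - prev                     # linear extrapolation from the last two values
        sigma = np.std(recent_err[-window:]) + 1e-9
        if abs(signal[t] - pred) > threshold * sigma:
            stored[t] = signal[t]                  # high "risk": measurement required
            value = signal[t]
        else:
            value = pred                           # prediction accepted, no measurement taken
        recent_err.append(signal[t] - pred)
        prev, last = last, value
    return stored

t = np.linspace(0, 1, 2000)
burst_signal = np.sin(2 * np.pi * 5 * t) + (np.abs(t - 0.5) < 0.01) * 2.0  # burst in the middle
kept = adaptive_sample(burst_signal)
print(f"measured {len(kept)} of {len(burst_signal)} samples")
```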
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Yang, Yuekui
2016-01-01
This is the second part of a study on how temporal sampling frequency affects satellite retrievals in support of the Deep Space Climate Observatory (DSCOVR) mission. Continuing from Part 1, which looked at Earth's radiation budget, this paper presents the effect of sampling frequency on DSCOVR-derived cloud fraction. The output from NASA's Goddard Earth Observing System version 5 (GEOS-5) Nature Run is used as the "truth". The effect of temporal resolution on potential DSCOVR observations is assessed by subsampling the full Nature Run data. A set of metrics, including uncertainty and absolute error in the subsampled time series, correlation between the original and the subsamples, and Fourier analysis have been used for this study. Results show that, for a given sampling frequency, the uncertainties in the annual mean cloud fraction of the sunlit half of the Earth are larger over land than over ocean. Analysis of correlation coefficients between the subsamples and the original time series demonstrates that even though sampling at certain longer time intervals may not increase the uncertainty in the mean, the subsampled time series is further and further away from the "truth" as the sampling interval becomes larger and larger. Fourier analysis shows that the simulated DSCOVR cloud fraction has underlying periodical features at certain time intervals, such as 8, 12, and 24 h. If the data is subsampled at these frequencies, the uncertainties in the mean cloud fraction are higher. These results provide helpful insights for the DSCOVR temporal sampling strategy.
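A minimal sketch of the subsampling comparison, assuming a synthetic hourly cloud-fraction series rather than the GEOS-5 Nature Run, might look as follows; the series, the hold-and-repeat reconstruction, and the chosen intervals are illustrative assumptions.

```python
import numpy as np

def subsample_metrics(truth, step):
    """Subsample an hourly 'truth' series every `step` hours, then report the
    bias of the subsampled mean and the correlation of a hold-and-repeat
    reconstruction with the original series (two of the metrics named above)."""
    sub = truth[::step]
    reconstructed = np.repeat(sub, step)[:len(truth)]   # hold each subsample until the next one
    bias = sub.mean() - truth.mean()
    corr = np.corrcoef(truth, reconstructed)[0, 1]
    return bias, corr

# Hypothetical hourly cloud-fraction series with a 24 h cycle plus noise.
rng = np.random.default_rng(0)
hours = np.arange(365 * 24)
truth = 0.55 + 0.1 * np.sin(2 * np.pi * hours / 24) + 0.05 * rng.standard_normal(hours.size)
for step in (4, 8, 12, 24):
    print(step, subsample_metrics(truth, step))
```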
Lusher, Amy L.; Tirelli, Valentina; O’Connor, Ian; Officer, Rick
2015-01-01
Plastic, as a form of marine litter, is found in varying quantities and sizes around the globe from surface waters to deep-sea sediments. Identifying patterns of microplastic distribution will benefit an understanding of the scale of their potential effect on the environment and organisms. As sea ice extent is reducing in the Arctic, heightened shipping and fishing activity may increase marine pollution in the area. Microplastics may enter the region following ocean transport and local input, although baseline contamination measurements are still required. Here we present the first study of microplastics in Arctic waters, south and southwest of Svalbard, Norway. Microplastics were found in surface (top 16 cm) and sub-surface (6 m depth) samples using two independent techniques. Origins and pathways bringing microplastic to the Arctic remain unclear. Particle composition (95% fibres) suggests they may either result from the breakdown of larger items (transported over large distances by prevailing currents, or derived from local vessel activity), or input in sewage and wastewater from coastal areas. Concurrent observations of high zooplankton abundance suggest a high probability for marine biota to encounter microplastics and a potential for trophic interactions. Further research is required to understand the effects of microplastic-biota interaction within this productive environment. PMID:26446348
NASA Astrophysics Data System (ADS)
Lusher, Amy L.; Tirelli, Valentina; O'Connor, Ian; Officer, Rick
2015-10-01
Plastic, as a form of marine litter, is found in varying quantities and sizes around the globe from surface waters to deep-sea sediments. Identifying patterns of microplastic distribution will benefit an understanding of the scale of their potential effect on the environment and organisms. As sea ice extent is reducing in the Arctic, heightened shipping and fishing activity may increase marine pollution in the area. Microplastics may enter the region following ocean transport and local input, although baseline contamination measurements are still required. Here we present the first study of microplastics in Arctic waters, south and southwest of Svalbard, Norway. Microplastics were found in surface (top 16 cm) and sub-surface (6 m depth) samples using two independent techniques. Origins and pathways bringing microplastic to the Arctic remain unclear. Particle composition (95% fibres) suggests they may either result from the breakdown of larger items (transported over large distances by prevailing currents, or derived from local vessel activity), or input in sewage and wastewater from coastal areas. Concurrent observations of high zooplankton abundance suggest a high probability for marine biota to encounter microplastics and a potential for trophic interactions. Further research is required to understand the effects of microplastic-biota interaction within this productive environment.
Resonance Raman Spectroscopy of Extreme Nanowires and Other 1D Systems
Smith, David C.; Spencer, Joseph H.; Sloan, Jeremy; McDonnell, Liam P.; Trewhitt, Harrison; Kashtiban, Reza J.; Faulques, Eric
2016-01-01
This paper briefly describes how nanowires with diameters corresponding to 1 to 5 atoms can be produced by melting a range of inorganic solids in the presence of carbon nanotubes. These nanowires are extreme in the sense that they are the limit of miniaturization of nanowires and their behavior is not always a simple extrapolation of the behavior of larger nanowires as their diameter decreases. The paper then describes the methods required to obtain Raman spectra from extreme nanowires and explains that, because of the van Hove singularities that 1D systems exhibit in their optical density of states, choosing the correct photon excitation energy is critical. It describes the techniques required to determine the photon energy dependence of the resonances observed in Raman spectroscopy of 1D systems and in particular how to obtain measurements of Raman cross-sections with better than 8% noise and measure the variation in the resonance as a function of sample temperature. The paper describes the importance of ensuring that the Raman scattering is linearly proportional to the laser excitation intensity. It also describes how to use the polarization dependence of the Raman scattering to separate Raman scattering of the encapsulated 1D systems from that of other extraneous components in any sample. PMID:27168195
NASA Astrophysics Data System (ADS)
Yokoyama, Yusuke; Miyairi, Yousuke; Matsuzaki, Hiroyuki; Tsunomori, Fumiaki
2007-06-01
Availability of an effective graphitization system is essential for the successful operation of an AMS laboratory for radiocarbon measurements. We have set up a graphitization system consisting of metal vacuum lines for cleaning CO2 sample gas which is then converted to graphite. CO2 gas from a carbonate sample is produced in vacuum in a test tube by injecting concentrated phosphoric acid. The tube is placed into a heated metal block to accelerate dissolution. However, we have observed systematic differences in the time required to convert the CO2 gas to graphite under a hydrogen atmosphere, from less than 3 h to over 10 h. We have conducted a series of experiments including background measurements and yield measurements to monitor secondary carbon contamination and changes in isotopic fractionation. All of the tests show that the carbon isotope ratios remain unaffected by the duration of the process. We also used a quadrupole mass spectrometer (QMS) to identify possible contaminant gases. Contaminant peaks were identified at high mass (larger than 60) only for long duration experiments. This suggests a possible reaction between the rubber cap and acid fumes producing a contaminant gas that impeded the reduction of CO2.
Lusher, Amy L; Tirelli, Valentina; O'Connor, Ian; Officer, Rick
2015-10-08
Plastic, as a form of marine litter, is found in varying quantities and sizes around the globe from surface waters to deep-sea sediments. Identifying patterns of microplastic distribution will benefit an understanding of the scale of their potential effect on the environment and organisms. As sea ice extent is reducing in the Arctic, heightened shipping and fishing activity may increase marine pollution in the area. Microplastics may enter the region following ocean transport and local input, although baseline contamination measurements are still required. Here we present the first study of microplastics in Arctic waters, south and southwest of Svalbard, Norway. Microplastics were found in surface (top 16 cm) and sub-surface (6 m depth) samples using two independent techniques. Origins and pathways bringing microplastic to the Arctic remain unclear. Particle composition (95% fibres) suggests they may either result from the breakdown of larger items (transported over large distances by prevailing currents, or derived from local vessel activity), or input in sewage and wastewater from coastal areas. Concurrent observations of high zooplankton abundance suggest a high probability for marine biota to encounter microplastics and a potential for trophic interactions. Further research is required to understand the effects of microplastic-biota interaction within this productive environment.
Dana Mitchell
2009-01-01
Increased use of forest fuel requires larger and larger procurement areas. Inclusion of stump material within the shorter distances could make this unusual source of biomass more economical to harvest. Land clearing activities are also helping to raise interest in stump harvesting. Processing stump material for biomass is an alternative...
Effect of abdominopelvic abscess drain size on drainage time and probability of occlusion
Rotman, Jessica A.; Getrajdman, George I.; Maybody, Majid; Erinjeri, Joseph P.; Yarmohammadi, Hooman; Sofocleous, Constantinos T.; Solomon, Stephen B.; Boas, F. Edward
2016-01-01
Background: The purpose of this study is to determine whether larger abdominopelvic abscess drains reduce the time required for abscess resolution, or the probability of tube occlusion. Methods: 144 consecutive patients who underwent abscess drainage at a single institution were reviewed retrospectively. Results: Larger initial drain size did not reduce drainage time, drain occlusion, or drain exchanges (p>0.05). Subgroup analysis did not find any type of collection that benefitted from larger drains. A multivariate model predicting drainage time showed that large collections (>200 ml) required 16 days longer drainage time than small collections (<50 ml). Collections with a fistula to bowel required 17 days longer drainage time than collections without a fistula. Initial drain size and the viscosity of the fluid in the collection had no significant effect on drainage time in the multivariate model. Conclusions: 8 F drains are adequate for initial drainage of most serous and serosanguineous collections. 10 F drains are adequate for initial drainage of most purulent or bloody collections. PMID:27634422
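The additive structure of the multivariate drainage-time model can be illustrated with the two coefficients quoted above; the baseline duration in the sketch is an assumed placeholder, not a value reported in the paper.

```python
def predicted_drainage_days(size_ml, fistula, baseline_days=10):
    """Illustrative additive model built from the coefficients quoted in the
    abstract: +16 days for large (>200 ml) vs small (<50 ml) collections and
    +17 days when a fistula to bowel is present. The 10-day baseline for a
    small collection without a fistula is an assumed value, not from the paper."""
    days = baseline_days
    if size_ml > 200:
        days += 16
    if fistula:
        days += 17
    return days

print(predicted_drainage_days(300, fistula=True))   # 10 + 16 + 17 = 43 days
```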
Geostationary platform systems concepts definition study. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1980-01-01
The results of a geostationary platform concept analysis are summarized. Mission and payloads definition, concept selection, the requirements of an experimental platform, supporting research and technology, and the Space Transportation System interface requirements are addressed. It is concluded that platforms represent a logical extension of current trends toward larger, more complex, multifrequency satellites. Geostationary platforms offer significant cost savings compared to individual satellites, with the majority of these economies being realized with single Shuttle launched platforms. Further cost savings can be realized, however, by having larger platforms. Platforms accommodating communications equipment that operates at multiple frequencies and which provide larger scale frequency reuse through the use of large aperture multibeam antennas and onboard switching maximize the useful capacity of the orbital arc and frequency spectrum. Projections of market demand indicate that such conservation measures are clearly essential if orderly growth is to be provided for. In addition, it is pointed out that a NASA experimental platform is required to demonstrate the technologies necessary for operational geostationary platforms of the 1990's.
Overy, Catherine; Booth, George H; Blunt, N S; Shepherd, James J; Cleland, Deidre; Alavi, Ali
2014-12-28
Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn
2015-03-28
The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the hope of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives an accurate evaluation of the structural and thermodynamic properties of the conformational transition, which are in good agreement with the standard REMD simulation. Therefore, the hybrid REMD could greatly increase the computational efficiency and thus expand the application of REMD simulation to larger protein systems.
Mitchell, John T; McIntyre, Elizabeth M; English, Joseph S; Dennis, Michelle F; Beckham, Jean C; Kollins, Scott H
2017-11-01
Mindfulness meditation training is garnering increasing empirical interest as an intervention for ADHD in adulthood, although no studies of mindfulness as a standalone treatment have included a sample composed entirely of adults with ADHD or a comparison group. The aim of this study was to assess the feasibility, acceptability, and preliminary efficacy of mindfulness meditation for ADHD, executive functioning (EF), and emotion dysregulation symptoms in an adult ADHD sample. Adults with ADHD were stratified by ADHD medication status and otherwise randomized into an 8-week group-based mindfulness treatment (n = 11) or waitlist group (n = 9). Treatment feasibility and acceptability were positive. In addition, self-reported ADHD and EF symptoms (assessed in the laboratory and via ecological momentary assessment), clinician ratings of ADHD and EF symptoms, and self-reported emotion dysregulation improved for the treatment group relative to the waitlist group over time with large effect sizes. Improvement was not observed for EF tasks. Findings support preliminary treatment efficacy, though they require confirmation in larger trials.
New evidence of the effects of education on health in the US: compulsory schooling laws revisited.
Fletcher, Jason M
2015-02-01
Estimating the effects of education on health and mortality has been the subject of intense debate and competing findings and summaries. The original Lleras-Muney (2005) methods utilizing state compulsory schooling laws as instrumental variables for completed education and US data to establish effects of education on mortality have been extended to several countries, with mixed and often null findings. However, additional US studies have lagged behind due to small samples and/or lack of mortality information in many available datasets. This paper uses a large, novel survey from the AARP on several hundred thousand respondents to present new evidence of the effects of education on a variety of health outcomes. Results suggest that education may have a role in improving several dimensions of health, such as self reports, cardiovascular outcomes, and weight outcomes. Other results appear underpowered, suggesting that further use of this methodology may require even larger, and potentially unattainable, sample sizes in the US. Copyright © 2014 Elsevier Ltd. All rights reserved.
Evaluation of tools for highly variable gene discovery from single-cell RNA-seq data.
Yip, Shun H; Sham, Pak Chung; Wang, Junwen
2018-02-21
Traditional RNA sequencing (RNA-seq) allows the detection of gene expression variations between two or more cell populations through differentially expressed gene (DEG) analysis. However, genes that contribute to cell-to-cell differences are not discoverable with RNA-seq because RNA-seq samples are obtained from a mixture of cells. Single-cell RNA-seq (scRNA-seq) allows the detection of gene expression in each cell. With scRNA-seq, highly variable gene (HVG) discovery allows the detection of genes that contribute strongly to cell-to-cell variation within a homogeneous cell population, such as a population of embryonic stem cells. This analysis is implemented in many software packages. In this study, we compare seven HVG methods from six software packages, including BASiCS, Brennecke, scLVM, scran, scVEGs and Seurat. Our results demonstrate that reproducibility in HVG analysis requires a larger sample size than DEG analysis. Discrepancies between methods and potential issues in these tools are discussed and recommendations are made.
Chan, Daniel Ky; Sherrington, Cathie; Naganathan, Vasi; Xu, Ying Hua; Chen, Jack; Ko, Anita; Kneebone, Ian; Cumming, Robert
2018-06-01
Falls in hospital are common and up to 70% result in injury, leading to increased length of stay and accounting for 10% of patient safety-related deaths. Yet, high-quality evidence guiding best practice is lacking. Fall prevention strategies have worked in some trials but not in others. Differences in study setting (acute, subacute, rehabilitation) and sampling of patients (cognitively intact or impaired) may explain the difference in results. This article discusses these important issues and describes the strategies to prevent falls in the acute hospital setting we have studied, which engage the cognitively impaired who are more likely to fall. We have used video clips rather than verbal instruction to educate patients, and are optimistic that this approach may work. We have also explored the option of co-locating high fall risk patients in a close observation room for supervision, with promising results. Further studies, using larger sample sizes are required to confirm our findings. © 2018 AJA Inc.
Mars Sample Return and Flight Test of a Small Bimodal Nuclear Rocket and ISRU Plant
NASA Technical Reports Server (NTRS)
George, Jeffrey A.; Wolinsky, Jason J.; Bilyeu, Michael B.; Scott, John H.
2014-01-01
A combined Nuclear Thermal Rocket (NTR) flight test and Mars Sample Return mission (MSR) is explored as a means of "jump-starting" NTR development. Development of a small-scale engine with relevant fuel and performance could more affordably and quickly "pathfind" the way to larger-scale engines. A flight test with subsequent in-flight post-irradiation evaluation may also be more affordable and expedient compared to ground testing and the associated facilities and approvals. Mission trades and a reference scenario based upon a single expendable launch vehicle (ELV) are discussed. A novel "single stack" spacecraft/lander/ascent vehicle concept is described, configured around a "top-mounted" downward firing NTR, reusable common tank, and "bottom-mount" bus, payload and landing gear. Requirements for a hypothetical NTR engine are described that would be capable of direct thermal propulsion with either hydrogen or methane propellant, and modest electrical power generation during cruise and Mars surface in-situ resource utilization (ISRU) propellant production.
Inferring HIV Escape Rates from Multi-Locus Genotype Data
Kessinger, Taylor A.; Perelson, Alan S.; Neher, Richard A.
2013-09-03
Cytotoxic T-lymphocytes (CTLs) recognize viral protein fragments displayed by major histocompatibility complex molecules on the surface of virally infected cells and generate an anti-viral response that can kill the infected cells. Virus variants whose protein fragments are not efficiently presented on infected cells or whose fragments are presented but not recognized by CTLs therefore have a competitive advantage and spread rapidly through the population. We present a method that allows a more robust estimation of these escape rates from serially sampled sequence data. The proposed method accounts for competition between multiple escapes by explicitly modeling the accumulation of escape mutations and the stochastic effects of rare multiple mutants. Applying our method to serially sampled HIV sequence data, we estimate rates of HIV escape that are substantially larger than those previously reported. The method can be extended to complex escapes that require compensatory mutations. We expect our method to be applicable in other contexts such as cancer evolution where time series data is also available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Overy, Catherine; Blunt, N. S.; Shepherd, James J.
2014-12-28
Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems.
Salt, Alec N; Hale, Shane A; Plontke, Stefan K R
2006-05-15
Measurements of drug levels in the fluids of the inner ear are required to establish kinetic parameters and to determine the influence of specific local delivery protocols. For most substances, this requires cochlear fluid samples to be obtained for analysis. When auditory function is of primary interest, the drug level in the perilymph of scala tympani (ST) is most relevant, since drug in this scala has ready access to the auditory sensory cells. In many prior studies, ST perilymph samples have been obtained from the basal turn, either by aspiration through the round window membrane (RWM) or through an opening in the bony wall. A number of studies have demonstrated that such samples are likely to be contaminated with cerebrospinal fluid (CSF). CSF enters the basal turn of ST through the cochlear aqueduct when the bony capsule is perforated or when fluid is aspirated. The degree of sample contamination has, however, not been widely appreciated. Recent studies have shown that perilymph samples taken through the round window membrane are highly contaminated with CSF, with samples greater than 2 microL in volume containing more CSF than perilymph. In spite of this knowledge, many groups continue to sample from the base of the cochlea, as it is a well-established method. We have developed an alternative, technically simple method to increase the proportion of ST perilymph in a fluid sample. The sample is taken from the apex of the cochlea, a site that is distant from the cochlear aqueduct. A previous problem with sampling through a perforation in the bone was that the native perilymph rapidly leaked out, driven by CSF pressure, and was lost to the middle ear space. We therefore developed a procedure to collect all the fluid that emerged from the apex after perforation. We evaluated the method using the marker ion trimethylphenylammonium (TMPA). TMPA was applied to the perilymph of guinea pigs either by RW irrigation or by microinjection into the apical turn. The TMPA concentration of the fluid sample was compared with that measured in perilymph prior to taking the sample using a TMPA-selective microelectrode sealed into ST. Data were interpreted with a finite element model of the cochlear fluids that was used to simulate each aspect of the experiment. The correction of sample concentration back to the perilymph concentration prior to sampling can be performed based on the known ST volume (4.7 microL in the guinea pig) and the sample volume. A more precise correction requires some knowledge of the profile of drug distribution along the cochlea prior to sampling. This method of sampling from the apex is technically simple and provides a larger sample volume with a greater proportion of perilymph compared to sampling through the RW.
Salt, Alec N.; Hale, Shane A.; Plontke, Stefan K. R.
2006-01-01
Measurements of drug levels in the fluids of the inner ear are required to establish kinetic parameters and to determine the influence of specific local delivery protocols. For most substances, this requires cochlear fluid samples to be obtained for analysis. When auditory function is of primary interest, the drug level in the perilymph of scala tympani (ST) is most relevant, since drug in this scala has ready access to the auditory sensory cells. In many prior studies, ST perilymph samples have been obtained from the basal turn, either by aspiration through the round window membrane (RWM) or through an opening in the bony wall. A number of studies have demonstrated that such samples are likely to be contaminated with cerebrospinal fluid (CSF). CSF enters the basal turn of ST through the cochlear aqueduct when the bony capsule is perforated or when fluid is aspirated. The degree of sample contamination has, however, not been widely appreciated. Recent studies have shown that perilymph samples taken through the round window membrane are highly contaminated with CSF, with samples greater than 2 μL in volume containing more CSF than perilymph. In spite of this knowledge, many groups continue to sample from the base of the cochlea, as it is a well-established method. We have developed an alternative, technically simple method to increase the proportion of ST perilymph in a fluid sample. The sample is taken from the apex of the cochlea, a site that is distant from the cochlear aqueduct. A previous problem with sampling through a perforation in the bone was that the native perilymph rapidly leaked out, driven by CSF pressure, and was lost to the middle ear space. We therefore developed a procedure to collect all the fluid that emerged from the apex after perforation. We evaluated the method using the marker ion trimethylphenylammonium (TMPA). TMPA was applied to the perilymph of guinea pigs either by RW irrigation or by microinjection into the apical turn. The TMPA concentration of the fluid sample was compared with that measured in perilymph prior to taking the sample using a TMPA-selective microelectrode sealed into ST. Data were interpreted with a finite element model of the cochlear fluids that was used to simulate each aspect of the experiment. The correction of sample concentration back to the perilymph concentration prior to sampling can be performed based on the known ST volume (4.7 μL in the guinea pig) and the sample volume. A more precise correction requires some knowledge of the profile of drug distribution along the cochlea prior to sampling. This method of sampling from the apex is technically simple and provides a larger sample volume with a greater proportion of perilymph compared to sampling through the RW. PMID:16310856
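The volume-based correction mentioned in both versions of this abstract can be illustrated with a simple mass-balance sketch. The formula below (marker-free CSF diluting a well-mixed ST volume of about 4.7 µL) is an assumption for illustration, not the authors' finite element treatment.

```python
def corrected_perilymph_conc(sample_conc, sample_vol_ul, st_vol_ul=4.7):
    """Back-correct a measured apex-sample concentration to the original scala
    tympani (ST) perilymph concentration, assuming the marker originally
    confined to ST (~4.7 uL in the guinea pig) is diluted by marker-free CSF
    that follows it out of the cochlea. Simple mass balance only."""
    if sample_vol_ul <= st_vol_ul:
        return sample_conc                     # sample smaller than ST: no dilution assumed
    return sample_conc * sample_vol_ul / st_vol_ul

# Example: a 10 uL apex sample measured at 150 uM TMPA.
print(corrected_perilymph_conc(150.0, 10.0))   # ~319 uM estimated original ST concentration
```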
Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran
2018-06-22
Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP)-heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling, and, therefore, avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude, compared with previous approaches as well as to the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, for example, requiring only several seconds for data sets of tens of thousands of individuals, making FIESTA a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
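The core parametric-bootstrap idea can be sketched generically as below. FIESTA itself adds stochastic-approximation acceleration and an LMM-specific simulator; here the simulator, estimator, and the toy normal-mean example are stand-in assumptions.

```python
import numpy as np

def parametric_bootstrap_ci(estimate, simulate, estimator, n_boot=1000, level=0.95, seed=0):
    """Generic parametric-bootstrap confidence interval in the spirit of the
    approach described above (this is only the bare scheme, not FIESTA).

    estimate  : point estimate of the parameter (e.g., REML heritability)
    simulate  : function(theta, rng) -> synthetic dataset generated under theta
    estimator : function(dataset) -> re-estimated parameter
    """
    rng = np.random.default_rng(seed)
    boots = np.array([estimator(simulate(estimate, rng)) for _ in range(n_boot)])
    lo, hi = np.quantile(boots, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

# Toy usage: CI for the mean of a normal sample (a stand-in for the heritability workflow).
data = np.random.default_rng(1).normal(1.0, 2.0, size=200)
theta_hat = data.mean()
ci = parametric_bootstrap_ci(theta_hat,
                             simulate=lambda th, rng: rng.normal(th, data.std(), size=data.size),
                             estimator=np.mean)
print(theta_hat, ci)
```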
High pressure inertial focusing for separating and concentrating bacteria at high throughput
NASA Astrophysics Data System (ADS)
Cruz, J.; Hooshmand Zadeh, S.; Graells, T.; Andersson, M.; Malmström, J.; Wu, Z. G.; Hjort, K.
2017-08-01
Inertial focusing is a promising microfluidic technology for concentration and separation of particles by size. However, there is a strong correlation of increased pressure with decreased particle size. Theory and experimental results for larger particles were used to scale down the phenomenon and find the conditions that focus 1 µm particles. High pressure experiments in robust glass chips were used to demonstrate the alignment. We show how the technique works for 1 µm spherical polystyrene particles and for Escherichia coli, without harming the bacteria at 50 µl min⁻¹. The potential to focus bacteria, simplicity of use and high throughput make this technology interesting for healthcare applications, where concentration and purification of a sample may be required as an initial step.
Exploring medical students' attitudes towards peer physical examination.
Rees, Charlotte E; Bradley, Paul; McLachlan, John C
2004-02-01
With opportunities for dissection and examination of sick patients decreasing, the role of peer physical examination (PPE) is increasing. This study explores students' attitudes towards PPE and the relationship between attitudes and demographics. A total of 129 first-year medical students from the Peninsula Medical School completed the Examining Fellow Students (EFS) questionnaire. At least 97% of students were comfortable participating in PPE of all body parts except breast and inguinal regions. Over 20% of students were unwilling to participate in PPE of the breast and inguinal regions. Students were more comfortable with PPE within gender than across gender. Females were more likely to be uncomfortable with PPE. Further research with larger sample sizes is required to determine whether attitudes are related to age and religious faith.
Low field magnetocaloric effect in bulk and ribbon alloy La(Fe0.88Si0.12)13
NASA Astrophysics Data System (ADS)
Vuong, Van-Hiep; Do-Thi, Kim-Anh; Nguyen, Duy-Thien; Nguyen, Quang-Hoa; Hoang, Nam-Nhat
2018-03-01
The low-field magnetocaloric effect that occurs in itinerant metamagnetic materials is central to magnetic cooling applications. This work reports the magnetocaloric responses obtained at 1.35 T for the silicon-doped iron-based binary alloy La(Fe0.88Si0.12)13 in bulk and ribbon form. Both samples possess the same symmetry but different crystallite sizes and lattice parameters. The ribbon sample shows a larger maximum entropy change (nearly 8.5 times larger) and a higher Curie temperature (5 K higher) than the bulk sample. The relative cooling power obtained for the ribbon is also larger and very promising for application (RCP = 153 J/kg versus 25.2 J/kg for the bulk). The origin of the observed effect is assigned to a negative magnetovolume effect in the partially crystallized ribbon structure, caused by the rapid cooling during preparation, which induced a smaller crystallite size and a larger lattice constant and hence an overall weaker local crystal field.
Applications of Small Area Estimation to Generalization with Subclassification by Propensity Scores
ERIC Educational Resources Information Center
Chan, Wendy
2018-01-01
Policymakers have grown increasingly interested in how experimental results may generalize to a larger population. However, recently developed propensity score-based methods are limited by small sample sizes, where the experimental study is generalized to a population that is at least 20 times larger. This is particularly problematic for methods…
Livi, Kenneth J T; Villalobos, Mario; Leary, Rowan; Varela, Maria; Barnard, Jon; Villacís-García, Milton; Zanella, Rodolfo; Goodridge, Anna; Midgley, Paul
2017-09-12
Two synthetic goethites of varying crystal size distributions were analyzed by BET, conventional TEM, cryo-TEM, atomic resolution STEM and HRTEM, and electron tomography in order to determine the effects of crystal size, shape, and atomic scale surface roughness on their adsorption capacities. The two samples were determined by BET to have very different site densities based on Cr VI adsorption experiments. Model specific surface areas generated from TEM observations showed that, based on size and shape, there should be little difference in their adsorption capacities. Electron tomography revealed that both samples crystallized with an asymmetric {101} tablet habit. STEM and HRTEM images showed a significant increase in atomic-scale surface roughness of the larger goethite. This difference in roughness was quantified based on measurements of the relative abundances of crystal faces {101} and {210} for the two goethites, and a reactive surface site density was calculated for each goethite. Singly coordinated sites on face {210} are 2.5 times more dense than on face {101}, and the larger goethite showed an average total of 36% {210} as compared to 14% for the smaller goethite. This difference explains the considerably larger adsorption capacity of the larger goethite vs the smaller sample and points toward the necessity of knowing the atomic scale surface structure in predicting mineral adsorption processes.
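The face-weighted site-density argument can be reproduced with a small worked example; the 2.5x ratio and the 36% versus 14% {210} abundances come from the abstract, while the absolute site densities are placeholder assumptions.

```python
def reactive_site_density(frac_210, density_210=7.5, density_101=3.0):
    """Area-weighted density of singly coordinated surface sites (sites/nm^2)
    for a goethite crystal exposing only {210} and {101} faces. The 2.5x ratio
    between the two faces follows the abstract; the absolute values used here
    are placeholders, not measurements from the paper."""
    frac_101 = 1.0 - frac_210
    return frac_210 * density_210 + frac_101 * density_101

# Larger goethite (36% {210}) versus smaller goethite (14% {210}).
print(reactive_site_density(0.36), reactive_site_density(0.14))
```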
Brady, Paul
2016-06-01
The larger the holes, or the greater the number of holes, drilled in the coracoid, the weaker the coracoid becomes. Thus, minimizing bone holes (both size and number) is required to lower the risk of coracoid process fracture in patients in whom transosseous shoulder acromioclavicular joint reconstruction is indicated. A single 2.4-mm-diameter tunnel drilled through both the clavicle and the coracoid lowers the risk of fracture, but the risk cannot be entirely eliminated.
Li, Chunjia; Lu, Xin; Xu, Chaohua; Cai, Qing; Basnayake, Jayapathi; Lakshmanan, Prakash; Ghannoum, Oula; Fan, Yuanhong
2017-01-01
Sugarcane, derived from the hybridization of Saccharum officinarum×Saccharum spontaneum, is a vegetative crop in which the final yield is highly driven by culm biomass production. Cane yield under irrigated or rain-fed conditions could be improved by developing genotypes with leaves that have high intrinsic transpiration efficiency, TEi (CO2 assimilation/stomatal conductance), provided this is not offset by negative impacts from reduced conductance and growth rates. This study was conducted to partition genotypic variation in TEi among a sample of diverse clones from the Chinese collection of sugarcane-related germplasm into that due to variation in stomatal conductance versus that due to variation in photosynthetic capacity. A secondary goal was to define protocols for optimized larger-scale screening of germplasm collections. Genotypic variation in TEi was attributed to significant variation in both stomatal and photosynthetic components. A number of genotypes were found to possess high TEi as a result of high photosynthetic capacity. This trait combination is expected to be of significant breeding value. It was determined that a small number of observations (16) is sufficient for efficiently screening TEi in larger populations of sugarcane genotypes. The research methodology and results reported are encouraging in supporting a larger-scale screening and introgression of high transpiration efficiency in sugarcane breeding. However, further research is required to quantify narrow-sense heritability as well as the leaf-to-field translational potential of genotypic variation in transpiration efficiency-related traits observed in this study. PMID:28444313
Low Cost Heliostat Development Phase II Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kusek, Stephen M.
2013-04-30
The heliostat field in a central receiver plant makes up roughly one half of the total plant cost. As such, cost reductions for the installed heliostat price greatly impact the overall plant cost and hence the plant’s Levelized Cost of Energy. The general trend in heliostat size over the past decades has been to make them larger. One part of our thesis has been that larger and larger heliostats may drive the LCOE up instead of down due to the very nature of the precise aiming and wind-load requirements for typical heliostats. In other words, it requires more and more structure to precisely aim the sunlight at the receiver as one increases heliostat mirror area, and that it becomes counter-productive, cost-wise, at some point.
Metz, Thomas O.; Qian, Wei-Jun; Jacobs, Jon M.; Gritsenko, Marina A.; Moore, Ronald J.; Polpitiya, Ashoka D.; Monroe, Matthew E.; Camp, David G.; Mueller, Patricia W.; Smith, Richard D.
2009-01-01
Novel biomarkers of type 1 diabetes must be identified and validated in initial, exploratory studies before they can be assessed in proficiency evaluations. Currently, untargeted “-omics” approaches are under-utilized in profiling studies of clinical samples. This report describes the evaluation of capillary liquid chromatography (LC) coupled with mass spectrometry (MS) in a pilot proteomic analysis of human plasma and serum from a subset of control and type 1 diabetic individuals enrolled in the Diabetes Autoantibody Standardization Program with the goal of identifying candidate biomarkers of type 1 diabetes. Initial high-resolution capillary LC-MS/MS experiments were performed to augment an existing plasma peptide database, while subsequent LC-FTICR studies identified quantitative differences in the abundance of plasma proteins. Analysis of LC-FTICR proteomic data identified five candidate protein biomarkers of type 1 diabetes. Alpha-2-glycoprotein 1 (zinc), corticosteroid-binding globulin, and lumican were 2-fold up-regulated in type 1 diabetic samples relative to control samples, whereas clusterin and serotransferrin were 2-fold up-regulated in control samples relative to type 1 diabetic samples. Observed perturbations in the levels of all five proteins are consistent with the metabolic aberrations found in type 1 diabetes. While the discovery of these candidate protein biomarkers of type 1 diabetes is encouraging, follow up studies are required for validation in a larger population of individuals and for determination of laboratory-defined sensitivity and specificity values using blinded samples. PMID:18092746
Metz, Thomas O; Qian, Wei-Jun; Jacobs, Jon M; Gritsenko, Marina A; Moore, Ronald J; Polpitiya, Ashoka D; Monroe, Matthew E; Camp, David G; Mueller, Patricia W; Smith, Richard D
2008-02-01
Novel biomarkers of type 1 diabetes must be identified and validated in initial, exploratory studies before they can be assessed in proficiency evaluations. Currently, untargeted "-omics" approaches are underutilized in profiling studies of clinical samples. This report describes the evaluation of capillary liquid chromatography (LC) coupled with mass spectrometry (MS) in a pilot proteomic analysis of human plasma and serum from a subset of control and type 1 diabetic individuals enrolled in the Diabetes Autoantibody Standardization Program, with the goal of identifying candidate biomarkers of type 1 diabetes. Initial high-resolution capillary LC-MS/MS experiments were performed to augment an existing plasma peptide database, while subsequent LC-FTICR studies identified quantitative differences in the abundance of plasma proteins. Analysis of LC-FTICR proteomic data identified five candidate protein biomarkers of type 1 diabetes. alpha-2-Glycoprotein 1 (zinc), corticosteroid-binding globulin, and lumican were 2-fold up-regulated in type 1 diabetic samples relative to control samples, whereas clusterin and serotransferrin were 2-fold up-regulated in control samples relative to type 1 diabetic samples. Observed perturbations in the levels of all five proteins are consistent with the metabolic aberrations found in type 1 diabetes. While the discovery of these candidate protein biomarkers of type 1 diabetes is encouraging, follow up studies are required for validation in a larger population of individuals and for determination of laboratory-defined sensitivity and specificity values using blinded samples.
Development of reaction-sintered SiC mirror for space-borne optics
NASA Astrophysics Data System (ADS)
Yui, Yukari Y.; Kimura, Toshiyoshi; Tange, Yoshio
2017-11-01
We are developing a high-strength reaction-sintered silicon carbide (RS-SiC) mirror as one of the new promising candidates for large-diameter space-borne optics. In order to observe the Earth's surface or atmosphere with high spatial resolution from geostationary orbit, larger-diameter primary mirrors of 1-2 m are required. One of the difficult problems to be solved in realizing such an optical system is obtaining as flat a mirror surface as possible to ensure imaging performance in the infrared, visible, and ultraviolet wavelength regions. This means that homogeneous nano-order surface flatness/roughness is required for the mirror. The high-strength RS-SiC developed and manufactured by TOSHIBA is one of the most excellent and feasible candidates for this purpose. Small RS-SiC plane sample mirrors have been manufactured, and their basic physical parameters and optical performance have been measured. We show the current state of the art of the RS-SiC mirror and the feasibility of a large-diameter RS-SiC mirror for space-borne optics.
Biomarkers for Psychiatry: The Journey from Fantasy to Fact, a Report of the 2013 CINP Think Tank
Millan, Mark J.; Bahn, Sabine; Bertolino, Alessandro; Turck, Christoph W.; Kapur, Shitij; Möller, Hans-Jürgen; Dean, Brian
2015-01-01
Background: A think tank sponsored by the Collegium Internationale Neuro-Psychopharmacologicum (CINP) debated the status and prospects of biological markers for psychiatric disorders, focusing on schizophrenia and major depressive disorder. Methods: Discussions covered markers defining and predicting specific disorders or domains of dysfunction, as well as predicting and monitoring medication efficacy. Deliberations included clinically useful and viable biomarkers, why suitable markers are not available, and the need for tightly-controlled sample collection. Results: Different types of biomarkers, appropriate sensitivity, specificity, and broad-based exploitability were discussed. Whilst a number of candidates are in the discovery phases, all will require replication in larger, real-life cohorts. Clinical cost-effectiveness also needs to be established. Conclusions: Since a single measure is unlikely to suffice, multi-modal strategies look more promising, although they bring greater technical and implementation complexities. Identifying reproducible, robust biomarkers will probably require pre-competitive consortia to provide the resources needed to identify, validate, and develop the relevant clinical tests. PMID:25899066
A national survey of clinical pharmacy services in county hospitals in China.
Yao, Dongning; Xi, Xiaoyu; Huang, Yuankai; Hu, Hao; Hu, Yuanjia; Wang, Yitao; Yao, Wenbing
2017-01-01
Clinical pharmacy is not only a medical science but also an elaborate public health care system firmly related to its subsystems of education, training, qualification authentication, scientific research, management, and human resources. China is a developing country with a tremendous need for improvements in the public health system, including the clinical pharmacy service system. The aim of this research was to evaluate the infrastructure and personnel qualities of clinical pharmacy services in China. The setting was public county hospitals in China. A national survey of clinical pharmacists in county hospitals was conducted, with hospitals selected through a stratified sampling strategy. Responses were analyzed using descriptive and inferential statistics. The main outcome measures include the coverage of clinical pharmacy services, the overall staffing of clinical pharmacists, the software and hardware of clinical pharmacy services, the charge mode of clinical pharmacy services, and the educational background, professional training acquisition, practical experience, and entry path of clinical pharmacists. The overall coverage of clinical pharmacy services on both the department scale (median = 18.25%) and the patient scale (median = 15.38%) does not meet the 100% coverage that is required by the government. In 57.73% of the sample hospitals, the staffing does not meet the requirement, and the size of the clinical pharmacist group is smaller in larger hospitals. In addition, 23.4% of the sample hospitals do not have management rules for the clinical pharmacists, and 43.1% do not have rational drug use software, both of which are required by the government. In terms of fees, 89.9% of the sample hospitals do not charge for the services. With regard to education, 8.5% of respondents lack a qualifying degree, and among respondents with a qualifying degree, 37.31% did not major in a relevant field; 43% of respondents lack the clinical pharmacist training required by the government. Most respondents (93.5%) have a primary or intermediate professional title. The median age and work seniority of respondents are 31 years and four years, respectively. Only 18.5% of respondents chose this occupation out of personal consideration or willingness. The main findings of this research include the overall low coverage of clinical pharmacy services; deficits in clinical pharmacy service software, hardware, and personnel; and a wide variance in the educational training of pharmacists at county hospitals.
Hu, Youxin; Shanjani, Yaser; Toyserkani, Ehsan; Grynpas, Marc; Wang, Rizhi; Pilliar, Robert
2014-02-01
Porous calcium polyphosphate (CPP) structures, proposed as bone-substitute implants, are made by sintering CPP powders. Bending-test samples of approximately 35 vol % porosity were machined from preformed blocks made either by additive manufacturing (AM) or by conventional gravity sintering (CS), and the structure and mechanical characteristics of the samples so made were compared. AM-made samples displayed higher bending strengths (≈1.2-1.4 times greater than CS-made samples), whereas the elastic constant (i.e., the effective elastic modulus of the porous structure), which is determined by the material elastic modulus and the structural geometry of the samples, was ≈1.9-2.3 times greater for AM-made samples. X-ray diffraction analysis showed that samples made by either method displayed the same crystal structure, forming β-CPP after sinter annealing. The material elastic modulus, E, determined using nanoindentation tests also showed the same value for both sample types (i.e., E ≈ 64 GPa). Examination of the porous structures indicated that significantly larger sinter necks formed in the AM-made samples, which presumably accounts for their higher mechanical properties. The difference in mechanical properties was attributed to the different sinter anneal procedures required to make 35 vol % porous samples by the two methods. A primary objective of the present study, in addition to reporting on bending strength and sample stiffness (elastic constant) characteristics, was to determine why the two processes resulted in the observed mechanical property differences for samples of equivalent volume percentage of porosity. An understanding of the fundamental reason(s) for the observed effect is considered important for developing improved processes for preparation of porous CPP implants as bone substitutes for use in high load-bearing skeletal sites.
The role of encoding and attention in facial emotion memory: an EEG investigation.
Brenner, Colleen A; Rumak, Samuel P; Burns, Amy M N; Kieffaber, Paul D
2014-09-01
Facial expressions are encoded via sensory mechanisms, but meaning extraction and salience of these expressions involve cognitive functions. We investigated the time course of sensory encoding and subsequent maintenance in memory via EEG. Twenty-nine healthy participants completed a facial emotion delayed match-to-sample task. P100, N170 and N250 ERPs were measured in response to the first stimulus, and evoked theta power (4-7 Hz) was measured during the delay interval. Negative facial expressions produced larger N170 amplitudes and greater theta power early in the delay. N170 amplitude correlated with theta power; however, larger N170 amplitude coupled with greater theta power predicted behavioural performance for only one of the six emotion conditions tested (very happy; see Supplemental Data). These findings indicate that the N170 ERP may be sensitive to emotional facial expressions when task demands require encoding and retention of this information. Furthermore, sustained theta activity may represent continued attentional processing that supports short-term memory, especially of negative facial stimuli. Further study is needed to investigate the potential influence of these measures, and their interaction, on behavioural performance.
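One common way to quantify evoked theta power in a delay window is to average the single trials first, band-pass the average at 4-7 Hz, and take the squared analytic amplitude. The sketch below does this with SciPy on placeholder data; the sampling rate, one-second window, and random data are assumptions, and the study's actual pipeline may differ.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 250.0                                    # assumed sampling rate (Hz)
    rng = np.random.default_rng(0)
    trials = rng.standard_normal((29, int(fs)))   # placeholder: 29 trials x 1 s delay interval

    erp = trials.mean(axis=0)                     # average first -> "evoked" (phase-locked) activity
    b, a = butter(4, [4.0 / (fs / 2), 7.0 / (fs / 2)], btype="band")
    theta = filtfilt(b, a, erp)                   # 4-7 Hz band-pass of the average waveform
    evoked_theta_power = np.mean(np.abs(hilbert(theta)) ** 2)
    print(f"evoked theta power in the delay window: {evoked_theta_power:.4f} (arb. units)")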
Library Construction from Subnanogram DNA for Pelagic Sea Water and Deep-Sea Sediments
Hirai, Miho; Nishi, Shinro; Tsuda, Miwako; Sunamura, Michinari; Takaki, Yoshihiro; Nunoura, Takuro
2017-01-01
Shotgun metagenomics is a low-biased technology for assessing environmental microbial diversity and function. However, the requirement for a sufficient amount of DNA and the contamination of inhibitors in environmental DNA lead to difficulties in constructing a shotgun metagenomic library. We herein examined metagenomic library construction from subnanogram amounts of input environmental DNA from subarctic surface water and deep-sea sediments using two library construction kits: the KAPA Hyper Prep Kit and the Nextera XT DNA Library Preparation Kit, with several modifications. The influence of chemical contaminants associated with these environmental DNA samples on library construction was also investigated. Overall, shotgun metagenomic libraries were constructed from 1 pg to 1 ng of input DNA using both kits without severe microbial contamination of the libraries. However, the libraries constructed from 1 pg of input DNA exhibited larger biases in GC contents, k-mers, or small subunit (SSU) rRNA gene compositions than those constructed from 10 pg to 1 ng DNA. The lower limit of input DNA for low-biased library construction in this study was 10 pg. Moreover, we revealed that technology-dependent biases (physical fragmentation and linker ligation vs. tagmentation) were larger than those due to the amount of input DNA. PMID:29187708
GIS-based niche modeling for mapping species' habitats
Rotenberry, J.T.; Preston, K.L.; Knick, S.
2006-01-01
Ecological “niche modeling” using presence-only locality data and large-scale environmental variables provides a powerful tool for identifying and mapping suitable habitat for species over large spatial extents. We describe a niche modeling approach that identifies a minimum (rather than an optimum) set of basic habitat requirements for a species, based on the assumption that constant environmental relationships in a species' distribution (i.e., variables that maintain a consistent value where the species occurs) are most likely to be associated with limiting factors. Environmental variables that take on a wide range of values where a species occurs are less informative because they do not limit a species' distribution, at least over the range of variation sampled. This approach is operationalized by partitioning Mahalanobis D2 (standardized difference between values of a set of environmental variables for any point and mean values for those same variables calculated from all points at which a species was detected) into independent components. The smallest of these components represents the linear combination of variables with minimum variance; increasingly larger components represent larger variances and are increasingly less limiting. We illustrate this approach using the California Gnatcatcher (Polioptila californica Brewster) and provide SAS code to implement it.
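Because the covariance matrix of the presence-site environmental data can be eigendecomposed, Mahalanobis D2 splits exactly into independent components, one per principal axis, with the smallest-variance axes corresponding to the putative limiting factors. The following is a minimal Python sketch of that partitioning on synthetic data (the article itself supplies SAS code); the data and variable names are illustrative only.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))                 # presence-site environmental variables (synthetic)

    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(S)              # eigenvalues in ascending order

    def partitioned_d2(points):
        """Per-component contributions to Mahalanobis D2; columns sum to the full D2."""
        z = (np.atleast_2d(points) - mu) @ evecs  # scores on the principal axes
        return (z ** 2) / evals                   # smallest-variance (most limiting) components first

    parts = partitioned_d2(X)
    full_d2 = np.einsum("ij,jk,ik->i", X - mu, np.linalg.inv(S), X - mu)
    print(np.allclose(parts.sum(axis=1), full_d2))   # True: the partition is exact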
Dampier, Carlton D.; Wager, Carrie G.; Harrison, Ryan; Hsu, Lewis L.; Minniti, Caterina P.; Smith, Wally R.
2012-01-01
Clinical trials of sickle cell disease (SCD) pain treatment usually observe only small decrements in pain intensity during the course of hospitalization. Sub-optimal analgesic management and inadequate pain assessment methods are possible explanations for these findings. In a search for better methods for assessing inpatient SCD pain in adults, we examined several pain intensity and interference measures in both arms of a randomized controlled trial comparing two different opioid PCA therapies. Based upon longitudinal analysis of pain episodes, we found that scores from daily average Visual Analogue Scales (VAS) and several other measures, especially the Brief Pain Inventory (BPI), were sensitive to change in daily improvements in pain intensity associated with resolution of vaso-occlusive pain. In this preliminary trial, the low demand, high basal infusion (LDHI) strategy demonstrated faster, larger improvements in various measures of pain than the high demand, low basal infusion (HDLI) strategy for opioid PCA dosing; however, verification in larger studies is required. The measures and statistical approaches used in this analysis may facilitate design, reduce sample size, and improve analyses of treatment response in future SCD clinical trials of vaso-occlusive episodes. PMID:22886853
NASA Technical Reports Server (NTRS)
Thompson, Lucy M.; Spray, John G.
1992-01-01
The Archean Levack Gneisses of the North Range host millimeter-thick veins and centimeter-thick lenses of pseudotachylyte, as well as substantially larger meter-wide, dykelike bodies of pseudotachylytic 'breccia'. The 'breccia' occurs up to several tens of kilometers away from the Sudbury Igneous Complex and is commonly sited within or near joints and other natural weaknesses such as bedding, dyke contacts, and lithological boundaries. The larger 'breccia' dykes comprise a generally dark matrix containing rounded to subrounded and occasionally angular rock fragments derived predominantly from Levack Gneiss. Selected samples of bulk Sudbury Breccia and Sudbury Breccia matrices were chemically analyzed and compared to existing data on the Levack Gneisses and Sudbury Breccia. The matrices are apparently enriched in Fe and, to a lesser extent, Mg, Ti, and Ca compared to the wallrocks and the majority of clasts. This enrichment can be partly explained by the preferential cataclasis and/or frictional melting of hydrous ferromagnesian wallrock minerals, but also appears to require contamination by more basic exotic lithologies. This suggests that certain components of pseudotachylytic Sudbury Breccia have undergone significant transport during their formation.
Klymiuk, Ingeborg; Bambach, Isabella; Patra, Vijaykumar; Trajanoski, Slave; Wolf, Peter
2016-01-01
Microbiome research and improvements in high throughput sequencing technologies revolutionize our current scientific viewpoint. The human associated microbiome is a prominent focus of clinical research. Large cohort studies are often required to investigate the human microbiome composition and its changes in a multitude of human diseases. Reproducible analyses of large cohort samples require standardized protocols in study design, sampling, storage, processing, and data analysis. In particular, the effect of sample storage on actual results is critical for reproducibility. So far, the effect of storage conditions on the results of microbial analysis has been examined for only a few human biological materials (e.g., stool samples). There is a lack of data and information on appropriate storage conditions for other human-derived samples, such as skin. Here, we analyzed skin swab samples collected from three different body locations (forearm, V of the chest and back) of eight healthy volunteers. The skin swabs were soaked in sterile buffer and total DNA was isolated after freezing at -80°C for 24 h, 90 or 365 days. Hypervariable regions V1-2 were amplified from total DNA and libraries were sequenced on an Illumina MiSeq desktop sequencer in paired-end mode. Data were analyzed using Qiime 1.9.1. Summarizing all body locations per time point, we found no significant differences in alpha diversity or multivariate community analysis among the three time points. Considering body locations separately, significant differences in the richness of forearm samples were found between d0 vs. d90 and d90 vs. d365. Among the major skin genera (Propionibacterium, Streptococcus, Bacteroides, Corynebacterium, and Staphylococcus), significant differences in relative abundance were detected only for Bacteroides: among all time points in forearm samples, and between d0 vs. d90 and d90 vs. d365 in V of the chest and back samples. Accordingly, significant differences were detected in the ratios of the main phyla Actinobacteria, Firmicutes, and Bacteroidetes: Actinobacteria vs. Bacteroidetes at d0 vs. d90 (p-value = 0.0234), at d0 vs. d365 (p-value = 0.0234) and d90 vs. d365 (p-value = 0.0234) in forearm samples, and at d90 vs. d365 in V of the chest (p-value = 0.0234) and back samples (p-value = 0.0234). Neither the ratios of Firmicutes vs. Bacteroidetes nor the ratios of Actinobacteria vs. Firmicutes showed significant changes at any body location or time point. Studies with larger sample sizes are required to verify our results and determine long-term storage effects with regard to specific biological questions. PMID:28066342
A Cost Benefit Analysis of Emerging LED Water Purification Systems in Expeditionary Environments
2017-03-23
the initial contingency response phase, ROWPUs are powered by large generators which require relatively large amounts of fossil fuels. The amount of...they attract and cling together forming a larger particle (Chem Treat, 2016). Flocculation is the addition of a polymer to water that clumps...smaller particles together to form larger particles. The idea for both methods is that larger particles will either settle out of or be removed from the
Using pseudoalignment and base quality to accurately quantify microbial community composition
Novembre, John
2018-01-01
Pooled DNA from multiple unknown organisms arises in a variety of contexts, for example microbial samples from ecological or human health research. Determining the composition of pooled samples can be difficult, especially at the scale of modern sequencing data and reference databases. Here we propose a novel method for taxonomic profiling in pooled DNA that combines the speed and low-memory requirements of k-mer based pseudoalignment with a likelihood framework that uses base quality information to better resolve multiply mapped reads. We apply the method to the problem of classifying 16S rRNA reads using a reference database of known organisms, a common challenge in microbiome research. Using simulations, we show the method is accurate across a variety of read lengths, with different length reference sequences, at different sample depths, and when samples contain reads originating from organisms absent from the reference. We also assess performance in real 16S data, where we reanalyze previous genetic association data to show our method discovers a larger number of quantitative trait associations than other widely used methods. We implement our method in the software Karp, for k-mer based analysis of read pools, to provide a novel combination of speed and accuracy that is uniquely suited for enhancing discoveries in microbial studies. PMID:29659582
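The core idea, reassigning multiply mapped reads with a likelihood that incorporates base quality, can be illustrated with a small expectation-maximization loop: per-read, per-taxon likelihoods are built from Phred-derived error probabilities, and taxon proportions are updated from the resulting responsibilities. The sketch below is a toy version of that general scheme, not the Karp implementation; the likelihood matrix is invented for illustration.

    import numpy as np

    # lik[r, t] ~ P(read r | taxon t), e.g. a product over aligned bases of (1 - e_b) for
    # matches and e_b / 3 for mismatches, with e_b = 10 ** (-Q_b / 10) from base quality Q_b.
    lik = np.array([[0.90, 0.10, 0.00],
                    [0.40, 0.60, 0.00],
                    [0.00, 0.50, 0.50],
                    [0.00, 0.20, 0.80]])

    pi = np.full(lik.shape[1], 1.0 / lik.shape[1])   # taxon proportions, uniform start
    for _ in range(200):
        resp = lik * pi                              # E-step: read-to-taxon responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        pi = resp.mean(axis=0)                       # M-step: re-estimate proportions
    print(np.round(pi, 3))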
GHSI EMERGENCY RADIONUCLIDE BIOASSAY LABORATORY NETWORK: SUMMARY OF A RECENT EXERCISE.
Li, Chunsheng; Ansari, Armin; Bartizel, Christine; Battisti, Paolo; Franck, Didier; Gerstmann, Udo; Giardina, Isabella; Guichet, Claude; Hammond, Derek; Hartmann, Martina; Jones, Robert L; Kim, Eunjoo; Ko, Raymond; Morhard, Ryan; Quayle, Deborah; Sadi, Baki; Saunders, David; Paquet, Francois
2016-11-01
The Global Health Security Initiative (GHSI) established a laboratory network within the GHSI community to develop their collective surge capacity for radionuclide bioassay in response to a radiological or nuclear emergency. A recent exercise was conducted to test the participating laboratories for their capabilities in screening and in vitro assay of biological samples, performing internal dose assessment and providing advice on medical intervention, if necessary, using a urine sample spiked with a single radionuclide, 241Am. The laboratories were required to submit their reports according to the exercise schedule and using pre-formatted templates. Generally, the participating laboratories were found to be capable with respect to rapidly screening samples for radionuclide contamination, measuring the radionuclide in the samples, assessing the intake and radiation dose, and providing advice on medical intervention. However, gaps in bioassay measurement and dose assessment have been identified. The network may take steps to ensure that procedures and practices within this network be harmonised and a follow-up exercise be organised on a larger scale, with potential participation of laboratories from the networks coordinated by the International Atomic Energy Agency and the World Health Organization.
Methane Leaks from Natural Gas Systems Follow Extreme Distributions.
Brandt, Adam R; Heath, Garvin A; Cooley, Daniel
2016-11-15
Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH4) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ∼15,000 measurements from 18 prior studies, we show that all available natural gas leakage data sets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used log-normal model distributions, we show that log-normal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH4 emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of data sets to increase sample size is not recommended due to apparent deviation between sampled populations. Understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.
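The headline statistic, that the largest 5% of leaks carry over half the volume, is easy to reproduce for any sample of leak sizes by sorting and summing the upper tail. A minimal sketch follows; the lognormal draw is only a stand-in to generate a heavy-ish tail (the paper itself argues lognormal fits understate the tail), and all numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    leaks = rng.lognormal(mean=0.0, sigma=2.0, size=15_000)   # stand-in leak sizes

    descending = np.sort(leaks)[::-1]
    k = int(0.05 * leaks.size)                                # largest 5% of leaks
    share = descending[:k].sum() / leaks.sum()
    print(f"largest 5% of leaks carry {share:.0%} of total leakage volume")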
Methane Leaks from Natural Gas Systems Follow Extreme Distributions
Brandt, Adam R.; Heath, Garvin A.; Cooley, Daniel
2016-10-14
Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH4) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ~15,000 measurements from 18 prior studies, we show that all available natural gas leakage datasets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used lognormal model distributions, we show that lognormal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH4 emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of datasets to increase sample size is not recommended due to apparent deviation between sampled populations. Finally, understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.
Methane Leaks from Natural Gas Systems Follow Extreme Distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandt, Adam R.; Heath, Garvin A.; Cooley, Daniel
Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH4) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ~15,000 measurements from 18 prior studies, we show that all available natural gas leakage datasets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used lognormal model distributions, we show that lognormal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH4 emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of datasets to increase sample size is not recommended due to apparent deviation between sampled populations. Finally, understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.
Insecticide resistance, control failure likelihood and the First Law of Geography.
Guedes, Raul Narciso C
2017-03-01
Insecticide resistance is a broadly recognized ecological backlash resulting from insecticide use and is widely reported among arthropod pest species with well-recognized underlying mechanisms and consequences. Nonetheless, insecticide resistance is the subject of evolving conceptual views that introduce a different concept, useful if recognized in its own right: the risk or likelihood of control failure. Here we suggest an experimental approach to assess the likelihood of control failure of an insecticide, allowing for consistent decision-making regarding management of insecticide resistance. We also challenge the current emphasis on limited spatial sampling of arthropod populations for resistance diagnosis in favor of comprehensive spatial sampling. This necessarily requires larger population sampling - aiming to use spatial analysis in area-wide surveys - to recognize focal points of insecticide resistance and/or control failure that will better direct management efforts. The continuous geographical scale of such surveys will depend on the arthropod pest species, the pattern of insecticide use and many other potential factors. Regardless, distance dependence among sampling sites should still hold, following the maxim that the closer two things are, the more they resemble each other, which is the basis of Tobler's First Law of Geography.
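Turning area-wide survey points into a map of control-failure likelihood can be done with any spatial interpolator that respects Tobler's distance-decay principle; inverse-distance weighting is the simplest example. The sketch below is a generic illustration with invented site coordinates and values, not a method taken from the paper.

    import numpy as np

    rng = np.random.default_rng(4)
    sites = rng.uniform(0.0, 100.0, size=(30, 2))   # surveyed field locations (km), invented
    failure = rng.uniform(0.0, 1.0, size=30)        # estimated control-failure likelihood per site

    def idw(query, power=2.0):
        """Inverse-distance-weighted estimate at a query point: nearer sites weigh more."""
        d = np.linalg.norm(sites - query, axis=1)
        if np.any(d < 1e-9):                        # query coincides with a sampled site
            return float(failure[np.argmin(d)])
        w = d ** -power
        return float(np.sum(w * failure) / w.sum())

    print(idw(np.array([50.0, 50.0])))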
Bacterial diversity and community structure in lettuce soil are shifted by cultivation time
NASA Astrophysics Data System (ADS)
Liu, Yiqian; Chang, Qing; Guo, Xu; Yi, Xinxin
2017-08-01
Compared with cereal production, vegetable production usually requires a greater degree of management and a larger input of nutrients and irrigation, but these systems are not sustainable in the long term. This study aimed to determine to what extent lettuce cultivation shifts the bacterial community composition in the soil; during lettuce cultivation, pesticides and fertilizers were not applied to the soil. Soil samples were collected from depths of 0-20 cm and 20-40 cm. A high-throughput sequencing approach was employed to investigate bacterial communities in lettuce-cultivated soil samples in a time-dependent manner. The dominant bacteria in the lettuce soil samples were mainly Proteobacteria, Actinobacteria, Chloroflexi, Nitrospirae, Firmicutes, Acidobacteria, Bacteroidetes, Verrucomicrobia, Planctomycetes, Gemmatimonadetes, and Cyanobacteria. Proteobacteria was the most abundant phylum in the six soil samples. The relative abundance of Acidobacteria, Firmicutes, Bacteroidetes, Verrucomicrobia and Cyanobacteria decreased over the course of lettuce cultivation, whereas the relative abundance of Proteobacteria, Actinobacteria, Gemmatimonadetes, Chloroflexi, Planctomycetes and Nitrospirae increased over time. In both the 0-20 cm and 20-40 cm depth soils, a similar pattern was observed: the percentage of OTUs shared only between the early and late stages was lower than that shared between the early and middle stages, indicating that lettuce growth can affect the structure of soil bacterial communities.
Pico-CSIA: Picomolar Scale Compound-Specific Isotope Analyses
NASA Astrophysics Data System (ADS)
Baczynski, A. A.; Polissar, P. J.; Juchelka, D.; Schwieters, J. B.; Hilkert, A.; Freeman, K. H.
2016-12-01
The basic approach to analyzing molecular isotopes has remained largely unchanged since the late 1990s. Conventional compound-specific isotope analyses (CSIA) are conducted using capillary gas chromatography (GC), a combustion interface, and an isotope-ratio mass spectrometer (IRMS). Commercially available GC-IRMS systems are comprised of components with inner diameters ≥0.25 mm and employ helium flow rates of 1-4 mL/min. These flow rates are an order of magnitude larger than what the IRMS can accept. Consequently, ≥90% of the sample is lost through the open split, and 1-10s of nanomoles of carbon are required for analysis. These sample requirements are prohibitive for many biomarkers, which are often present in picomolar concentrations. We utilize the resolving power and low flows of narrow-bore capillary GC to improve the sensitivity of CSIA. Narrow-bore capillary columns (<0.25 mm ID) allow low helium flow rates of ≤0.5 mL/min for more efficient sample transfer to the ion source of the IRMS while maintaining the high linear flow rates necessary to preserve narrow peak widths (∼250 ms). The IRMS has been fitted with collector amplifiers configured to 25 ms response times for rapid data acquisition across narrow peaks. Previous authors (e.g., Sacks et al., 2007) successfully demonstrated improved sensitivity afforded by narrow-bore GC columns. They reported an accuracy and precision of 1.4‰ for peaks with an average width at half maximum of 720 ms for 100 picomoles of carbon on column. Our method builds on their advances and further reduces peak widths (∼600 ms) and the amount of sample lost prior to isotopic analysis. Preliminary experiments with 100 picomoles of carbon on column show an accuracy and standard deviation <1‰. With further improvement, we hope to demonstrate robust isotopic analysis of 10s of picomoles of carbon, more than 2 orders of magnitude lower than commercial systems. The pico-CSIA method affords high-precision isotopic analyses for picomoles of carbon in organic biomarkers, which significantly lowers sample size requirements and broadens analytical windows in paleoclimate, astrobiological, and biogeochemical research.
Surface degassing and modifications to vesicle size distributions in active basalt flows
Cashman, K.V.; Mangan, M.T.; Newman, S.
1994-01-01
The character of the vesicle population in lava flows includes several measurable parameters that may provide important constraints on lava flow dynamics and rheology. Interpretation of vesicle size distributions (VSDs), however, requires an understanding of vesiculation processes in feeder conduits, and of post-eruption modifications to VSDs during transport and emplacement. To this end we collected samples from active basalt flows at Kilauea Volcano: (1) near the effusive Kupaianaha vent; (2) through skylights in the approximately isothermal Wahaula and Kamoamoa tube systems transporting lava to the coast; (3) from surface breakouts at different locations along the lava tubes; and (4) from different locations in a single breakout from a lava tube 1 km from the episode 51 vent at Pu'u 'O'o. Near-vent samples are characterized by VSDs that show exponentially decreasing numbers of vesicles with increasing vesicle size. These size distributions suggest that nucleation and growth of bubbles were continuous during ascent in the conduit, with minor associated bubble coalescence resulting from differential bubble rise. The entire vesicle population can be attributed to shallow exsolution of H2O-dominated gases at rates consistent with those predicted by simple diffusion models. Measurements of H2O, CO2 and S in the matrix glass show that the melt equilibrated rapidly at atmospheric pressure. Down-tube samples maintain similar VSD forms but show a progressive decrease in both overall vesicularity and mean vesicle size. We attribute this change to open-system, "passive" rise and escape of larger bubbles to the surface. Such gas loss from the tube system results in the output of 1.2 × 10^6 g/day SO2, an output representing an addition of approximately 1% to overall volatile budget calculations. A steady increase in bubble number density with downstream distance is best explained by continued bubble nucleation at rates of 7-8 cm^-3 s^-1. Rates are ∼25% of those estimated from the vent samples, and thus represent volatile supersaturations considerably less than those of the conduit. We note also that the small total volume represented by this new bubble population does not: (1) measurably deplete the melt in volatiles; or (2) make up for the overall vesicularity decrease resulting from the loss of larger bubbles. Surface breakout samples have distinctive VSDs characterized by an extreme depletion in the small vesicle population. This results in samples with much lower number densities and larger mean vesicle sizes than corresponding tube samples. Similar VSD patterns have been observed in solidified lava flows and are interpreted to result from either static (wall rupture) or dynamic (bubble rise and capture) coalescence. Through comparison with vent and tube vesicle populations, we suggest that, in addition to coalescence, the observed vesicle populations in the breakout samples have experienced a rapid loss of small vesicles consistent with 'ripening' of the VSD resulting from interbubble diffusion of volatiles. Confinement of ripening features to surface flows suggests that the thin skin that forms on surface breakouts may play a role in the observed VSD modification.
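An exponential vesicle size distribution means the binned number density follows n(L) = n0 exp(-L/L*), so ln n is linear in vesicle size and the characteristic size L* falls out of a straight-line fit. The sketch below shows that fit in Python on invented bin values; the sizes and densities are illustrative, not measurements from the Kilauea samples.

    import numpy as np

    L = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])        # vesicle size bins (mm), invented
    n = np.array([8.0e3, 3.1e3, 1.2e3, 4.6e2, 1.8e2, 7.0e1])  # number densities per bin, invented

    slope, intercept = np.polyfit(L, np.log(n), 1)            # ln n = ln n0 - L / L*
    print(f"L* = {-1.0 / slope:.3f} mm, n0 = {np.exp(intercept):.0f} per unit volume")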
Simulation of Particle Size Effect on Dynamic Properties and Fracture of PTFE-W-Al Composites
NASA Astrophysics Data System (ADS)
Herbold, Eric; Cai, Jing; Benson, David; Nesterenko, Vitali
2007-06-01
Recent investigations of the dynamic compressive strength of cold isostatically pressed (CIP) composites of polytetrafluoroethylene (PTFE), tungsten and aluminum powders show significant differences depending on the size of the metallic particles. PTFE and aluminum mixtures are known to be energetic under dynamic and thermal loading. The addition of tungsten increases the density and overall strength of the sample. Multi-material Eulerian and arbitrary Lagrangian-Eulerian methods were used for the investigation due to the complexity of the microstructure, the relatively large deformations, and the ability to handle the formation of free surfaces in a natural manner. The calculations indicate that the observed dependence of sample strength on particle size is due to the formation of force chains under dynamic loading in samples with small particle sizes, even at larger porosity, in comparison with samples with large grain size and larger density.
Decision Making and Learning while Taking Sequential Risks
ERIC Educational Resources Information Center
Pleskac, Timothy J.
2008-01-01
A sequential risk-taking paradigm used to identify real-world risk takers invokes both learning and decision processes. This article expands the paradigm to a larger class of tasks with different stochastic environments and different learning requirements. Generalizing a Bayesian sequential risk-taking model to the larger set of tasks clarifies…
Stow, Sarah M; Goodwin, Cody R; Kliman, Michal; Bachmann, Brian O; McLean, John A; Lybrand, Terry P
2014-12-04
Ion mobility-mass spectrometry (IM-MS) allows the separation of ionized molecules based on their charge-to-surface area (IM) and mass-to-charge ratio (MS), respectively. The IM drift time data that is obtained is used to calculate the ion-neutral collision cross section (CCS) of the ionized molecule with the neutral drift gas, which is directly related to the ion conformation and hence molecular size and shape. Studying the conformational landscape of these ionized molecules computationally provides interpretation to delineate the potential structures that these CCS values could represent, or conversely, structural motifs not consistent with the IM data. A challenge in the IM-MS community is the ability to rapidly compute conformations to interpret natural product data, a class of molecules exhibiting a broad range of biological activity. The diversity of biological activity is, in part, related to the unique structural characteristics often observed for natural products. Contemporary approaches to structurally interpret IM-MS data for peptides and proteins typically utilize molecular dynamics (MD) simulations to sample conformational space. However, MD calculations are computationally expensive, they require a force field that accurately describes the molecule of interest, and there is no simple metric that indicates when sufficient conformational sampling has been achieved. Distance geometry is a computationally inexpensive approach that creates conformations based on sampling different pairwise distances between the atoms within the molecule and therefore does not require a force field. Progressively larger distance bounds can be used in distance geometry calculations, providing in principle a strategy to assess when all plausible conformations have been sampled. Our results suggest that distance geometry is a computationally efficient and potentially superior strategy for conformational analysis of natural products to interpret gas-phase CCS data.
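Distance-geometry conformer generation of the kind described here is available off the shelf, for example through RDKit's ETKDG embedding, which samples conformations from pairwise distance bounds without running molecular dynamics. The sketch below generates an ensemble for caffeine purely as an illustration; the molecule, conformer count, and optional force-field cleanup step are assumptions, not choices made in the paper.

    from rdkit import Chem
    from rdkit.Chem import AllChem

    # Distance-geometry (ETKDG) conformer ensemble for an example molecule (caffeine).
    mol = Chem.AddHs(Chem.MolFromSmiles("Cn1cnc2c1c(=O)n(C)c(=O)n2C"))
    params = AllChem.ETKDGv3()
    params.randomSeed = 42
    conf_ids = AllChem.EmbedMultipleConfs(mol, numConfs=50, params=params)
    AllChem.MMFFOptimizeMoleculeConfs(mol)   # optional geometry cleanup of the embedded conformers
    print(f"generated {len(conf_ids)} conformers by distance geometry")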
Gordon, Derek; Londono, Douglas; Patel, Payal; Kim, Wonkuk; Finch, Stephen J; Heiman, Gary A
2016-01-01
Our motivation here is to calculate the power of 3 statistical tests used when there are genetic traits that operate under a pleiotropic mode of inheritance and when qualitative phenotypes are defined by use of thresholds for the multiple quantitative phenotypes. Specifically, we formulate a multivariate function that provides the probability that an individual has a vector of specific quantitative trait values conditional on having a risk locus genotype, and we apply thresholds to define qualitative phenotypes (affected, unaffected) and compute penetrances and conditional genotype frequencies based on the multivariate function. We extend the analytic power and minimum-sample-size-necessary (MSSN) formulas for 2 categorical data-based tests (genotype, linear trend test [LTT]) of genetic association to the pleiotropic model. We further compare the MSSN of the genotype test and the LTT with that of a multivariate ANOVA (Pillai). We approximate the MSSN for statistics by linear models using a factorial design and ANOVA. With ANOVA decomposition, we determine which factors most significantly change the power/MSSN for all statistics. Finally, we determine which test statistics have the smallest MSSN. In this work, MSSN calculations are for 2 traits (bivariate distributions) only (for illustrative purposes). We note that the calculations may be extended to address any number of traits. Our key findings are that the genotype test usually has lower MSSN requirements than the LTT. More inclusive thresholds (top/bottom 25% vs. top/bottom 10%) have higher sample size requirements. The Pillai test has a much larger MSSN than both the genotype test and the LTT, as a result of sample selection. With these formulas, researchers can specify how many subjects they must collect to localize genes for pleiotropic phenotypes.
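The penetrance calculation described, the probability of being "affected" when affection is defined by thresholding two correlated quantitative traits given a genotype, reduces to an upper-orthant probability of a bivariate normal. A minimal SciPy sketch follows; the genotype means, correlation, and 10% threshold are invented for illustration and are not the parameter values used in the paper.

    import numpy as np
    from scipy.stats import multivariate_normal

    rho = 0.3
    cov = np.array([[1.0, rho], [rho, 1.0]])
    genotype_means = {"AA": (0.0, 0.0), "Aa": (0.3, 0.3), "aa": (0.6, 0.6)}   # illustrative shifts
    z_cut = 1.2816    # threshold ~ top 10% of each standardized trait

    def penetrance(mean):
        """P(T1 > z_cut and T2 > z_cut | genotype) as an upper-orthant bivariate normal probability."""
        # P(T > c componentwise) = P(-T < -c componentwise), and -T is normal with mean -mu.
        neg = multivariate_normal(mean=[-m for m in mean], cov=cov)
        return float(neg.cdf([-z_cut, -z_cut]))

    for g, m in genotype_means.items():
        print(g, round(penetrance(m), 4))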
Spatial considerations during cryopreservation of a large volume sample.
Kilbride, Peter; Lamb, Stephen; Milne, Stuart; Gibbons, Stephanie; Erro, Eloy; Bundy, James; Selden, Clare; Fuller, Barry; Morris, John
2016-08-01
There have been relatively few studies on the implications of the physical conditions experienced by cells during large volume (litres) cryopreservation - most studies have focused on the problem of cryopreservation of smaller volumes, typically up to 2 ml. This study explores the effects of ice growth by progressive solidification, generally seen during larger scale cryopreservation, on encapsulated liver hepatocyte spheroids, and it develops a method to reliably sample different regions across the frozen cores of samples experiencing progressive solidification. These issues are examined in the context of a Bioartificial Liver Device, which requires cryopreservation of a 2 L volume in a strict cylindrical geometry for optimal clinical delivery. Progressive solidification cannot be avoided in this arrangement. In such a system, optimal cryoprotectant concentrations and cooling rates are known. However, applying these parameters to a large volume is challenging due to the thermal mass and subsequent thermal lag, so their specific impact on the cryopreservation outcome needs to be determined. Under conditions of progressive solidification, the spatial location of encapsulated liver spheroids had a strong impact on post-thaw recovery. Cells in areas first and last to solidify demonstrated significantly impaired post-thaw function, whereas areas solidifying through the majority of the process exhibited better post-thaw outcomes. It was also found that samples in which the ice thawed more rapidly had greater viability 24 h post-thaw (75.7 ± 3.9% vs. 62.0 ± 7.2%, respectively). These findings have implications for the cryopreservation of large volumes with a rigid shape and for the cryopreservation of a Bioartificial Liver Device.
Aseptic Handling of the MOMA Mass Spectrometer After Dry Heat Microbial Reduction
NASA Technical Reports Server (NTRS)
Lalime, Erin
2017-01-01
The Mars Organic Molecule Analyzer Mass Spectrometer (MOMA-MS) is an instrument in the larger MOMA instrument suite for the European Space Agency (ESA) ExoMars 2020 Rover. As a life-detection instrument on a Mars landing mission, MOMA-MS has very stringent Planetary Protection (PP) bioburden requirements. Within the MOMA instrument suite, the hardware surfaces of the sample path must be cleaned to a level of 0.03 spores/sq m. To meet this requirement, a process called Dry Heat Microbial Reduction (DHMR) is used to decrease the number of viable spores by 4 orders of magnitude. Before DHMR, the hardware is handled using standard cleanroom practices, while after DHMR, all sample path surfaces must be handled aseptically when exposed. Aseptic handling of the sample path involves a number of strategies and protocols, including working only in an aseptic ISO class 5 workspace, limiting the exposure time, using sterile garmenting with sterile gloves, and using sterile tools. Before work begins, the aseptic workspace will be tested for bioburden and particle fallout, and all tools that will contact sample path surfaces must be sterilized. During the exposure activity, sterile garments will be worn, sterile tools will be handled in a two-person setup so that the operator touches only the sterile tool and not the exterior surfaces of the sterile pouch, and the environment will be monitored with active and passive fallout sampling for bioburden and particle levels. Any breach in planetary protection cleanliness can necessitate repeating DHMR, which not only has significant cost and schedule implications but also poses a risk to hardware that is not rated for repeated long exposures to high temperatures.
2015-01-01
Ion mobility-mass spectrometry (IM-MS) allows the separation of ionized molecules based on their charge-to-surface area (IM) and mass-to-charge ratio (MS), respectively. The IM drift time data that is obtained is used to calculate the ion-neutral collision cross section (CCS) of the ionized molecule with the neutral drift gas, which is directly related to the ion conformation and hence molecular size and shape. Studying the conformational landscape of these ionized molecules computationally provides interpretation to delineate the potential structures that these CCS values could represent, or conversely, structural motifs not consistent with the IM data. A challenge in the IM-MS community is the ability to rapidly compute conformations to interpret natural product data, a class of molecules exhibiting a broad range of biological activity. The diversity of biological activity is, in part, related to the unique structural characteristics often observed for natural products. Contemporary approaches to structurally interpret IM-MS data for peptides and proteins typically utilize molecular dynamics (MD) simulations to sample conformational space. However, MD calculations are computationally expensive, they require a force field that accurately describes the molecule of interest, and there is no simple metric that indicates when sufficient conformational sampling has been achieved. Distance geometry is a computationally inexpensive approach that creates conformations based on sampling different pairwise distances between the atoms within the molecule and therefore does not require a force field. Progressively larger distance bounds can be used in distance geometry calculations, providing in principle a strategy to assess when all plausible conformations have been sampled. Our results suggest that distance geometry is a computationally efficient and potentially superior strategy for conformational analysis of natural products to interpret gas-phase CCS data. PMID:25360896
1989-01-01
This Uruguayan Decree sets forth regulations on the prevention and fighting of forest fires. Among other things, it does the following: 1) requires all public and private organizations, as well as all persons, to assist personally in and provide vehicles, machines, and tools for the fighting of forest fires; 2) requires the owners of property containing forests to maintain instruction in fighting fires for an adequate number of employees; 3) requires all forests to be kept cleared of vegetation capable of spreading fires and to have fire walls; 4) requires owners of forests larger than 30 hectares in size to present to the Forest Directorate an annual plan for forest fire defense; and 5) requires owners of forests larger than 30 hectares in size to maintain specified equipment for fighting fires. Persons violating the provisions of this Decree are subject to fines.
Sampled-data chain-observer design for a class of delayed nonlinear systems
NASA Astrophysics Data System (ADS)
Kahelras, M.; Ahmed-Ali, T.; Giri, F.; Lamnabhi-Lagarrigue, F.
2018-05-01
The problem of observer design is addressed for a class of triangular nonlinear systems with a not-necessarily-small delay and sampled output measurements. A further difficulty is that the system state matrix depends on the un-delayed output signal, which is not accessible to measurement, making existing observers inapplicable. A new chain observer, composed of m elementary observers in series, is designed to compensate for output sampling and arbitrarily large delays: the larger the time delay, the larger the number m. Each elementary observer includes an output predictor conceived to compensate for the effects of output sampling and a fractional delay. The predictors are defined by first-order ordinary differential equations (ODEs), much simpler than those of existing predictors, which involve both output and state predictors. Using a small-gain type analysis, sufficient conditions for the observer to be exponentially convergent are established in terms of the minimal number m of elementary observers and the maximum sampling interval.
Using long ssDNA polynucleotides to amplify STRs loci in degraded DNA samples
Pérez Santángelo, Agustín; Corti Bielsa, Rodrigo M.; Sala, Andrea; Ginart, Santiago; Corach, Daniel
2017-01-01
Obtaining informative short tandem repeat (STR) profiles from degraded DNA samples is a challenging task usually undermined by locus or allele dropouts and peak-height imbalances observed in capillary electrophoresis (CE) electropherograms, especially for those markers with large amplicon sizes. We hereby show that the current STR assays may be greatly improved for the detection of genetic markers in degraded DNA samples by using long single-stranded DNA polynucleotides (ssDNA polynucleotides) as surrogates for PCR primers. These long primers allow a closer annealing to the repeat sequences, thereby reducing the length of the template required for the amplification in fragmented DNA samples, while at the same time rendering amplicons of larger sizes suitable for multiplex assays. We also demonstrate that the annealing of long ssDNA polynucleotides does not need to be fully complementary in the 5’ region of the primers, thus allowing for the design of practically any long primer sequence for developing new multiplex assays. Furthermore, genotyping of intact DNA samples could also benefit from utilizing long primers since their close annealing to the target STR sequences may overcome wrong profiling generated by insertions/deletions present between the STR region and the annealing site of the primers. Additionally, long ssDNA polynucleotides might be utilized in multiplex PCR assays for other types of degraded or fragmented DNA, e.g. circulating, cell-free DNA (ccfDNA). PMID:29099837
Kashiwagi, Tom; Maxwell, Elisabeth A; Marshall, Andrea D; Christensen, Ana B
2015-01-01
Sharks and rays are increasingly being identified as high-risk species for extinction, prompting urgent assessments of their local or regional populations. Advanced genetic analyses can contribute relevant information on effective population size and connectivity among populations, although acquiring sufficient regional sample sizes can be challenging. DNA is typically amplified from tissue samples, which are collected by hand spears with modified biopsy punch tips. This technique is not always popular, due mainly to a perception that invasive sampling might harm the rays, change their behaviour, or have a negative impact on tourism. To explore alternative methods, we evaluated the yields and PCR success of DNA template prepared from manta ray mucus collected underwater and captured and stored on a Whatman FTA™ Elute card. The pilot study demonstrated that mucus can be effectively collected underwater using a toothbrush. DNA stored on the cards was found to be reliable for PCR-based population genetics studies. We successfully amplified mtDNA ND5, nuclear DNA RAG1, and microsatellite loci for all samples and confirmed that the sequences and genotypes were those of the target species. As the yields of DNA with the tested method were low, further improvements are desirable for assays that may require larger amounts of DNA, such as population genomic studies using emerging next-gen sequencing.
Vitamin D receptor gene and osteoporosis - author's response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Looney, J.E.; Yoon, Hyun Koo; Fischer, M.
1996-04-01
We appreciate the comments of Dr. Nguyen et al. about our recent study, but we disagree with their suggestion that the lack of an association between low bone density and the BB VDR genotype, which we reported, is an artifact generated by the small sample size. Furthermore, our results are consistent with similar conclusions reached by a number of other investigators, as recently reported by Peacock. Peacock states "Taken as a whole, the results of studies outlined ... indicate that VDR alleles cannot account for the major part of the heritable component of bone density as indicated by Morrison et al." The majority of the 17 studies cited in this editorial could not confirm an association between the VDR genotype and the bone phenotype. Surely one cannot criticize this combined work as representing an artifact because of a too small sample size. We do not dispute the suggestion by Nguyen et al. that large sample sizes are required to analyze small biological effects. This is evident in both Peacock's summary and in their own bone density studies. We did not design our study with a larger sample size because, based on the work of Morrison et al., we had hypothesized a large biological effect; large sample sizes are only needed for small biological effects. 4 refs.
Maxwell, Elisabeth A.; Marshall, Andrea D.; Christensen, Ana B.
2015-01-01
Sharks and rays are increasingly being identified as high-risk species for extinction, prompting urgent assessments of their local or regional populations. Advanced genetic analyses can contribute relevant information on effective population size and connectivity among populations, although acquiring sufficient regional sample sizes can be challenging. DNA is typically amplified from tissue samples, which are collected by hand spears with modified biopsy punch tips. This technique is not always popular, due mainly to a perception that invasive sampling might harm the rays, change their behaviour, or have a negative impact on tourism. To explore alternative methods, we evaluated the yields and PCR success of DNA template prepared from manta ray mucus collected underwater and captured and stored on a Whatman FTA™ Elute card. The pilot study demonstrated that mucus can be effectively collected underwater using a toothbrush. DNA stored on the cards was found to be reliable for PCR-based population genetics studies. We successfully amplified mtDNA ND5, nuclear DNA RAG1, and microsatellite loci for all samples and confirmed that the sequences and genotypes were those of the target species. As the yields of DNA with the tested method were low, further improvements are desirable for assays that may require larger amounts of DNA, such as population genomic studies using emerging next-gen sequencing. PMID:26413431
The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education
ERIC Educational Resources Information Center
Slavin, Robert; Smith, Dewi
2009-01-01
Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…
Hennig, Cheryl; Cooper, David
2011-08-01
Histomorphometric aging methods report varying degrees of precision, measured through the Standard Error of the Estimate (SEE). These techniques have been developed from variable sample sizes (n), and the impact of n on reported aging precision has not been rigorously examined in the anthropological literature. This brief communication explores the relation between n and SEE through a review of the literature (abstracts, articles, book chapters, theses, and dissertations), predictions based upon sampling theory, and a simulation. Published SEE values for age prediction, derived from 40 studies, range from 1.51 to 16.48 years (mean 8.63; sd: 3.81 years). In general, these values are widely distributed for smaller samples and the distribution narrows as n increases, a pattern expected from sampling theory. For the two studies that have samples in excess of 200 individuals, the SEE values are very similar (10.08 and 11.10 years) with a mean of 10.59 years. Assuming this mean value is a 'true' characterization of the error at the population level, the 95% confidence intervals for SEE values from samples of 10, 50, and 150 individuals are on the order of ± 4.2, 1.7, and 1.0 years, respectively. While numerous sources of variation potentially affect the precision of different methods, the impact of sample size cannot be overlooked. The uncertainty associated with SEE values derived from smaller samples complicates the comparison of approaches based upon different methodology and/or skeletal elements. Meaningful comparisons require larger samples than have frequently been used and should ideally be based upon standardized samples. Copyright © 2011 Wiley-Liss, Inc.
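The sampling-theory prediction above can be illustrated with a short numerical sketch. Assuming, purely for illustration, that age-prediction residuals are roughly normal, the SEE estimated from n individuals follows a scaled chi-square sampling distribution, and approximate 95% confidence intervals around a 'true' SEE of 10.59 years can be computed for n = 10, 50, and 150. The resulting interval half-widths are of the same order as the ±4.2, 1.7, and 1.0 years quoted above; exact values depend on the distributional assumption.

import numpy as np
from scipy import stats

TRUE_SEE = 10.59  # 'true' population SEE in years, taken from the two largest studies

for n in (10, 50, 150):
    df = n - 1
    # if residuals are ~normal, df * SEE_hat^2 / TRUE_SEE^2 follows chi-square(df)
    lo, hi = stats.chi2.ppf([0.025, 0.975], df)
    see_lo = TRUE_SEE * np.sqrt(lo / df)
    see_hi = TRUE_SEE * np.sqrt(hi / df)
    print(f"n={n:3d}: 95% CI for the estimated SEE is roughly [{see_lo:.1f}, {see_hi:.1f}] years")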
Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.
Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian
2014-01-01
In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
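As a rough illustration of the spherical-code objective (not the authors' MILP or gradient-descent solvers), the greedy incremental variant can be sketched as follows: starting from a dense pool of candidate unit vectors, repeatedly add the candidate whose minimal angular separation from the directions already chosen is largest, treating antipodal directions as equivalent, as is usual for diffusion gradients. Function and variable names here are illustrative only.

import numpy as np

def greedy_spherical_code(candidates, k, seed=0):
    """Greedily pick k directions from `candidates` (unit vectors, shape (m, 3)),
    maximizing the minimal angular separation; antipodes count as the same direction."""
    rng = np.random.default_rng(seed)
    chosen = [candidates[rng.integers(len(candidates))]]
    for _ in range(k - 1):
        C = np.asarray(chosen)
        # angle to the closest already-chosen direction, folding antipodal pairs together
        cos = np.clip(np.abs(candidates @ C.T), 0.0, 1.0)
        min_angle = np.arccos(cos).min(axis=1)
        chosen.append(candidates[int(np.argmax(min_angle))])
    return np.asarray(chosen)

# usage: draw a dense random pool on the unit sphere, then pick 30 well-separated directions
pool = np.random.default_rng(1).normal(size=(5000, 3))
pool /= np.linalg.norm(pool, axis=1, keepdims=True)
print(greedy_spherical_code(pool, k=30).shape)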
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. The RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
Kelly-Cirino, Cassandra D; Curry, Patricia S; Marola, Jamie L; Helstrom, Niels K; Salfinger, Max
2016-11-01
OMNIgene®•SPUTUM (OM-S) is a sputum transport reagent designed to work with all tuberculosis diagnostics and eliminate the need for cold chain. The aim of this preliminary study was to assess the compatibility of OM-S-treated sputum with the Xpert® MTB/RIF assay. Fifty-five characterized sputa from the FIND TB Specimen Bank were used. Compatibility of OM-S was assessed for both Xpert sample preparation methods: H.1 protocol (sediment, n=25) and H.2 protocol (direct expectorate, n=30). All controls were prepared using the H.2 protocol. Results revealed 100% concordance of MTB/RIF results for all except the low-positive group in the H.1 study arm (n=10; 88% concordance). OM-S-treated sputa were successful in both protocols; if the Xpert buffer is not added during the H.2 procedure, sample viscosity may require repeat testing. Using OM-S could offer users flexibility in clinical testing algorithms. Larger compatibility studies are warranted, particularly with respect to MTB/RIF results for low-positive samples. Copyright © 2016 Elsevier Inc. All rights reserved.
Psychometric properties of the communication Confidence Rating Scale for Aphasia (CCRSA): phase 1.
Cherney, Leora R; Babbitt, Edna M; Semik, Patrick; Heinemann, Allen W
2011-01-01
Confidence is a construct that has not been explored previously in aphasia research. We developed the Communication Confidence Rating Scale for Aphasia (CCRSA) to assess confidence in communicating in a variety of activities and evaluated its psychometric properties using rating scale (Rasch) analysis. The CCRSA was administered to 21 individuals with aphasia before and after participation in a computer-based language therapy study. Person reliability of the 8-item CCRSA was .77. The 5-category rating scale demonstrated monotonic increases in average measures from low to high ratings. However, one item ("I follow news, sports, stories on TV/movies") misfit the construct defined by the other items (mean square infit = 1.69, item-measure correlation = .41). Deleting this item improved reliability to .79; the 7 remaining items demonstrated excellent fit to the underlying construct, although there was a modest ceiling effect in this sample. Pre- to posttreatment changes on the 7-item CCRSA measure were statistically significant using a paired samples t test. Findings support the reliability and sensitivity of the CCRSA in assessing participants' self-report of communication confidence. Further evaluation of communication confidence is required with larger and more diverse samples.
Label-Free, Flow-Imaging Methods for Determination of Cell Concentration and Viability.
Sediq, A S; Klem, R; Nejadnik, M R; Meij, P; Jiskoot, Wim
2018-05-30
The aim was to investigate the potential of two flow imaging microscopy (FIM) techniques (Micro-Flow Imaging (MFI) and FlowCAM) to determine total cell concentration and cell viability. B-lineage acute lymphoblastic leukemia (B-ALL) cells of 2 different donors were exposed to ambient conditions. Samples were taken at different days and measured with MFI, FlowCAM, hemocytometry and automated cell counting. Dead and live cells from a fresh B-ALL cell suspension were fractionated by flow cytometry in order to derive software filters based on morphological parameters of the separate cell populations with MFI and FlowCAM. The filter sets were used to assess cell viability in the measured samples. All techniques gave fairly similar cell concentration values over the whole incubation period. MFI proved superior with respect to precision, whereas FlowCAM provided particle images with a higher resolution. Moreover, both FIM methods were able to provide similar results for cell viability as the conventional methods (hemocytometry and automated cell counting). FIM-based methods may be advantageous over conventional cell methods for determining total cell concentration and cell viability, as FIM measures much larger sample volumes, does not require labeling, is less laborious and provides images of individual cells.
A Mission Concept: Re-Entry Hopper-Aero-Space-Craft System on-Mars (REARM-Mars)
NASA Technical Reports Server (NTRS)
Davoodi, Faranak
2013-01-01
Future missions to Mars that would need a sophisticated lander, hopper, or rover could benefit from the REARM Architecture. The REARM Architecture mission concept is designed to provide unprecedented capabilities for future Mars exploration missions, including human exploration and possible sample-return missions, as a reusable lander, ascent/descent vehicle, refuelable hopper, multiple-location sample-return collector, laboratory, and a cargo system for assets and humans. These capabilities would all be made possible by adding just a single customized Re-Entry-Hopper-Aero-Space-Craft System, called REARM-spacecraft, and a docking station in Martian orbit, called REARM-dock. REARM could dramatically decrease the time and the expense required to launch new exploratory missions on Mars by making them less dependent on Earth and by reusing the assets already designed, built, and sent to Mars. REARM would introduce a new class of Mars exploration missions, which could explore much larger expanses of Mars in a much faster fashion and with much more sophisticated lab instruments. The proposed REARM architecture consists of the following subsystems: REARM-dock, REARM-spacecraft, sky-crane, secure-attached-compartment, sample-return container, agile rover, scalable orbital lab, and on-the-road robotic handymen.
Endo, Tetsuya; Hisamichi, Yohsuke; Kimura, Osamu; Kotaki, Yuichi; Kato, Yoshihisa; Ohta, Chiho; Koga, Nobuyuki; Haraguchi, Koichi
2009-11-01
We analyzed the total mercury (T-Hg) and stable isotopes of 13C and 15N in the muscle of spiny dogfish (Squalus acanthias) caught off the coast of Japan. The average body length of the female spiny dogfish sampled (94.9±20.2 cm, 50.5-131.0 cm, n=40) was significantly larger than that of the males sampled (77.8±10.8 cm, 55.5-94.0 cm, n=35), although the ages of the samples were unknown. The T-Hg concentration in the muscle samples rapidly increased after maturity in the females (larger than about 120 cm) and males (larger than about 90 cm), followed by a continued gradual increase. The contamination level of T-Hg in female muscle samples (0.387±0.378 μg (wet g)-1, n=40) was slightly higher than that in male muscle samples (0.316±0.202 μg (wet g)-1, n=35), probably due to the greater longevity of females. In contrast, the contamination level of T-Hg in females smaller than 94.0 cm in length (0.204±0.098 μg (wet g)-1, n=20) was slightly lower than that in the males, probably due to the faster growth rate of females. Although the δ13C and δ15N values in the muscle samples increased with an increase in body length, there were no significant differences between the females (-17.2±0.4‰ and 12.4±0.9‰, respectively) and males (-17.3±0.4‰ and 12.4±0.8‰, respectively). A positive correlation was found between the δ13C and δ15N values, suggesting trophic enrichment with growth.
A QUICK KEY TO THE SUBFAMILIES AND GENERA OF ANTS OF THE SAVANNAH RIVER SITE, AIKEN, SC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, D
2006-10-04
This taxonomic key was devised to support development of a Rapid Bioassessment Protocol using ants at the Savannah River Site. The emphasis is on "rapid" and, because the available keys contained a large number of genera not known to occur at the Savannah River Site, we found that the available keys were unwieldy. Because these keys contained more genera than we would likely encounter, and because this larger number of genera required both more couplets in the key and often required examination of characters that are difficult to assess without higher magnifications (60X or higher), more time was required to process samples. In developing this set of keys I recognize that the character sets used may lead to some errors, but I believe that the error rate will be small and, for the purpose of rapid bioassessment, acceptable provided that overall sample sizes are adequate. Oliver and Beattie (1996a, 1996b) found that for rapid assessment of biodiversity the same results were obtained when identifications were done to morphospecies by people with minimal expertise as when the same data sets were identified by subject matter experts. Basset et al. (2004) concluded that it was not as important to correctly identify all species as it was to be sure that the study included as many functional groups as possible. If your study requires high levels of accuracy, it is highly recommended that when you key out a specimen and have any doubts concerning the identification, you refer to keys in Bolton (1994) or to the other keys used to develop this area-specific taxonomic key.
An improved initialization center k-means clustering algorithm based on distance and density
NASA Astrophysics Data System (ADS)
Duan, Yanling; Liu, Qun; Xia, Shuyin
2018-04-01
To address the problem that the randomly chosen initial cluster centers of the k-means algorithm make the clustering results sensitive to outlier samples and unstable across repeated runs, a center initialization method based on larger distance and higher density is proposed. The reciprocal of the weighted average distance is used to represent sample density, and data samples with larger distance and higher density are selected as the initial cluster centers to optimize the clustering results. A clustering evaluation method based on distance and density is then designed to verify the feasibility and practicality of the algorithm; experimental results on UCI data sets show that the algorithm is stable and practical.
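The abstract does not give the exact formulas, so the sketch below is only one plausible reading of the idea: score each sample by a density term (here the reciprocal of its mean distance to all other samples) and then pick initial centers that combine high density with large distance to the centers already chosen. The specific weighting and the product-based selection rule are assumptions for illustration, not the authors' algorithm.

import numpy as np

def init_centers_distance_density(X, k):
    # pairwise Euclidean distances between all samples
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    density = 1.0 / (D.mean(axis=1) + 1e-12)      # higher value = denser neighbourhood
    centers = [int(np.argmax(density))]           # start from the densest sample
    for _ in range(k - 1):
        dist_to_centers = D[:, centers].min(axis=1)
        score = dist_to_centers * density         # favour points that are far away AND dense
        score[centers] = -np.inf                  # never reselect an existing center
        centers.append(int(np.argmax(score)))
    return X[centers]

# usage: feed the returned centers to an ordinary k-means implementation as its initialization
X = np.random.default_rng(0).normal(size=(200, 2))
print(init_centers_distance_density(X, k=3))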
Does body size affect a bird's sensitivity to patch size and landscape structure?
Winter, Maiken; Johnson, Douglas H.; Shaffer, Jill A.
2006-01-01
Larger birds are generally more strongly affected by habitat loss and fragmentation than are smaller ones because they require more resources and thus larger habitat patches. Consequently, conservation actions often favor the creation or protection of larger over smaller patches. However, in grassland systems the boundaries between a patch and the surrounding landscape, and thus the perceived size of a patch, can be indistinct. We investigated whether eight grassland bird species with different body sizes perceived variation in patch size and landscape structure in a consistent manner. Data were collected from surveys conducted in 44 patches of northern tallgrass prairie during 1998–2001. The response to patch size was very similar among species regardless of body size (density was little affected by patch size), except in the Greater Prairie-Chicken (Tympanuchus cupido), which showed a threshold effect and was not found in patches smaller than 140 ha. In landscapes containing 0%–30% woody vegetation, smaller species responded more negatively to increases in the percentage of woody vegetation than larger species, but above an apparent threshold of 30%, larger species were not detected. Further analyses revealed that the observed variation in responses to patch size and landscape structure among species was not solely due to body size per se, but to other differences among species. These results indicate that a stringent application of concepts requiring larger habitat patches for larger species appears to limit the number of grassland habitats that can be protected and may not always be the most effective conservation strategy.
Migaszewski, Z.M.; Lamothe, P.J.; Crock, J.G.; Galuszka, A.; Dolegowska, S.
2011-01-01
Trace element concentrations in plant bioindicators are often determined to assess the quality of the environment. Instrumental methods used for trace element determination require digestion of samples. There are different methods of sample preparation for trace element analysis, and the selection of the best method should be fitted for the purpose of a study. Our hypothesis is that the method of sample preparation is important for interpretation of the results. Here we compare the results of 36 element determinations performed by ICP-MS on ashed and on acid-digested (HNO3, H2O2) samples of two moss species (Hylocomium splendens and Pleurozium schreberi) collected in Alaska and in south-central Poland. We found that dry ashing of the moss samples prior to analysis resulted in considerably lower detection limits of all the elements examined. We also show that this sample preparation technique facilitated the determination of interregional and interspecies differences in the chemistry of trace elements. Compared to the Polish mosses, the Alaskan mosses displayed more positive correlations of the major rock-forming elements with ash content, reflecting those elements' geogenic origin. Of the two moss species, P. schreberi from both Alaska and Poland was also highlighted by a larger number of positive element pair correlations. The cluster analysis suggests that the more uniform element distribution pattern of the Polish mosses primarily reflects regional air pollution sources. Our study has shown that the method of sample preparation is an important factor in statistical interpretation of the results of trace element determinations. © 2010 Springer-Verlag.
Perilymph composition in scala tympani of the cochlea: influence of cerebrospinal fluid.
Hara, A; Salt, A N; Thalmann, R
1989-11-01
A commonly used technique to obtain cochlear perilymph for analysis has been the aspiration of samples through the round window membrane. The present study has investigated the influence of the volume withdrawn on sample composition in the guinea pig. Samples of less than 200 nl in volume taken through the round window showed relatively high glycine content, comparable to the level found in samples taken from scala vestibuli. If larger volumes are withdrawn, lower glycine levels are observed. This is consistent with cerebrospinal fluid (having a low glycine content) being drawn into scala tympani through the cochlear aqueduct and contaminating the sample. The existence of a concentration difference for glycine between scala tympani perilymph and cerebrospinal fluid suggests the physiologic communication across the cochlear aqueduct is relatively small in this species. The observation of considerable exchange between cerebrospinal fluid and perilymph, as reported in some studies, is more likely to be an artifact of the experimental procedures, rather than of physiologic significance. Alternative sampling procedures have been evaluated which allow larger volumes of uncontaminated scala tympani perilymph to be collected.
Katzka, David A; Geno, Debra M; Ravi, Anupama; Smyrk, Thomas C; Lao-Sirieix, Pierre; Miremadi, Ahmed; Miramedi, Ahmed; Debiram, Irene; O'Donovan, Maria; Kita, Hirohito; Kephart, Gail M; Kryzer, Lori A; Camilleri, Michael; Alexander, Jeffrey A; Fitzgerald, Rebecca C
2015-01-01
Management of eosinophilic esophagitis (EoE) requires repeated endoscopic collection of mucosal samples to assess disease activity and response to therapy. An easier and less expensive means of monitoring of EoE is required. We compared the accuracy, safety, and tolerability of sample collection via Cytosponge (an ingestible gelatin capsule comprising compressed mesh attached to a string) with those of endoscopy for assessment of EoE. Esophageal tissues were collected from 20 patients with EoE (all with dysphagia, 15 with stricture, 13 with active EoE) via Cytosponge and then by endoscopy. Number of eosinophils/high-power field and levels of eosinophil-derived neurotoxin were determined; hematoxylin-eosin staining was performed. We compared the adequacy, diagnostic accuracy, safety, and patient preference for sample collection via Cytosponge vs endoscopy procedures. All 20 samples collected by Cytosponge were adequate for analysis. By using a cutoff value of 15 eosinophils/high power field, analysis of samples collected by Cytosponge identified 11 of the 13 individuals with active EoE (83%); additional features such as abscesses were also identified. Numbers of eosinophils in samples collected by Cytosponge correlated with those in samples collected by endoscopy (r = 0.50, P = .025). Analysis of tissues collected by Cytosponge identified 4 of the 7 patients without active EoE (57% specificity), as well as 3 cases of active EoE not identified by analysis of endoscopy samples. Including information on level of eosinophil-derived neurotoxin did not increase the accuracy of diagnosis. No complications occurred during the Cytosponge procedure, which was preferred by all patients, compared with endoscopy. In a feasibility study, the Cytosponge is a safe and well-tolerated method for collecting near mucosal specimens. Analysis of numbers of eosinophils/high-power field identified patients with active EoE with 83% sensitivity. Larger studies are needed to establish the efficacy and safety of this method of esophageal tissue collection. ClinicalTrials.gov number: NCT01585103. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.
Teaching self-control to small groups of dually diagnosed adults.
Dixon, M R; Holcomb, S
2000-01-01
The present study examined the use of a progressive delay procedure to teach self-control to two groups of dually diagnosed adults. When given a choice between an immediate smaller reinforcer and a larger delayed reinforcer, both groups chose the smaller reinforcer during baseline. During treatment, progressive increases in work requirements for gaining access to a larger reinforcer resulted in both groups selecting larger delayed reinforcers. The results are discussed with respect to increasing cooperative work behavior and self-control.
Stullken, L.E.; Stamer, J.K.; Carr, J.E.
1987-01-01
The High Plains of western Kansas was one of 14 areas selected for preliminary groundwater quality reconnaissance by the U.S. Geological Survey's Toxic Waste--Groundwater Contamination Program. The specific objective was to evaluate the effects of land used for agriculture (irrigated cropland and non-irrigated rangeland) on the water in the High Plains aquifer. Conceptual inferences, based on the information available, would lead one to expect groundwater beneath irrigated cropland to contain larger concentrations of sodium, sulfate, chloride, nitrite plus nitrate, and some water-soluble pesticides than water beneath non-irrigated land (rangeland). The central part of the Great Bend Prairie, an area of about 1,800 sq mi overlying the High Plains aquifer in south-central Kansas, was selected for the study of agricultural land use because it has sandy soils, a shallow water table, relatively large annual precipitation, and includes large areas that are exclusively irrigated cropland or non-irrigated rangeland. As determined by a two-tailed Wilcoxon rank-sum test, concentrations of sodium and alkalinity were significantly larger at the 95% confidence level for water samples from beneath irrigated cropland than from beneath rangeland. No statistically significant difference in concentrations of sulfate, chloride, nitrite plus nitrate, and ammonia was detected. Concentrations of 2,4-D found in water samples from beneath the rangeland were larger at the 99% confidence level as compared to concentrations of 2,4-D in samples from beneath irrigated cropland. Larger concentrations of sodium and alkalinity were found in water beneath irrigated cropland, and the largest concentration of the pesticide atrazine (triazines were found in three samples) was found in water from the only irrigation well sampled. The sodium and atrazine concentrations found in water from the irrigation well support the premise that water-level drawdown develops under irrigated fields. This diverts the natural groundwater flow patterns, so that pumpage may cause recycling and subsequent concentration of leachates from the land surface. (Author's abstract)
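The two-tailed Wilcoxon rank-sum comparison used above is straightforward to reproduce for any pair of land-use groups; the concentrations below are made-up placeholders, not the study's data.

import numpy as np
from scipy import stats

# hypothetical sodium concentrations (mg/L) for the two land uses (placeholder values only)
irrigated = np.array([48, 55, 62, 51, 70, 66, 59, 73])
rangeland = np.array([35, 41, 38, 44, 40, 36, 47, 39])

# two-tailed Wilcoxon rank-sum (Mann-Whitney) test; significance judged at the 95% level
stat, p = stats.ranksums(irrigated, rangeland)
print(f"rank-sum statistic = {stat:.2f}, two-tailed p = {p:.4f}")
print("significant at the 95% confidence level" if p < 0.05 else "not significant")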
The centennial Evolution of Geomagnetic Activity revisited
NASA Astrophysics Data System (ADS)
Mursula, K.; Martini, D.
Geomagnetic activity is one of the most important heliospheric parameters and the most reliable indicator of decadal and centennial changes in solar activity. Here we study the centennial change in geomagnetic activity using the newly proposed IHV (Inter-Hour Variability) index. We correct the earlier estimates of the centennial increase by taking into account the fact that the sampling of the magnetic field changed from one sample per hour to hourly means in the first years of the previous century. Since the IHV index is a variability index, the larger variability in the case of hourly sampling leads, without due correction, to excessively large values in the beginning of the century and an underestimated centennial increase. We discuss two ways to extract the necessary sampling calibration factors and show that they agree very well with each other. The effect of calibration is especially large at the mid-latitude CLH FRD station, where the centennial increase changes from only 6 to 24-25 due to calibration. Sampling calibration also leads to a larger centennial increase of global geomagnetic activity based on the IHV index. The results verify a significant centennial increase in global geomagnetic activity, in qualitative agreement with the aa index, although a quantitative comparison is not warranted. We also find that the centennial increase has a rather strong and curious latitudinal dependence: it is largest at high latitudes and, quite unexpectedly, it is larger at low than at mid-latitudes. These new findings indicate interesting long-term changes in the
Conservation triage or injurious neglect in endangered species recovery
Gerber, Leah R.
2016-01-01
Listing endangered and threatened species under the US Endangered Species Act is presumed to offer a defense against extinction and a solution to achieve recovery of imperiled populations, but only if effective conservation action ensues after listing occurs. The amount of government funding available for species protection and recovery is one of the best predictors of successful recovery; however, government spending is both insufficient and highly disproportionate among groups of species, and there is significant discrepancy between proposed and actualized budgets across species. In light of an increasing list of imperiled species requiring evaluation and protection, an explicit approach to allocating recovery funds is urgently needed. Here I provide a formal decision-theoretic approach focusing on return on investment as an objective and a transparent mechanism to achieve the desired recovery goals. I found that less than 25% of the $1.21 billion/year needed for implementing recovery plans for 1,125 species is actually allocated to recovery. Spending in excess of the recommended recovery budget does not necessarily translate into better conservation outcomes. Rather, elimination of only the budget surplus for “costly yet futile” recovery plans can provide sufficient funding to erase funding deficits for more than 180 species. Triage by budget compression provides better funding for a larger sample of species, and a larger sample of adequately funded recovery plans should produce better outcomes even if by chance. Sharpening our focus on deliberate decision making offers the potential to achieve desired outcomes in avoiding extinction for Endangered Species Act-listed species. PMID:26976572
A Coffee Ring Aptasensor for Rapid Protein Detection
Wen, Jessica T.; Ho, Chih-Ming; Lillehoj, Peter B.
2013-01-01
We introduce a new biosensing platform for rapid protein detection that combines one of the simplest methods for biomolecular concentration, coffee ring formation, with a sensitive aptamer-based optical detection scheme. In this approach, aptamer beacons are utilized for signal transduction where a fluorescence signal is emitted in the presence of the target molecule. Signal amplification is achieved by concentrating aptamer-target complexes within liquid droplets, resulting in the formation of coffee ring “spots”. Surfaces with various chemical coatings were utilized to investigate the correlation between surface hydrophobicity, concentration efficiency and signal amplification. Based on our results, we found that the increase in coffee ring diameter with larger droplet volumes is independent of surface hydrophobicity. Furthermore, we show that highly hydrophobic surfaces produce enhanced particle concentration, via coffee ring formation, resulting in signal intensities 6-fold greater than those on hydrophilic surfaces. To validate this biosensing platform for the detection of clinical samples, we detected α-thrombin in human serum and 4x diluted whole blood. Based on our results, coffee ring spots produced detection signals 40x larger than samples in liquid droplets. Additionally, this biosensor exhibits a lower limit of detection of 2 ng/mL (54 pM) in serum, and 4 ng/mL (105 pM) in blood. Based on its simplicity and high performance, this platform demonstrates immense potential as an inexpensive diagnostic tool for the detection of disease biomarkers, particularly for use in developing countries that lack the resources and facilities required for conventional biodetection practices. PMID:23540796
Modularity of Protein Folds as a Tool for Template-Free Modeling of Structures.
Vallat, Brinda; Madrid-Aliste, Carlos; Fiser, Andras
2015-08-01
Predicting the three-dimensional structure of proteins from their amino acid sequences remains a challenging problem in molecular biology. While the current structural coverage of proteins is almost exclusively provided by template-based techniques, the modeling of the rest of the protein sequences increasingly requires template-free methods. However, template-free modeling methods are much less reliable and are usually applicable to smaller proteins, leaving much space for improvement. We present here a novel computational method that uses a library of supersecondary structure fragments, known as Smotifs, to model protein structures. The library of Smotifs has saturated over time, providing a theoretical foundation for efficient modeling. The method relies on weak sequence signals from remotely related protein structures to create a library of Smotif fragments specific to the target protein sequence. This Smotif library is exploited in a fragment assembly protocol to sample decoys, which are assessed by a composite scoring function. Since the Smotif fragments are larger in size compared to the ones used in other fragment-based methods, the proposed modeling algorithm, SmotifTF, can employ an exhaustive sampling during decoy assembly. SmotifTF successfully predicts the overall fold of the target proteins in about 50% of the test cases and performs competitively when compared to other state-of-the-art prediction methods, especially when the sequence signal to remote homologs is diminishing. Smotif-based modeling is complementary to current prediction methods and provides a promising direction in addressing the structure prediction problem, especially when targeting larger proteins for modeling.
Sia, I G; Wilson, J A; Espy, M J; Paya, C V; Smith, T F
2000-02-01
Detection of cytomegalovirus (CMV) DNA in blood by PCR is a sensitive method for the detection of infection in patients posttransplantation. The test, however, has low specificity for the identification of overt CMV disease. Quantitative CMV PCR has been shown to overcome this shortcoming. The COBAS AMPLICOR CMV MONITOR test was evaluated by using consecutive serum and peripheral blood mononuclear cell (PBMN) samples from liver transplant patients. Twenty-five patients had CMV viremia (by shell vial cell culture assay) and/or tissue-invasive disease (by biopsy); 20 had no active infection. A total of 262 serum and 62 PBMN specimens were tested. Of 159 serum specimens from patients with overt CMV infection, the COBAS assay detected CMV DNA in 21 patients (sensitivity, 84%). Only 1 of 103 samples from patients with no evidence of active infection had detectable CMV DNA (341 copies/ml). By comparison of 62 matching serum and PBMN samples by the same assay, 12 PBMN samples were exclusively positive, whereas only 2 serum samples were exclusively positive (P < 0.05). At the time of clinical CMV infection, viral copy numbers were higher in PBMNs than serum from four of five patients. The COBAS AMPLICOR CMV MONITOR test is a sensitive and specific test for the quantitative detection of CMV DNA in blood. Clinical applications of the assay will require further validation with samples from a larger population of transplant patients.
Sia, Irene G.; Wilson, Jennie A.; Espy, Mark J.; Paya, Carlos V.; Smith, Thomas F.
2000-01-01
Detection of cytomegalovirus (CMV) DNA in blood by PCR is a sensitive method for the detection of infection in patients posttransplantation. The test, however, has low specificity for the identification of overt CMV disease. Quantitative CMV PCR has been shown to overcome this shortcoming. The COBAS AMPLICOR CMV MONITOR test was evaluated by using consecutive serum and peripheral blood mononuclear cell (PBMN) samples from liver transplant patients. Twenty-five patients had CMV viremia (by shell vial cell culture assay) and/or tissue-invasive disease (by biopsy); 20 had no active infection. A total of 262 serum and 62 PBMN specimens were tested. Of 159 serum specimens from patients with overt CMV infection, the COBAS assay detected CMV DNA in 21 patients (sensitivity, 84%). Only 1 of 103 samples from patients with no evidence of active infection had detectable CMV DNA (341 copies/ml). By comparison of 62 matching serum and PBMN samples by the same assay, 12 PBMN samples were exclusively positive, whereas only 2 serum samples were exclusively positive (P < 0.05). At the time of clinical CMV infection, viral copy numbers were higher in PBMNs than serum from four of five patients. The COBAS AMPLICOR CMV MONITOR test is a sensitive and specific test for the quantitative detection of CMV DNA in blood. Clinical applications of the assay will require further validation with samples from a larger population of transplant patients. PMID:10655353
Sampling algorithms for validation of supervised learning models for Ising-like systems
NASA Astrophysics Data System (ADS)
Portman, Nataliya; Tamblyn, Isaac
2017-12-01
In this paper, we build and explore supervised learning models of ferromagnetic system behavior, using Monte-Carlo sampling of the spin configuration space generated by the 2D Ising model. Given the enormous size of the space of all possible Ising model realizations, the question arises as to how to choose a reasonable number of samples that will form physically meaningful and non-intersecting training and testing datasets. Here, we propose a sampling technique called "ID-MH" that uses the Metropolis-Hastings algorithm to create a Markov process across energy levels within the predefined configuration subspace. We show that application of this method retains phase transitions in both training and testing datasets and serves the purpose of validation of a machine learning algorithm. For larger lattice dimensions, ID-MH is not feasible as it requires knowledge of the complete configuration space. As such, we develop a new "block-ID" sampling strategy: it decomposes the given structure into square blocks with lattice dimension N ≤ 5 and uses ID-MH sampling of candidate blocks. Further comparison of the performance of commonly used machine learning methods such as random forests, decision trees, k nearest neighbors and artificial neural networks shows that the PCA-based Decision Tree regressor is the most accurate predictor of magnetizations of the Ising model. For energies, however, the accuracy of prediction is not satisfactory, highlighting the need to consider more algorithmically complex methods (e.g., deep learning).
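The configuration-space sampling underlying this work is Metropolis-Hastings on the 2D Ising model; the energy-level and block bookkeeping of "ID-MH" and "block-ID" is specific to the paper and not reproduced here. A minimal single-spin-flip sampler, with an assumed lattice size and inverse temperature, looks roughly like this:

import numpy as np

def metropolis_ising(L=16, beta=0.4, n_sweeps=500, seed=0):
    # single-spin-flip Metropolis sampling of an L x L Ising lattice with periodic boundaries
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(n_sweeps * L * L):
        i, j = rng.integers(L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nn                 # energy change if this spin is flipped
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    magnetization = spins.mean()
    energy = -np.mean(spins * (np.roll(spins, 1, 0) + np.roll(spins, 1, 1)))
    return magnetization, energy

print(metropolis_ising())                         # (magnetization per spin, energy per spin)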
Calculation of absolute protein-ligand binding free energy using distributed replica sampling.
Rodinger, Tomas; Howell, P Lynne; Pomès, Régis
2008-10-21
Distributed replica sampling [T. Rodinger et al., J. Chem. Theory Comput. 2, 725 (2006)] is a simple and general scheme for Boltzmann sampling of conformational space by computer simulation in which multiple replicas of the system undergo a random walk in reaction coordinate or temperature space. Individual replicas are linked through a generalized Hamiltonian containing an extra potential energy term or bias which depends on the distribution of all replicas, thus enforcing the desired sampling distribution along the coordinate or parameter of interest regardless of free energy barriers. In contrast to replica exchange methods, efficient implementation of the algorithm does not require synchronicity of the individual simulations. The algorithm is inherently suited for large-scale simulations using shared or heterogeneous computing platforms such as a distributed network. In this work, we build on our original algorithm by introducing Boltzmann-weighted jumping, which allows moves of a larger magnitude and thus enhances sampling efficiency along the reaction coordinate. The approach is demonstrated using a realistic and biologically relevant application; we calculate the standard binding free energy of benzene to the L99A mutant of T4 lysozyme. Distributed replica sampling is used in conjunction with thermodynamic integration to compute the potential of mean force for extracting the ligand from protein and solvent along a nonphysical spatial coordinate. Dynamic treatment of the reaction coordinate leads to faster statistical convergence of the potential of mean force than a conventional static coordinate, which suffers from slow transitions on a rugged potential energy surface.
Calculation of absolute protein-ligand binding free energy using distributed replica sampling
NASA Astrophysics Data System (ADS)
Rodinger, Tomas; Howell, P. Lynne; Pomès, Régis
2008-10-01
Distributed replica sampling [T. Rodinger et al., J. Chem. Theory Comput. 2, 725 (2006)] is a simple and general scheme for Boltzmann sampling of conformational space by computer simulation in which multiple replicas of the system undergo a random walk in reaction coordinate or temperature space. Individual replicas are linked through a generalized Hamiltonian containing an extra potential energy term or bias which depends on the distribution of all replicas, thus enforcing the desired sampling distribution along the coordinate or parameter of interest regardless of free energy barriers. In contrast to replica exchange methods, efficient implementation of the algorithm does not require synchronicity of the individual simulations. The algorithm is inherently suited for large-scale simulations using shared or heterogeneous computing platforms such as a distributed network. In this work, we build on our original algorithm by introducing Boltzmann-weighted jumping, which allows moves of a larger magnitude and thus enhances sampling efficiency along the reaction coordinate. The approach is demonstrated using a realistic and biologically relevant application; we calculate the standard binding free energy of benzene to the L99A mutant of T4 lysozyme. Distributed replica sampling is used in conjunction with thermodynamic integration to compute the potential of mean force for extracting the ligand from protein and solvent along a nonphysical spatial coordinate. Dynamic treatment of the reaction coordinate leads to faster statistical convergence of the potential of mean force than a conventional static coordinate, which suffers from slow transitions on a rugged potential energy surface.
CASP10-BCL::Fold efficiently samples topologies of large proteins.
Heinze, Sten; Putnam, Daniel K; Fischer, Axel W; Kohlmann, Tim; Weiner, Brian E; Meiler, Jens
2015-03-01
During CASP10 in summer 2012, we tested BCL::Fold for prediction of free modeling (FM) and template-based modeling (TBM) targets. BCL::Fold assembles the tertiary structure of a protein from predicted secondary structure elements (SSEs) omitting more flexible loop regions early on. This approach enables the sampling of conformational space for larger proteins with more complex topologies. In preparation of CASP11, we analyzed the quality of CASP10 models throughout the prediction pipeline to understand BCL::Fold's ability to sample the native topology, identify native-like models by scoring and/or clustering approaches, and our ability to add loop regions and side chains to initial SSE-only models. The standout observation is that BCL::Fold sampled topologies with a GDT_TS score > 33% for 12 of 18 and with a topology score > 0.8 for 11 of 18 test cases de novo. Despite the sampling success of BCL::Fold, significant challenges still exist in clustering and loop generation stages of the pipeline. The clustering approach employed for model selection often failed to identify the most native-like assembly of SSEs for further refinement and submission. It was also observed that for some β-strand proteins model refinement failed as β-strands were not properly aligned to form hydrogen bonds removing otherwise accurate models from the pool. Further, BCL::Fold samples frequently non-natural topologies that require loop regions to pass through the center of the protein. © 2015 Wiley Periodicals, Inc.
Challenges of Achieving 2012 IECC Air Sealing Requirements in Multifamily Dwellings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klocke, S.; Faakye, O.; Puttagunta, S.
2014-10-01
While previous versions of the International Energy Conservation Code (IECC) have included provisions to improve the air tightness of dwellings, for the first time, the 2012 IECC mandates compliance verification through blower door testing. Simply completing the Air Barrier and Insulation Installation checklist through visual inspection is no longer sufficient by itself. In addition, the 2012 IECC mandates a significantly stricter air sealing requirement. In Climate Zones 3 through 8, air leakage may not exceed 3 ACH50, which is a significant reduction from the 2009 IECC requirement of 7 ACH50. This requirement is for all residential buildings, which includes low-rise multifamily dwellings. While this air leakage rate requirement is an important component to achieving an efficient building thermal envelope, currently, the code language doesn't explicitly address differences between single family and multifamily applications. In addition, the 2012 IECC does not provide an option to sample dwellings for larger multifamily buildings, so compliance would have to be verified on every unit. With compliance with the 2012 IECC air leakage requirements on the horizon, several of the Consortium for Advanced Residential Buildings' (CARB's) multifamily builder partners are evaluating how best to comply with this requirement. Builders are not sure whether it is more practical or beneficial to simply pay for guarded testing or to revise their air sealing strategies to improve compartmentalization to comply with code requirements based on unguarded blower door testing. This report summarizes CARB's research that was conducted to assess the feasibility of meeting the 2012 IECC air leakage requirements in 3 multifamily buildings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2014-11-01
While previous versions of the International Energy Conservation Code (IECC) have included provisions to improve the air tightness of dwellings, for the first time, the 2012 IECC mandates compliance verification through blower door testing. Simply completing the Air Barrier and Insulation Installation checklist through visual inspection is no longer sufficient by itself. In addition, the 2012 IECC mandates a significantly stricter air sealing requirement. In Climate Zones 3 through 8, air leakage may not exceed 3 ACH50, which is a significant reduction from the 2009 IECC requirement of 7 ACH50. This requirement is for all residential buildings, which includes low-rise multifamily dwellings. While this air leakage rate requirement is an important component to achieving an efficient building thermal envelope, currently, the code language doesn't explicitly address differences between single family and multifamily applications. In addition, the 2012 IECC does not provide an option to sample dwellings for larger multifamily buildings, so compliance would have to be verified on every unit. With compliance with the 2012 IECC air leakage requirements on the horizon, several of CARB's multifamily builder partners are evaluating how best to comply with this requirement. Builders are not sure whether it is more practical or beneficial to simply pay for guarded testing or to revise their air sealing strategies to improve compartmentalization to comply with code requirements based on unguarded blower door testing. This report summarizes CARB's research that was conducted to assess the feasibility of meeting the 2012 IECC air leakage requirements in 3 multifamily buildings.
Challenges of Achieving 2012 IECC Air Sealing Requirements in Multifamily Dwellings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klocke, S.; Faakye, O.; Puttagunta, S.
2014-10-01
While previous versions of the International Energy Conservation Code (IECC) have included provisions to improve the air tightness of dwellings, for the first time, the 2012 IECC mandates compliance verification through blower door testing. Simply completing the Air Barrier and Insulation Installation checklist through visual inspection is no longer sufficient by itself. In addition, the 2012 IECC mandates a significantly stricter air sealing requirement. In Climate Zones 3 through 8, air leakage may not exceed 3 ACH50, which is a significant reduction from the 2009 IECC requirement of 7 ACH50. This requirement is for all residential buildings, which includes low-rise multifamily dwellings. While this air leakage rate requirement is an important component to achieving an efficient building thermal envelope, currently, the code language doesn't explicitly address differences between single family and multifamily applications. In addition, the 2012 IECC does not provide an option to sample dwellings for larger multifamily buildings, so compliance would have to be verified on every unit. With compliance with the 2012 IECC air leakage requirements on the horizon, several of CARB's multifamily builder partners are evaluating how best to comply with this requirement. Builders are not sure whether it is more practical or beneficial to simply pay for guarded testing or to revise their air sealing strategies to improve compartmentalization to comply with code requirements based on unguarded blower door testing. This report summarizes CARB's research that was conducted to assess the feasibility of meeting the 2012 IECC air leakage requirements in 3 multifamily buildings.
NASA Astrophysics Data System (ADS)
Lai, Xiaoming; Zhu, Qing; Zhou, Zhiwen; Liao, Kaihua
2017-12-01
In this study, seven random combination sampling strategies were applied to investigate the uncertainties in estimating the hillslope mean soil water content (SWC) and the correlation coefficients between the SWC and soil/terrain properties on a tea + bamboo hillslope. One of the sampling strategies is global random sampling and the other six are stratified random sampling on the top, middle, toe, top + mid, top + toe and mid + toe slope positions. When each sampling strategy was applied, sample sizes were gradually reduced, and each sample size was evaluated with 3000 replicates. Under each sample size of each sampling strategy, the relative errors (REs) and coefficients of variation (CVs) of the estimated hillslope mean SWC and of the correlation coefficients between the SWC and soil/terrain properties were calculated to quantify the accuracy and uncertainty. The results showed that the uncertainty of the estimations decreased as the sample size increased. However, larger sample sizes were required to reduce the uncertainty in correlation coefficient estimation than in hillslope mean SWC estimation. Under global random sampling, 12 randomly sampled sites on this hillslope were adequate to estimate the hillslope mean SWC with RE and CV ≤10%. However, at least 72 randomly sampled sites were needed to ensure the estimated correlation coefficients with REs and CVs ≤10%. Compared with the other sampling strategies, reducing sampling sites on the middle slope had the least influence on the estimation of the hillslope mean SWC and correlation coefficients. Under this strategy, 60 sites (10 on the middle slope and 50 on the top and toe slopes) were enough to ensure the estimated correlation coefficients with REs and CVs ≤10%. This suggested that when designing the SWC sampling, the proportion of sites on the middle slope can be reduced to 16.7% of the total number of sites. Findings of this study will be useful for optimal SWC sampling design.
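The accuracy and uncertainty bookkeeping described above (draw many random subsets of a given size and, for each size, compute the relative error and coefficient of variation of the estimated hillslope mean) can be sketched generically as follows. The SWC values below are synthetic, the RE definition is one plausible reading of the abstract, and the 3000 replicates per sample size follow the study design.

import numpy as np

rng = np.random.default_rng(42)
swc_all = rng.normal(loc=0.30, scale=0.05, size=200)   # synthetic SWC at 200 hypothetical sites
true_mean = swc_all.mean()

for n in (12, 30, 60, 100):
    means = np.array([rng.choice(swc_all, size=n, replace=False).mean()
                      for _ in range(3000)])           # 3000 replicates per sample size
    re = np.abs(means - true_mean).mean() / true_mean * 100   # mean relative error, %
    cv = means.std() / means.mean() * 100                     # coefficient of variation, %
    print(f"n={n:3d}: RE = {re:.1f}%, CV = {cv:.1f}%")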
Effect of abdominopelvic abscess drain size on drainage time and probability of occlusion.
Rotman, Jessica A; Getrajdman, George I; Maybody, Majid; Erinjeri, Joseph P; Yarmohammadi, Hooman; Sofocleous, Constantinos T; Solomon, Stephen B; Boas, F Edward
2017-04-01
The purpose of this study is to determine whether larger abdominopelvic abscess drains reduce the time required for abscess resolution or the probability of tube occlusion. 144 consecutive patients who underwent abscess drainage at a single institution were reviewed retrospectively. Larger initial drain size did not reduce drainage time, drain occlusion, or drain exchanges (P > .05). Subgroup analysis did not find any type of collection that benefitted from larger drains. A multivariate model predicting drainage time showed that large collections (>200 mL) required 16 days longer drainage time than small collections (<50 mL). Collections with a fistula to bowel required 17 days longer drainage time than collections without a fistula. Initial drain size and the viscosity of the fluid in the collection had no significant effect on drainage time in the multivariate model. 8 F drains are adequate for initial drainage of most serous and serosanguineous collections. 10 F drains are adequate for initial drainage of most purulent or bloody collections. Copyright © 2016 Elsevier Inc. All rights reserved.
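A multivariate model of the kind described (drainage time regressed on collection size, the presence of a bowel fistula, and initial drain size) can be sketched with ordinary least squares. The data below are synthetic placeholders generated to mimic the reported effect sizes, not the study's records, and the variable names are assumptions.

import numpy as np

rng = np.random.default_rng(7)
n = 144
large_collection = rng.integers(0, 2, n)        # >200 mL collection (vs <50 mL), simplified to binary
fistula = rng.integers(0, 2, n)                 # fistula to bowel present
drain_10f = rng.integers(0, 2, n)               # 10 F initial drain (vs 8 F)
# synthetic outcome: roughly 16 extra days for large collections, 17 for fistulas, no drain-size effect
days = 14 + 16 * large_collection + 17 * fistula + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), large_collection, fistula, drain_10f])
coef, *_ = np.linalg.lstsq(X, days, rcond=None)
for name, b in zip(["intercept", "large collection", "fistula", "10 F drain"], coef):
    print(f"{name:>16s}: {b:+.1f} days")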
Infusion pressure and pain during microneedle injection into skin of human subjects.
Gupta, Jyoti; Park, Sohyun S; Bondy, Brian; Felner, Eric I; Prausnitz, Mark R
2011-10-01
Infusion into skin using hollow microneedles offers an attractive alternative to hypodermic needle injections. However, the fluid mechanics and pain associated with injection into skin using a microneedle have not been studied in detail before. Here, we report on the effect of microneedle insertion depth into skin, partial needle retraction, fluid infusion flow rate and the co-administration of hyaluronidase on infusion pressure during microneedle-based saline infusion, as well as on associated pain in human subjects. Infusion of up to a few hundred microliters of fluid required pressures of a few hundred mmHg, caused little to no pain, and showed weak dependence on infusion parameters. Infusion of larger volumes up to 1 mL required pressures up to a few thousand mmHg, but still usually caused little pain. In general, injection of larger volumes of fluid required larger pressures and application of larger pressures caused more pain, although other experimental parameters also played a significant role. Among the intradermal microneedle groups, microneedle length had little effect; microneedle retraction lowered infusion pressure but increased pain; lower flow rate reduced infusion pressure and kept pain low; and use of hyaluronidase also lowered infusion pressure and kept pain low. We conclude that microneedles offer a simple method to infuse fluid into the skin that can be carried out with little to no pain. Copyright © 2011 Elsevier Ltd. All rights reserved.
Tanaka, Toshiaki; Nozawa, Hiroaki; Kawai, Kazushige; Hata, Keisuke; Kiyomatsu, Tomomichi; Nishikawa, Takeshi; Otani, Kensuke; Sasaki, Kazuhito; Murono, Koji; Watanabe, Toshiaki
2017-01-01
Colorectal neuroendocrine tumors (NET) are a rare manifestation of colorectal neoplasia, requiring radical dissection of the regional lymph nodes along with colorectal resection, similar to that required for colorectal cancer. However, thus far, no reports have described the ability of computed tomography (CT) to predict lymph node involvement. In this study, we determined the rate at which contrast-enhanced CT predicts lymph node metastasis. A total of 21 patients with colorectal NET undergoing colorectal resection were recruited from January 2010 to June 2016. We compared the CT findings between samples with or without pathologically proven lymph node metastasis, in each field (pericolic/perirectal and intermediate nodes). Within the pericolic/perirectal field, any lymph node larger than 5 mm in the CT images was a predictive indicator of lymph node metastasis, with a sensitivity, specificity, and area under the ROC curve (AUC) of 66.7%, 87.5%, and 0.844, respectively. Within the intermediate field, any visible lymph node on the CT was a predictive indicator of lymph node metastasis, with a sensitivity, specificity, and AUC of 100%, 76.4%, and 0.890, respectively. In addition, when we observed lymph nodes larger than 3 mm on the CT images, the sensitivity and specificity were 100% and 82.4%, respectively, with an AUC of 0.8971. CT images provide predictive information for lymph node metastasis with a high rate of accuracy. Copyright© 2017, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.
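The size-threshold evaluation above reduces to computing sensitivity, specificity, and AUC for a continuous predictor. The sketch below shows that calculation on invented node sizes and outcomes; none of the numbers come from the study.

```python
# Illustrative sketch (hypothetical data): sensitivity, specificity and AUC for
# "largest lymph node diameter on CT" as a predictor of nodal metastasis.
import numpy as np

node_size_mm = np.array([2.0, 3.5, 6.0, 4.2, 7.5, 1.5, 2.8, 5.5, 9.0, 3.0])
metastasis   = np.array([0,   0,   0,   1,   1,   0,   0,   1,   1,   0  ])

def sens_spec(sizes, labels, threshold_mm):
    pred = sizes > threshold_mm
    tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)

def auc_mann_whitney(sizes, labels):
    # AUC equals the probability that a positive case outranks a negative case.
    pos, neg = sizes[labels == 1], sizes[labels == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

sens, spec = sens_spec(node_size_mm, metastasis, 5.0)
print(f"threshold 5 mm: sensitivity {sens:.2f}, specificity {spec:.2f}")
print(f"AUC {auc_mann_whitney(node_size_mm, metastasis):.3f}")
```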
Adhesive quality inspection of wind rotor blades using thermography
NASA Astrophysics Data System (ADS)
Li, Xiaoli; Sun, Jiangang; Shen, Jingling; Wang, Xun; Zhang, Cunlin; Zhao, Yuejin
2018-04-01
Wind power is playing an increasingly important role in ensuring a secure electricity supply. Because wind rotor blades are getting larger and larger in order to harvest wind energy more efficiently, there is a growing demand for nondestructive testing. Due to the glued structure of rotor blades, adhesive quality evaluation is needed. In this study, three adhesive samples with a wall thickness of 13 mm, 28 mm, or 31 mm were each designed with a different adhesive condition. Transmission thermography was applied to inspect the samples. The results illustrate that this method is effective for inspecting the adhesive quality of wind rotor blades.
Cryobiopsy: should this be used in place of endobronchial forceps biopsies?
Rubio, Edmundo R; le, Susanti R; Whatley, Ralph E; Boyd, Michael B
2013-01-01
Forceps biopsies of airway lesions have variable yields. The yield increases when combining techniques in order to collect more material. With the use of cryotherapy probes (cryobiopsy), larger specimens can be obtained, resulting in an increase in the diagnostic yield. However, the utility and safety of cryobiopsy with all types of lesions, including flat mucosal lesions, is not established. The aim was to demonstrate the utility and safety of cryobiopsy versus forceps biopsy for sampling exophytic and flat airway lesions, in a teaching hospital-based retrospective analysis of patients undergoing cryobiopsies (singly or combined with forceps biopsies) from August 2008 through August 2010. Statistical analysis used the Wilcoxon signed-rank test. The comparative analysis of 22 patients with cryobiopsy and forceps biopsy of the same lesion showed that the mean volumes of material obtained with cryobiopsy were significantly larger (0.696 cm³ versus 0.0373 cm³, P = 0.0014). Of 31 cryobiopsies performed, one had minor bleeding. Cryobiopsy allowed sampling of exophytic and flat lesions that were located centrally or distally. Cryobiopsies were shown to be safe, free of artifact, and provided a diagnostic yield of 96.77%. Cryobiopsy allows safe sampling of exophytic and flat airway lesions, with larger specimens, excellent tissue preservation and high diagnostic accuracy.
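The paired comparison of specimen volumes uses a Wilcoxon signed-rank test. A minimal sketch of that test is shown below; the volumes are invented, and only the choice of test mirrors the abstract.

```python
# Hedged sketch: paired Wilcoxon signed-rank test comparing specimen volumes
# from cryobiopsy vs. forceps biopsy of the same lesion (fabricated values).
import numpy as np
from scipy.stats import wilcoxon

cryo_cm3    = np.array([0.80, 0.55, 0.70, 0.95, 0.60, 0.75, 0.65, 0.85])
forceps_cm3 = np.array([0.04, 0.03, 0.05, 0.02, 0.04, 0.03, 0.05, 0.04])

stat, p_value = wilcoxon(cryo_cm3, forceps_cm3)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
```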
Chetty, Raj; Friedman, John N.; Olsen, Tore; Pistaferri, Luigi
2011-01-01
We show that the effects of taxes on labor supply are shaped by interactions between adjustment costs for workers and hours constraints set by firms. We develop a model in which firms post job offers characterized by an hours requirement and workers pay search costs to find jobs. We present evidence supporting three predictions of this model by analyzing bunching at kinks using Danish tax records. First, larger kinks generate larger taxable income elasticities. Second, kinks that apply to a larger group of workers generate larger elasticities. Third, the distribution of job offers is tailored to match workers' aggregate tax preferences in equilibrium. Our results suggest that macro elasticities may be substantially larger than the estimates obtained using standard microeconometric methods. PMID:21836746
Item Discrimination and Type I Error in the Detection of Differential Item Functioning
ERIC Educational Resources Information Center
Li, Yanju; Brooks, Gordon P.; Johanson, George A.
2012-01-01
In 2009, DeMars stated that when impact exists there will be Type I error inflation, especially with larger sample sizes and larger discrimination parameters for items. One purpose of this study is to present the patterns of Type I error rates using Mantel-Haenszel (MH) and logistic regression (LR) procedures when the mean ability between the…
Tracheid dimensions in rootwood of southern pine
Floyd G. Manwiller
1972-01-01
In samples from 20 trees aged 12 to 89 years, rootwood tracheids were one-third longer and one-third larger in diameter and had walls 18 percent thinner and lumens almost two-thirds larger than stemwood tracheids measured at stump height. Tracheids from horizontal roots were longer and had thicker walls than those from roots of other orientations; length, cell diameter...
Microscope Image of Scavenged Particles
NASA Technical Reports Server (NTRS)
2008-01-01
This image from NASA's Phoenix Mars Lander's Optical Microscope shows a strongly magnetic surface which has scavenged particles from within the microscope enclosure before a sample delivery from the lander's Robotic Arm. The particles correspond to the larger grains seen in the fine orange material that makes up most of the soil at the Phoenix site. They vary in color, but are of similar size, about one-tenth of a millimeter. As the microscope's sample wheel moved during operation, these particles also shifted, clearing a thin layer of the finer orange particles that have also been collected. Together with the previous image, this shows that the larger grains are much more magnetic than the fine orange particles, with a much larger volume of the grains being collected by the magnet. The image is 2 millimeters across. It is speculated that the orange material particles are a weathering product from the larger grains, with the weathering process both causing a color change and a loss of magnetism. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by JPL, Pasadena, Calif. Spacecraft development was by Lockheed Martin Space Systems, Denver.
Automation of POST Cases via External Optimizer and "Artificial p2" Calculation
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Mathew R.; Michelson, Diane K.
2017-01-01
During conceptual design, speed and accuracy are often at odds. Specifically in the realm of launch vehicles, optimizing the ascent trajectory requires a larger pool of analytical power and expertise. Experienced analysts working on familiar vehicles can produce optimal trajectories in a short time frame; however, whenever either "experienced" or "familiar" is not applicable, the optimization process can become quite lengthy. In order to construct a vehicle-agnostic method, an established global optimization algorithm is needed. In this work the authors develop an "artificial" error term to map arbitrary control vectors to non-zero error by which a global method can operate. Two global methods are compared alongside Design of Experiments and random sampling and are shown to produce results comparable to analysis done by a human expert.
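The "artificial error" idea is essentially to give a global optimizer a scalar objective that is defined even for infeasible control vectors. The sketch below is conceptual only; it is not the authors' POST setup, and the residual function, bounds, and optimizer choice are assumptions.

```python
# Conceptual sketch (not the POST implementation): wrap trajectory-constraint
# residuals in an "artificial" scalar error so a global optimizer can rank
# arbitrary control vectors, even ones that violate the targets.
import numpy as np
from scipy.optimize import differential_evolution

def constraint_residuals(u):
    # Hypothetical stand-in for targeting errors at burnout (e.g., altitude,
    # velocity misses) as a function of the control vector u.
    return np.array([np.sin(u[0]) + u[1] - 1.0, u[0] * u[1] - 0.25])

def artificial_error(u):
    # Map any control vector to a non-zero scalar error ("artificial p2"),
    # so the global search always has a usable objective value.
    r = constraint_residuals(u)
    return float(np.sqrt(np.sum(r ** 2)))

result = differential_evolution(artificial_error,
                                bounds=[(-2.0, 2.0), (-2.0, 2.0)], seed=1)
print("best control vector:", result.x, "residual norm:", result.fun)
```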
The genetic basis of panic and phobic anxiety disorders.
Smoller, Jordan W; Gardner-Schuster, Erica; Covino, Jennifer
2008-05-15
Panic disorder and phobic anxiety disorders are common disorders that are often chronic and disabling. Genetic epidemiologic studies have documented that these disorders are familial and moderately heritable. Linkage studies have implicated several chromosomal regions that may harbor susceptibility genes; however, candidate gene association studies have not established a role for any specific loci to date. Increasing evidence from family and genetic studies suggests that genes underlying these disorders overlap and transcend diagnostic boundaries. Heritable forms of anxious temperament, anxiety-related personality traits and neuroimaging assays of fear circuitry may represent intermediate phenotypes that predispose to panic and phobic disorders. The identification of specific susceptibility variants will likely require much larger sample sizes and the integration of insights from genetic analyses of animal models and intermediate phenotypes. Copyright 2008 Wiley-Liss, Inc.
2004-04-15
These are images of CGEL-2 samples taken during STS-95. They show binary colloidal suspensions that have formed ordered crystalline structures in microgravity. In sample 5 there are more particles; therefore a great many crystallites (small crystals) form. In sample 6 there are fewer particles; the particles are farther apart, and a few much larger crystallites form. The white object in the right corner of sample 5 is the stir bar used to mix the sample at the beginning of the mission.
Hime, Paul M; Hotaling, Scott; Grewelle, Richard E; O'Neill, Eric M; Voss, S Randal; Shaffer, H Bradley; Weisrock, David W
2016-12-01
Perhaps the most important recent advance in species delimitation has been the development of model-based approaches to objectively diagnose species diversity from genetic data. Additionally, the growing accessibility of next-generation sequence data sets provides powerful insights into genome-wide patterns of divergence during speciation. However, applying complex models to large data sets is time-consuming and computationally costly, requiring careful consideration of the influence of both individual and population sampling, as well as the number and informativeness of loci on species delimitation conclusions. Here, we investigated how locus number and information content affect species delimitation results for an endangered Mexican salamander species, Ambystoma ordinarium. We compared results for an eight-locus, 137-individual data set and an 89-locus, seven-individual data set. For both data sets, we used species discovery methods to define delimitation models and species validation methods to rigorously test these hypotheses. We also used integrated demographic model selection tools to choose among delimitation models, while accounting for gene flow. Our results indicate that while cryptic lineages may be delimited with relatively few loci, sampling larger numbers of loci may be required to ensure that enough informative loci are available to accurately identify and validate shallow-scale divergences. These analyses highlight the importance of striking a balance between dense sampling of loci and individuals, particularly in shallowly diverged lineages. They also suggest the presence of a currently unrecognized, endangered species in the western part of A. ordinarium's range. © 2016 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Bradac, Marusa; Coe, Dan; Huang, Kuang-Han; Salmon, Brett; Hoag, Austin; Bradley, Larry; Ryan, Russell; Dawson, Will; Zitrin, Adi; Jones, Christine; Sharon, Keren; Trenti, Michele; Stark, Daniel; Bouwens, Rychard; Oesch, Pascal; Lam, Daniel; Carrasco Nunez, Daniela Patricia
2017-04-01
When did galaxies start forming stars? What is the role of distant galaxies in galaxy formation models and epoch of reionization? Recent observations indicate at least two critical puzzles in these studies. (1) First galaxies might have started forming stars earlier than previously thought (<400Myr after the Big Bang). (2) It is still unclear what is their star formation history and whether these galaxies can reionize the Universe. Accurate knowledge of stellar masses, ages, and star formation rates at this epoch requires measuring both rest-frame UV and optical light, which only Spitzer and HST can probe at z ~ 6-11 for a large enough sample of typical galaxies. To address this cosmic puzzle, we propose Spitzer imaging of the fields behind 3 most powerful cosmic telescopes selected using HST, Spitzer, and Planck data from the RELICS and SRELICS programs (Reionization Lensing Cluster Survey; 41 clusters, 190 HST orbits, 390 Spitzer hours). This proposal will be a valuable Legacy complement to the existing IRAC deep surveys, and it will open up a new parameter space by probing the ordinary yet magnified population with much improved sample variance. The program will allow us to study stellar properties of a large number, ~30 galaxies, at z ~ 6-11. Deep Spitzer data will be crucial to unambiguously measure their stellar properties (age, SFR, M*). Finally this proposal will establish the presence (or absence) of an unusually early established stellar population, as was recently observed in MACS1149JD at z ~ 9. If confirmed in a larger sample, this result will require a paradigm shift in our understanding of the earliest star formation.
Sana, Dandara Emery Morais; Mayrink de Miranda, Priscila; Pitol, Bruna Caroline Vieira; Moran, Mariana Soares; Silva, Nayara Nascimento Toledo; Guerreiro da Silva, Ismael Dali Cotrim; de Cássia Stocco, Rita; Beçak, Willy; Lima, Angélica Alves; Carneiro, Cláudia Martins
2013-09-01
Herein, we evaluated cervical samples from normal tissue or HPV-infected tissue, to determine whether the relative nuclear/cytoplasmic ratio (NA/CA) and the presence of nonclassical cytological criteria constitute a novel cytological criterion for the diagnosis of HPV. Significantly larger NA/CA ratios were found for the HPV-ATYPIA+ and HPV+ATYPIA+ groups compared with the HPV-ATYPIA- group, regardless of collection method. For the samples collected with a spatula, only three samples from the HPV-ATYPIA- group showed four or more nonclassical parameters (i.e., were positive), while a larger number of the samples in the HPV-ATYPIA+, HPV+ATYPIA-, and HPV+ATYPIA+ groups were positive (13, 4, and 13 samples, respectively). Among those collected with a brush, no sample showed four or more nonclassical criteria in the HPV-ATYPIA- group, while a number of samples were positive in the HPV-ATYPIA+, HPV+ATYPIA-, and HPV+ATYPIA+ groups (4, 3, and 4 samples, respectively). HPV infection was associated with significant morphometrical changes; no increase in the NA/CA ratio was found in the HPV+ATYPIA- samples compared with the HPV-ATYPIA- samples collected with either a spatula or a brush. In conclusion, by including nonclassical cytological criteria in the patient diagnosis, we were able to reduce the number of false negative and false positive HPV diagnoses made using conventional cytology alone. Copyright © 2013 Wiley Periodicals, Inc.
Usherwood, James R
2013-08-23
Larger terrestrial animals tend to support their weight with more upright limbs. This makes structural sense, reducing the loading on muscles and bones, which is disproportionately challenging in larger animals. However, it does not account for why smaller animals are more crouched; instead, they could enjoy relatively more slender supporting structures or higher safety factors. Here, an alternative account for the scaling of posture is proposed, with close parallels to the scaling of jump performance. If the costs of locomotion are related to the volume of active muscle, and the active muscle volume required depends on both the work and the power demanded during the push-off phase of each step (not just the net positive work), then the disproportional scaling of requirements for work and push-off power are revealing. Larger animals require relatively greater active muscle volumes for dynamically similar gaits (e.g. top walking speed)-which may present an ultimate constraint to the size of running animals. Further, just as for jumping, animals with shorter legs and briefer push-off periods are challenged to provide the power (not the work) required for push-off. This can be ameliorated by having relatively long push-off periods, potentially accounting for the crouched stance of small animals.
Perception of Weight Status in U.S. Children and Adolescents Aged 8-15 Years, 2005-2012
... Larger FIPRs indicate greater income. Data source and methods: Data from NHANES were used for these analyses. ... Percentages were estimated using Taylor series linearization, a method that incorporates the sample weights and sample design. ...
Sample-Collection Drill Hole on Martian Sandstone Target Windjana
2014-05-06
This image from the Navigation Camera (Navcam) on NASA's Curiosity Mars rover shows two holes at top center drilled into a sandstone target called Windjana. The farther hole, with a larger pile of tailings around it, is a full-depth sampling hole.
Enhanced cooling of Yb:YLF using astigmatic Herriott cell (Conference Presentation)
NASA Astrophysics Data System (ADS)
Gragossian, Aram; Meng, Junwei; Ghasemkhani, Mohammadreza; Albrecht, Alexander R.; Tonelli, Mauro; Sheik-Bahae, Mansoor
2017-02-01
Optical refrigeration of solids requires crystals with exceptional qualities. Crystals with external quantum efficiencies (EQE) larger than 99% and background absorptions of 4×10⁻⁴ cm⁻¹ have been cooled to cryogenic temperatures using non-resonant cavities. Estimating the cooling efficiency requires accurate measurements of the above-mentioned quantities. Here we discuss measurements of EQE and background absorption for two high quality Yb:YLF samples. For any given sample, to reach the minimum achievable temperature, heat generated by fluorescence must be removed from the surrounding clamshell and, more importantly, absorption of the laser light must be maximized. Since the absorption coefficient drops at lower temperatures, the only option is to confine laser light in a cavity until almost 100% of the light is absorbed. This can be achieved by placing the crystal between a cylindrical and a spherical mirror to form an astigmatic Herriott cell. In this geometry light enters through a hole in the middle of the spherical mirror and, if the entrance angle is correct, it can make as many round trips as required to absorb all the light. At 120 K, 60 passes ensure more than 95% absorption of the laser light; at 100 K, 150 passes are required. Yb:YLF crystals with 5% and 10% doping placed in such a cell cool to temperatures below 90 K. Non-contact temperature measurements are more challenging for such a geometry. Reabsorption of fluorescence for each pass must be taken into account for accurate temperature measurements by differential luminescence thermometry (DLT). Alternatively, we used part of the spectrum that is not affected by reabsorption.
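The pass-count argument follows from simple Beer-Lambert arithmetic. The sketch below is not the authors' analysis: the absorption coefficients and the 1 cm path length are placeholder values chosen so the computed pass counts land near the figures quoted in the abstract.

```python
# Back-of-the-envelope sketch: fraction of pump light absorbed after N passes
# through a crystal of length L with absorption coefficient alpha is
# 1 - exp(-alpha * L * N). Alpha values are placeholders, not measured data.
import numpy as np

def passes_for_absorption(alpha_per_cm, length_cm, target=0.95):
    # smallest integer N with 1 - exp(-alpha*L*N) >= target
    return int(np.ceil(-np.log(1.0 - target) / (alpha_per_cm * length_cm)))

for label, alpha in [("120 K", 0.050), ("100 K", 0.020)]:   # hypothetical cm^-1
    n = passes_for_absorption(alpha, length_cm=1.0)
    print(f"{label}: {n} passes for 95% absorption")
```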
Feil, A; Thoden van Velzen, E U; Jansen, M; Vitz, P; Go, N; Pretz, T
2016-02-01
The recovery of beverage cartons (BC) in three lightweight packaging waste processing plants (LP) was analyzed with different input materials and input masses in the range of 21-50 Mg. The data were generated by gravimetric determination of the sorting products, sampling and sorting analysis. Since the particle size of beverage cartons is larger than 120 mm, a modified sampling plan was implemented, with targeted multiple sampling (3-11 individual samplings) and a total sample size of 1200 L (ca. 60 kg) for the BC products and about 2400 L (ca. 120 kg) for the material-heterogeneous mixed plastics (MP) and sorting residue products. The results indicate that the quantification of the beverage carton yield in the process, i.e., by including all product-containing material streams, can be specified only with considerable fluctuation ranges. Consequently, the total assessment, regarding all product streams, is rather qualitative than quantitative. Irregular operating conditions as well as unfavorable sampling conditions and capacity overloads are likely causes of the high confidence intervals. From the results of the current study, recommendations can be derived for better sampling in LP-processing plants. Despite the suboptimal statistical results, the results indicate very clearly that the plants show definite optimisation potentials with regard to the yield of beverage cartons as well as the required product purity. Due to the test character of the sorting trials, the plant parameterization was not ideal for this sorting task and consequently the results should be interpreted with care. Copyright © 2015 Elsevier Ltd. All rights reserved.
Evaluation of sampling methods for toxicological testing of indoor air particulate matter.
Tirkkonen, Jenni; Täubel, Martin; Hirvonen, Maija-Riitta; Leppänen, Hanna; Lindsley, William G; Chen, Bean T; Hyvärinen, Anne; Huttunen, Kati
2016-09-01
There is a need for toxicity tests capable of recognizing indoor environments with compromised air quality, especially in the context of moisture damage. One of the key issues is sampling, which should both provide meaningful material for analyses and fulfill requirements imposed by practitioners using toxicity tests for health risk assessment. We aimed to evaluate different existing methods of sampling indoor particulate matter (PM) to develop a suitable sampling strategy for a toxicological assay. During three sampling campaigns in moisture-damaged and non-damaged school buildings, we evaluated one passive and three active sampling methods: the Settled Dust Box (SDB), the Button Aerosol Sampler, the Harvard Impactor and the National Institute for Occupational Safety and Health (NIOSH) Bioaerosol Cyclone Sampler. Mouse RAW264.7 macrophages were exposed to particle suspensions and cell metabolic activity (CMA), production of nitric oxide (NO) and tumor necrosis factor (TNFα) were determined after 24 h of exposure. The repeatability of the toxicological analyses was very good for all tested sampler types. Variability within the schools was found to be high especially between different classrooms in the moisture-damaged school. Passively collected settled dust and PM collected actively with the NIOSH Sampler (Stage 1) caused a clear response in exposed cells. The results suggested the higher relative immunotoxicological activity of dust from the moisture-damaged school. The NIOSH Sampler is a promising candidate for the collection of size-fractionated PM to be used in toxicity testing. The applicability of such sampling strategy in grading moisture damage severity in buildings needs to be developed further in a larger cohort of buildings.
In-situ monitoring of flow-permeable surface area of high explosive powder using small sample masses
Maiti, Amitesh; Han, Yong; Zaka, Fowzia; ...
2015-02-17
To ensure good performance of high explosive devices over long periods of time, initiating powders need to maintain their specific surface area within allowed margins during the entire duration of deployment. A common diagnostic used in this context is the Fisher sub-sieve surface area (FSSA). However, commercial permeametry instruments measuring the FSSA require a sample mass equal to the crystal density of the sample material, an amount that is often one or two orders of magnitude larger than the typical masses found in standard detonator applications. Here we develop a customization of the standard device that can utilize just tens of milligrams of sample, and with simple calibration yield FSSA values at accuracy levels comparable to the standard apparatus. This necessitated a newly designed sample holder, made from a material of low coefficient of thermal expansion, which is conveniently transferred between an aging chamber and a re-designed permeametry tube. This improves the fidelity of accelerated aging studies by allowing measurement on the same physical sample at various time instants during the aging process, and by obviating the need for a potentially FSSA-altering powder re-compaction step. We used the customized apparatus to monitor the FSSA evolution of a number of undoped and homolog-doped PETN powder samples that were subjected to artificial aging for several months at elevated temperatures. These results, in conjunction with an Arrhenius-based aging model, were used to assess powder-coarsening rates under long-term storage.
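An Arrhenius-based aging extrapolation of the kind mentioned above can be sketched in a few lines. All temperatures, rates, and the resulting activation energy below are hypothetical, not PETN measurements.

```python
# Illustrative sketch: fit rate constants from accelerated aging to
# k = A * exp(-Ea / (R * T)) and extrapolate to a storage temperature.
import numpy as np

R = 8.314                                           # J/(mol K)
T_accel = np.array([333.0, 343.0, 353.0])           # aging temperatures (K), assumed
k_accel = np.array([1.0e-3, 2.4e-3, 5.5e-3])        # observed FSSA-change rates (1/day), assumed

# Linear fit of ln k vs 1/T gives -Ea/R (slope) and ln A (intercept).
slope, intercept = np.polyfit(1.0 / T_accel, np.log(k_accel), 1)
Ea = -slope * R
k_storage = np.exp(intercept + slope / 298.15)      # extrapolate to 25 C
print(f"apparent Ea = {Ea/1000:.1f} kJ/mol, rate at 25 C = {k_storage:.2e} /day")
```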
In Situ Balloon-Borne Ice Particle Imaging in High-Latitude Cirrus
NASA Astrophysics Data System (ADS)
Kuhn, Thomas; Heymsfield, Andrew J.
2016-09-01
Cirrus clouds reflect incoming solar radiation, creating a cooling effect. At the same time, these clouds absorb the infrared radiation from the Earth, creating a greenhouse effect. The net effect, crucial for radiative transfer, depends on the cirrus microphysical properties, such as particle size distributions and particle shapes. Knowledge of these cloud properties is also needed for calibrating and validating passive and active remote sensors. Ice particles of sizes below 100 µm are inherently difficult to measure with aircraft-mounted probes due to issues with resolution, sizing, and size-dependent sampling volume. Furthermore, artefacts are produced by shattering of particles on the leading surfaces of the aircraft probes when particles several hundred microns or larger are present. Here, we report on a series of balloon-borne in situ measurements that were carried out at a high-latitude location, Kiruna in northern Sweden (68°N, 21°E). The method used here avoids these issues experienced with the aircraft probes. Furthermore, with a balloon-borne instrument, data are collected as vertical profiles, more useful for calibrating or evaluating remote sensing measurements than data collected along horizontal traverses. Particles are collected on an oil-coated film at a sampling speed given directly by the ascending rate of the balloon, 4 m s⁻¹. The collecting film is advanced uniformly inside the instrument so that a previously unused section of the film is exposed to ice particles, which are measured by imaging shortly after sampling. The high optical resolution of about 4 µm together with a pixel resolution of 1.65 µm allows particle detection at sizes of 10 µm and larger. For particles that are 20 µm (12 pixels) in size or larger, the shape can be recognized. The sampling volume, 130 cm³ s⁻¹, is well defined and independent of particle size. With the encountered number concentrations of between 4 and 400 L⁻¹, this required about 90- to 4-s sampling times to determine particle size distributions of cloud layers. Depending on how ice particles vary through the cloud, several layers per cloud with relatively uniform properties have been analysed. Preliminary results of the balloon campaign, targeting upper tropospheric, cold cirrus clouds, are presented here. Ice particles in these clouds were predominantly very small, with a median size of measured particles of around 50 µm and about 80 % of all particles below 100 µm in size. The properties of the particle size distributions at temperatures between -36 and -67 °C have been studied, as well as particle areas, extinction coefficients, and their shapes (area ratios). Gamma and log-normal distribution functions could be fitted to all measured particle size distributions, achieving very good correlation with coefficients R of up to 0.95. Each distribution features one distinct mode. With decreasing temperature, the mode diameter decreases exponentially, whereas the total number concentration increases by two orders of magnitude over the same range. The high concentrations at cold temperatures also caused larger extinction coefficients, directly determined from cross-sectional areas of single ice particles, than at warmer temperatures. The mass of particles has been estimated from area and size. Ice water content (IWC) and effective diameters are then determined from the data. IWC varied only between 1 × 10⁻³ and 5 × 10⁻³ g m⁻³ at temperatures below -40 °C and did not show a clear temperature trend.
These measurements are part of an ongoing study.
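The sampling arithmetic and distribution fitting described above can be illustrated briefly. The diameters below are synthetic, and the 60-s integration time is an assumed choice within the 4- to 90-s range quoted in the abstract; only the 130 cm³ s⁻¹ sampling rate comes from the text.

```python
# Minimal sketch: number concentration from a fixed sampling-volume flow rate,
# plus a log-normal fit to (synthetic) particle diameters.
import numpy as np
from scipy import stats

flow_cm3_per_s = 130.0          # sampling volume rate quoted in the abstract
sample_time_s = 60.0            # assumed integration time per cloud layer

rng = np.random.default_rng(2)
diam_um = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=400)   # synthetic sizes

n_conc_per_L = len(diam_um) / (flow_cm3_per_s * sample_time_s) * 1000.0  # cm^-3 -> L^-1
shape, loc, scale = stats.lognorm.fit(diam_um, floc=0.0)
print(f"number concentration ~ {n_conc_per_L:.0f} per litre")
print(f"log-normal fit: median {scale:.1f} um, geometric sigma {np.exp(shape):.2f}")
```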
Minimum and Maximum Times Required to Obtain Representative Suspended Sediment Samples
NASA Astrophysics Data System (ADS)
Gitto, A.; Venditti, J. G.; Kostaschuk, R.; Church, M. A.
2014-12-01
Bottle sampling is a convenient method of obtaining suspended sediment measurements for the development of sediment budgets. While these methods are generally considered to be reliable, recent analysis of depth-integrated sampling has identified considerable uncertainty in measurements of grain-size concentration between grain-size classes of multiple samples. Point-integrated bottle sampling is assumed to represent the mean concentration of suspended sediment, but the uncertainty surrounding this method is not well understood. Here we examine at-a-point variability in velocity, suspended sediment concentration, grain-size distribution, and grain-size moments to determine whether traditional point-integrated methods provide a representative sample of suspended sediment. We present continuous hour-long observations of suspended sediment from the sand-bedded portion of the Fraser River at Mission, British Columbia, Canada, using a LISST laser-diffraction instrument. Spectral analysis shows no statistically significant peaks in energy density, suggesting the absence of periodic fluctuations in flow and suspended sediment. However, a slope break in the spectra at 0.003 Hz corresponds to a period of 5.5 minutes. This coincides with the threshold between large-scale turbulent eddies that scale with channel width/mean velocity and hydraulic phenomena related to channel dynamics. This suggests that suspended sediment samples taken over a period longer than 5.5 minutes incorporate variability that is larger in scale than turbulent phenomena in this channel. Examination of 5.5-minute periods of our time series indicates that ~20% of the time a stable mean value of volumetric concentration is reached within 30 seconds, a typical bottle sample duration. In ~12% of measurements a stable mean was not reached over the 5.5-minute sample duration. The remaining measurements achieve a stable mean in an even distribution over the intervening interval.
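The "stable mean" check can be expressed as a running-mean convergence test over a 5.5-minute window. The sketch below uses a synthetic 1 Hz concentration series and an assumed 5% tolerance; it is not the authors' processing code.

```python
# Hedged sketch: find how long the running mean of a concentration time series
# takes to stay within a tolerance of the full 5.5-minute mean.
import numpy as np

rng = np.random.default_rng(3)
fs = 1.0                                    # Hz, assumed sampling rate
t = np.arange(0, 330, 1.0 / fs)             # one 5.5-minute window
conc = 200 + 20 * rng.standard_normal(t.size) + 10 * np.sin(2 * np.pi * t / 120)

window_mean = conc.mean()
running_mean = np.cumsum(conc) / np.arange(1, conc.size + 1)
within = np.abs(running_mean - window_mean) / window_mean < 0.05   # 5% tolerance
# last index at which the running mean is still outside the tolerance band
stable_idx = np.max(np.nonzero(~within)[0], initial=-1) + 1
print(f"running mean stable after ~{stable_idx / fs:.0f} s of the 330 s window")
```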
Cruz-Roa, Angel; Gilmore, Hannah; Basavanhally, Ajay; Feldman, Michael; Ganesan, Shridar; Shih, Natalie; Tomaszewski, John; Madabhushi, Anant; González, Fabio
2018-01-01
Precise detection of invasive cancer on whole-slide images (WSI) is a critical first step in the digital pathology tasks of diagnosis and grading. Convolutional neural networks (CNN) are the most popular representation learning method for computer vision tasks and have been successfully applied in digital pathology, including tumor and mitosis detection. However, CNNs are typically only tenable with relatively small image sizes (200 × 200 pixels). Only recently have fully convolutional networks (FCN) become able to deal with larger image sizes (500 × 500 pixels) for semantic segmentation. Hence, the direct application of CNNs to WSI is not computationally feasible, because for a WSI a CNN would require billions or trillions of parameters. To alleviate this issue, this paper presents a novel method, High-throughput Adaptive Sampling for whole-slide Histopathology Image analysis (HASHI), which involves: i) a new efficient adaptive sampling method based on probability gradient and quasi-Monte Carlo sampling, and ii) a powerful representation learning classifier based on CNNs. We applied HASHI to automated detection of invasive breast cancer on WSI. HASHI was trained and validated using three different data cohorts involving nearly 500 cases and then independently tested on 195 studies from The Cancer Genome Atlas. The results show that (1) the adaptive sampling method is an effective strategy to deal with WSI without compromising prediction accuracy, obtaining results comparable to dense sampling (∼6 million samples in 24 hours) with far fewer samples (∼2,000 samples in 1 minute), and (2) on an independent test dataset, HASHI is effective and robust to data from multiple sites, scanners, and platforms, achieving an average Dice coefficient of 76%.
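The quasi-Monte Carlo idea behind the adaptive sampler can be illustrated with a low-discrepancy point set. The sketch below generates a 2-D Halton sequence and maps it to hypothetical slide coordinates; it is a simplification, not the HASHI implementation, and the slide dimensions are invented.

```python
# Simplified sketch: a 2-D Halton sequence spreads probe locations over a
# whole-slide image more evenly than uniform random sampling, so a probability
# map can be refined with far fewer patches.
import numpy as np

def van_der_corput(n, base):
    seq = np.zeros(n)
    for i in range(n):
        f, value, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            value += f * (k % base)
            k //= base
        seq[i] = value
    return seq

def halton_2d(n):
    return np.column_stack([van_der_corput(n, 2), van_der_corput(n, 3)])

slide_w, slide_h = 80_000, 60_000          # hypothetical WSI size in pixels
points = halton_2d(2000)                   # ~2,000 probe locations, as in the abstract
patch_centers = (points * [slide_w, slide_h]).astype(int)
print(patch_centers[:5])
```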
Advanced Code-Division Multiplexers for Superconducting Detector Arrays
NASA Astrophysics Data System (ADS)
Irwin, K. D.; Cho, H. M.; Doriese, W. B.; Fowler, J. W.; Hilton, G. C.; Niemack, M. D.; Reintsema, C. D.; Schmidt, D. R.; Ullom, J. N.; Vale, L. R.
2012-06-01
Multiplexers based on the modulation of superconducting quantum interference devices are now regularly used in multi-kilopixel arrays of superconducting detectors for astrophysics, cosmology, and materials analysis. Over the next decade, much larger arrays will be needed. These larger arrays require new modulation techniques and compact multiplexer elements that fit within each pixel. We present a new in-focal-plane code-division multiplexer that provides multiplexing elements with the required scalability. This code-division multiplexer uses compact lithographic modulation elements that simultaneously multiplex both signal outputs and superconducting transition-edge sensor (TES) detector bias voltages. It eliminates the shunt resistor used to voltage bias TES detectors, greatly reduces power dissipation, allows different dc bias voltages for each TES, and makes all elements sufficiently compact to fit inside the detector pixel area. These in-focal plane code-division multiplexers can be combined with multi-GHz readout based on superconducting microresonators to scale to even larger arrays.
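The code-division principle itself, independent of the SQUID hardware, is that orthogonal modulation codes let many detector signals share one readout line. The toy sketch below demodulates a summed signal with Walsh (Hadamard) codes; the signal values are arbitrary and nothing here represents the NIST circuit design.

```python
# Toy sketch of code-division multiplexing: detector signals are modulated by
# mutually orthogonal +/-1 Walsh codes, summed, and recovered by correlation.
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n_det = 8
codes = hadamard(n_det)                       # rows are orthogonal codes
signals = np.array([0.3, -1.2, 0.0, 2.5, 0.7, -0.4, 1.1, 0.05])

multiplexed = codes.T @ signals               # one summed sample per code step
recovered = codes @ multiplexed / n_det       # demodulate by correlation
print(np.allclose(recovered, signals))        # True: orthogonality separates detectors
```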
Understanding the Lunar System Architecture Design Space
NASA Technical Reports Server (NTRS)
Arney, Dale C.; Wilhite, Alan W.; Reeves, David M.
2013-01-01
Based on the flexible path strategy and the desire of the international community, the lunar surface remains a destination for future human exploration. This paper explores options within the lunar system architecture design space, identifying performance requirements placed on the propulsive system that performs Earth departure within that architecture based on existing and/or near-term capabilities. The lander crew module and ascent stage propellant mass fraction are primary drivers for feasibility in multiple lander configurations. As the aggregation location moves further out of the lunar gravity well, the lunar lander is required to perform larger burns, increasing the sensitivity to these two factors. Adding an orbit transfer stage to a two-stage lunar lander and using a large storable stage for braking with a one-stage lunar lander enable higher aggregation locations than Low Lunar Orbit. Finally, while using larger vehicles enables a larger feasible design space, there are still feasible scenarios that use three launches of smaller vehicles.
Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.
2017-01-01
Recently, a class of no-gold-standard (NGS) techniques have been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from ¹⁸F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883
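The bootstrap wrapper for patient-sampling uncertainty can be sketched generically. This is not the authors' NGS estimator: the figure of merit below is a stand-in (standard deviation of relative error against a reference), and the lesion measurements are simulated.

```python
# Hedged sketch: re-estimate a figure of merit on bootstrap resamples of
# patients/lesions to quantify patient-sampling-related uncertainty.
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical per-lesion measurements: "estimated" vs "reference" volumes.
ref = rng.lognormal(mean=2.0, sigma=0.6, size=120)
est = ref * (1.0 + 0.1 * rng.standard_normal(120))

def figure_of_merit(estimates, references):
    # stand-in FoM: standard deviation of the relative measurement error
    return np.std((estimates - references) / references, ddof=1)

boot = np.array([
    figure_of_merit(est[idx], ref[idx])
    for idx in (rng.integers(0, len(ref), len(ref)) for _ in range(2000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"FoM = {figure_of_merit(est, ref):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```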
Vibbert, Hunter B; Ku, Seockmo; Li, Xuan; Liu, Xingya; Ximenes, Eduardo; Kreke, Thomas; Ladisch, Michael R; Deering, Amanda J; Gehring, Andrew G
2015-01-01
Microfiltration of chicken extracts has the potential to significantly decrease the time required to detect Salmonella, as long as the extract can be efficiently filtered and the pathogenic microorganisms kept in a viable state during this process. We present conditions that enable microfiltration by adding endopeptidase from Bacillus amyloliquefaciens to chicken extracts or chicken rinse, prior to microfiltration with fluid flow on both retentate and permeate sides of 0.2 μm cutoff polysulfone and polyethersulfone hollow fiber membranes. After treatment with this protease, the distribution of micron, submicron, and nanometer particles in chicken extracts changes so that the size of the remaining particles corresponds to 0.4-1 μm. Together with alteration of dissolved proteins, this change helps to explain how membrane fouling might be minimized, because the potential foulants are significantly smaller or larger than the membrane pore size. At the same time, we found that the presence of protein protects Salmonella from protease action, thus maintaining cell viability. Concentration and recovery of 1-10 CFU Salmonella/mL from 400 mL chicken rinse are possible in less than 4 h, with the microfiltration step requiring less than 25 min at fluxes of 0.028-0.32 mL/(cm² min). The entire procedure, from sample processing to detection by polymerase chain reaction, is completed in 8 h. © 2015 American Institute of Chemical Engineers.
Speed of response in ultrabrief and brief pulse width right unilateral ECT.
Loo, Colleen K; Garfield, Joshua B B; Katalinic, Natalie; Schweitzer, Isaac; Hadzi-Pavlovic, Dusan
2013-05-01
Ultrabrief pulse width stimulation electroconvulsive therapy (ECT) results in less cognitive side-effects than brief pulse ECT, but recent work suggests that more treatment sessions may be required to achieve similar efficacy. In this retrospective analysis of subjects pooled from three research studies, time to improvement was analysed in 150 depressed subjects who received right unilateral ECT with a brief pulse width (at five times seizure threshold) or ultrabrief pulse width (at six times seizure threshold). Multivariate Cox regression analyses compared the number of treatments required for 50% reduction in depression scores (i.e. speed of response) in these two samples. The analyses controlled for clinical, demographic and treatment variables that differed between the samples or that were found to be significant predictors of speed of response in univariate analyses. In the multivariate analysis, older age predicted faster speed of response. There was a non-significant trend for faster time to 50% improvement with brief pulse ECT (p = 0.067). Remission rates were higher after brief pulse ECT than ultrabrief pulse ECT (p = 0.007) but response rates were similar. This study, the largest of its kind reported to date, suggests that fewer treatments may be needed to attain response with brief than ultrabrief pulse ECT and that remission rates are higher with brief pulse ECT. Further research with a larger randomized and blinded study is recommended.
NASA Astrophysics Data System (ADS)
He, Zhili; Feng, Gang; Yang, Bin; Yang, Lijiang; Liu, Cheng-Wen; Xu, Hong-Guang; Xu, Xi-Ling; Zheng, Wei-Jun; Gao, Yi Qin
2018-06-01
To understand the initial hydration processes of CaCl2, we performed molecular simulations employing the force field based on the theory of electronic continuum correction with rescaling. Integrated tempering sampling molecular dynamics were combined with ab initio calculations to overcome the sampling challenge in cluster structure search and refinement. The calculated vertical detachment energies of CaCl2(H2O)n- (n = 0-8) were compared with the values obtained from photoelectron spectra, and consistency was found between the experiment and computation. Separation of the Cl—Ca ion pair is investigated in CaCl2(H2O)n- anions, where the first Ca—Cl ionic bond required 4 water molecules, and both Ca—Cl bonds are broken when the number of water molecules is larger than 7. For neutral CaCl2(H2O)n clusters, breaking of the first Ca—Cl bond starts at n = 5, and 8 water molecules are not enough to separate the two ion pairs. Comparing with the observations on magnesium chloride, it shows that separating one ion pair in CaCl2(H2O)n requires fewer water molecules than those for MgCl2(H2O)n. Coincidentally, the solubility of calcium chloride is higher than that of magnesium chloride in bulk solutions.
Evaluation of response variables in computer-simulated virtual cataract surgery
NASA Astrophysics Data System (ADS)
Söderberg, Per G.; Laurell, Carl-Gustaf; Simawi, Wamidh; Nordqvist, Per; Skarman, Eva; Nordh, Leif
2006-02-01
We have developed a virtual reality (VR) simulator for phacoemulsification (phaco) surgery. The current work aimed at evaluating the precision of the estimation of response variables identified for measurement of the performance of VR phaco surgery. We identified 31 response variables measuring the overall procedure, the foot pedal technique, the phacoemulsification technique, erroneous manipulation, and damage to ocular structures. In total, eight medical or optometry students with a good knowledge of ocular anatomy and physiology but naive to cataract surgery performed three sessions each of VR phaco surgery. For measurement, the surgical procedure was divided into a sculpting phase and an evacuation phase. The 31 response variables were measured for each phase in all three sessions. The variance components for individuals and for iterations of sessions within individuals were estimated with an analysis of variance assuming a hierarchical model. The consequences of the estimated variabilities for sample size requirements were determined. It was found that there was generally more variability for iterated sessions within individuals for measurements of the sculpting phase than for measurements of the evacuation phase. This resulted in larger required sample sizes for detecting a difference between independent groups, or a change within a group, for the sculpting phase than for the evacuation phase. It is concluded that several of the identified response variables can be measured with sufficient precision for evaluation of VR phaco surgery.
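The link from variance components to required sample size can be sketched with a standard two-group formula. The variance values, effect size, and the simple z-test approximation below are assumptions for illustration, not the study's estimates.

```python
# Minimal sketch: per-group sample size to detect a difference `delta` between
# two independent groups, given between-individual variance (sigma_b2) and
# within-individual session variance (sigma_w2) averaged over n_sessions.
import numpy as np
from scipy.stats import norm

def n_per_group(sigma_b2, sigma_w2, n_sessions, delta, alpha=0.05, power=0.80):
    var_subject_mean = sigma_b2 + sigma_w2 / n_sessions
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * z**2 * var_subject_mean / delta**2))

# hypothetical variance components for one response variable
print(n_per_group(sigma_b2=4.0, sigma_w2=9.0, n_sessions=3, delta=2.0))
```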
Rapid isolation of blood plasma using a cascaded inertial microfluidic device
Robinson, M.; Hinsdale, T.; Coté, G.
2017-01-01
Blood, saliva, mucus, sweat, sputum, and other biological fluids are often hindered in their ability to be used in point-of-care (POC) diagnostics because their assays require some form of off-site sample pre-preparation to effectively separate biomarkers from larger components such as cells. The rapid isolation, identification, and quantification of proteins and other small molecules circulating in the blood plasma from larger interfering molecules are therefore particularly important factors for optical blood diagnostic tests, in particular, when using optical approaches that incur spectroscopic interference from hemoglobin-rich red blood cells (RBCs). In this work, a sequential spiral polydimethylsiloxane (PDMS) microfluidic device for rapid (∼1 min) on-chip blood cell separation is presented. The chip utilizes Dean-force induced migration via two 5-loop Archimedean spirals in series. The chip was characterized in its ability to filter solutions containing fluorescent beads and silver nanoparticles and further using blood solutions doped with a fluorescent protein. Through these experiments, both cellular and small molecule behaviors in the chip were assessed. The results exhibit an average RBC separation efficiency of ∼99% at a rate of 5.2 × 106 cells per second while retaining 95% of plasma components. This chip is uniquely suited for integration within a larger point-of-care diagnostic system for the testing of blood plasma, and the use of multiple filtering spirals allows for the tuning of filtering steps, making this device and the underlying technique applicable for a wide range of separation applications. PMID:28405258
Thin-plate spline analysis of allometry and sexual dimorphism in the human craniofacial complex.
Rosas, Antonio; Bastir, Markus
2002-03-01
The relationship between allometry and sexual dimorphism in the human craniofacial complex was analyzed using geometric morphometric methods. Thin-plate splines (TPS) analysis has been applied to investigate the lateral profile of complete adult skulls of known sex. Twenty-nine three-dimensional (3D) craniofacial and mandibular landmark coordinates were recorded from a sample of 52 adult females and 52 adult males of known age and sex. No difference in the influence of size on shape was detected between sexes. Both size and sex had significant influences on shape. As expected, the influence of centroid size on shape (allometry) revealed a shift in the proportions of the neurocranium and the viscerocranium, with a marked allometric variation of the lower face. Adjusted for centroid size, males presented a relatively larger size of the nasopharyngeal space than females. A mean-male TPS transformation revealed a larger piriform aperture, achieved by an increase of the angulation of the nasal bones and a downward rotation of the anterior nasal floor. Male pharynx expansion was also reflected by larger choanae and a more posteriorly inclined basilar part of the occipital clivus. Male muscle attachment sites appeared more pronounced. In contrast, the mean-female TPS transformation was characterized by a relatively small nasal aperture. The occipital clivus inclined anteriorly, and muscle insertion areas became smoothed. Besides these variations, both maxillary and mandibular alveolar regions became prognathic. The sex-specific TPS deformation patterns are hypothesized to be associated with sexual differences in body composition and energetic requirements. Copyright 2002 Wiley-Liss, Inc.
Modeling of the "PLAN DA MATTUN" Archaeological Site Using a Combination of Different Sensors
NASA Astrophysics Data System (ADS)
Novák, D.; Tokarczyk, P.; Theiler, P. W.
2012-07-01
Plan da Mattun is located at ~2200 metres above sea level in the Tasna valley in alpine south-eastern Switzerland. In this remote location, finds dating back to the time of Ötzi (3000 B.C.) were discovered by archaeologists from the University of Zurich. For detailed investigations of the site, as well as for documentation and visualization purposes, the archaeologists were interested in digital models of the terrain and of certain boulders. In the presented project a digital terrain model of the rock stream located at the beginning of the valley was created, as well as detailed models of four larger boulders. These boulders average 15 metres in height and width. The roughness of the terrain makes it difficult to access certain areas and requires using multiple surveying techniques in order to cover all objects of interest. Therefore the digital terrain model was acquired using a combination of terrestrial laser scanning (TLS) and photogrammetric recording from an unmanned aerial vehicle (UAV). The larger boulders were reconstructed with a combination of TLS, terrestrial and UAV-based photogrammetry. With this approach it was possible to acquire a high-accuracy dataset over an area of 0.12 km² under difficult conditions. The dataset includes a digital terrain model with a ground sampling distance of 10 cm and a relative accuracy of 2 cm in moderately sloped terrain. The larger boulders feature a resolution of 1 cm and a relative accuracy of 0.5 cm. The 3D data are to be used both for archaeological visualization purposes and for geological analysis of the rock stream.
Li, Chunjia; Jackson, Phillip; Lu, Xin; Xu, Chaohua; Cai, Qing; Basnayake, Jayapathi; Lakshmanan, Prakash; Ghannoum, Oula; Fan, Yuanhong
2017-04-01
Sugarcane, derived from the hybridization of Saccharum officinarum × Saccharum spontaneum, is a vegetative crop in which the final yield is highly driven by culm biomass production. Cane yield under irrigated or rain-fed conditions could be improved by developing genotypes with leaves that have high intrinsic transpiration efficiency, TEi (CO2 assimilation/stomatal conductance), provided this is not offset by negative impacts from reduced conductance and growth rates. This study was conducted to partition genotypic variation in TEi among a sample of diverse clones from the Chinese collection of sugarcane-related germplasm into that due to variation in stomatal conductance versus that due to variation in photosynthetic capacity. A secondary goal was to define protocols for optimized larger-scale screening of germplasm collections. Genotypic variation in TEi was attributed to significant variation in both stomatal and photosynthetic components. A number of genotypes were found to possess high TEi as a result of high photosynthetic capacity. This trait combination is expected to be of significant breeding value. It was determined that a small number of observations (16) is sufficient for efficiently screening TEi in larger populations of sugarcane genotypes. The research methodology and results reported are encouraging in supporting a larger-scale screening and introgression of high transpiration efficiency in sugarcane breeding. However, further research is required to quantify narrow-sense heritability as well as the leaf-to-field translational potential of the genotypic variation in transpiration efficiency-related traits observed in this study. © The Author 2017. Published by Oxford University Press on behalf of the Society for Experimental Biology.
Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay
2011-01-01
Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes.
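As a rough illustration of the kind of calculation the abstract describes (back-transforming log(x+1) and √x means to pmol/ml and sizing a two-arm study on the transformed scale), the sketch below uses a generic normal-approximation formula; the transformed-scale mean, SD, and 20% relative effect are placeholder assumptions, not the TrialNet estimates.

```python
# Hedged sketch only: placeholder values, generic two-sample formula; the
# paper's own expressions and variance estimates are not reproduced here.
import numpy as np
from scipy.stats import norm

def back_transform(mean_t, kind):
    if kind == "log1p":                    # y = log(x + 1)  ->  x = exp(y) - 1
        return np.exp(mean_t) - 1.0
    if kind == "sqrt":                     # y = sqrt(x)     ->  x = y**2
        return mean_t ** 2
    raise ValueError(kind)

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a mean difference delta."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * (z * sd / delta) ** 2))

mu_t, sd_t = 0.45, 0.45            # assumed control mean/SD of log(C-peptide AUC + 1)
rel_effect = 0.20                  # assumed 20% higher geometric mean of (AUC + 1) with treatment
delta_t = np.log(1 + rel_effect)   # a ratio of geometric means is additive on the log scale
print("control mean in pmol/ml:", round(back_transform(mu_t, "log1p"), 3))
print("approx. n per arm:", n_per_arm(delta_t, sd_t))
```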
Finkel, Eli J; Eastwick, Paul W; Reis, Harry T
2017-08-01
Finkel, Eastwick, and Reis (2015; FER2015) argued that psychological science is better served by responding to apprehensions about replicability rates with contextualized solutions than with one-size-fits-all solutions. Here, we extend FER2015's analysis to suggest that much of the discussion of best research practices since 2011 has focused on a single feature of high-quality science, replicability, with insufficient sensitivity to the implications of recommended practices for other features, like discovery, internal validity, external validity, construct validity, consequentiality, and cumulativeness. Thus, although recommendations for bolstering replicability have been innovative, compelling, and abundant, it is difficult to evaluate their impact on our science as a whole, especially because many research practices that are beneficial for some features of scientific quality are harmful for others. For example, FER2015 argued that bigger samples are generally better, but also noted that very large samples ("those larger than required for effect sizes to stabilize"; p. 291) could have the downside of commandeering resources that would have been better invested in other studies. In their critique of FER2015, LeBel, Campbell, and Loving (2016) concluded, based on simulated data, that ever-larger samples are better for the efficiency of scientific discovery (i.e., that there are no tradeoffs). As demonstrated here, however, this conclusion holds only when the replicator's resources are considered in isolation. If we widen the assumptions to include the original researcher's resources as well, which is necessary if the goal is to consider resource investment for the field as a whole, the conclusion changes radically, and strongly supports a tradeoff-based analysis. In general, as psychologists seek to strengthen our science, we must complement our much-needed work on increasing replicability with careful attention to the other features of a high-quality science. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
2011-01-01
Background The relationship between urbanicity and adolescent health is a critical issue for which little empirical evidence has been reported. Although an association has been suggested, a dichotomous rural versus urban comparison may not succeed in identifying differences between adolescent contexts. This study aims to assess the influence of locality size on risk behaviors in a national sample of young Mexicans living in low-income households, while considering the moderating effect of socioeconomic status (SES). Methods This is a secondary analysis of three national surveys of low-income households in Mexico in different settings: rural, semi-urban and urban areas. We analyzed risk behaviors in 15-21-year-olds and their potential relation to urbanicity. The risk behaviors explored were: tobacco and alcohol consumption, sexual initiation and condom use. The adolescents' localities of residence were classified according to the number of inhabitants in each locality. We used a logistic model to identify an association between locality size and risk behaviors, including an interaction term with SES. Results The final sample included 17,974 adolescents from 704 localities in Mexico. Locality size was associated with tobacco and alcohol consumption, showing a similar effect throughout all SES levels: the larger the size of the locality, the lower the risk of consuming tobacco or alcohol compared with rural settings. The effect of locality size on sexual behavior was more complex. The odds of adolescent condom use were higher in larger localities only among adolescents in the lowest SES levels. We found no statistically significant association between locality size and sexual initiation. Conclusions The results suggest that in this sample of adolescents from low-income areas in Mexico, risk behaviors are related to locality size (number of inhabitants). Furthermore, for condom use, this relation is moderated by SES. Such heterogeneity suggests the need for more detailed analyses of both the effects of urbanicity on behavior, and the responses, which are also heterogeneous, required to address this situation. PMID:22129110
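The model described (a risk behavior regressed on locality-size category with an SES interaction) has the general form sketched below; the variable names and the simulated data are hypothetical stand-ins, not the survey data.

```python
# Hedged sketch of a logistic model with a locality-size x SES interaction.
# Data are simulated so the example runs; coefficients are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
locality = rng.choice(["rural", "semi-urban", "urban"], size=n)
ses = rng.choice(["low", "mid", "high"], size=n)
# simulate lower smoking odds in larger localities, as the abstract reports
logit_p = -1.0 - 0.4 * (locality == "semi-urban") - 0.8 * (locality == "urban")
p = 1.0 / (1.0 + np.exp(-logit_p))
df = pd.DataFrame({"smokes": rng.binomial(1, p), "locality": locality, "ses": ses})

model = smf.logit("smokes ~ C(locality) * C(ses)", data=df).fit(disp=0)
print(np.exp(model.params))   # odds ratios for locality size, SES, and their interaction
```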
Tracey, Amanda J; Aarssen, Lonnie W
2014-01-01
The selection consequences of competition in plants have been traditionally interpreted based on a “size-advantage” hypothesis – that is, under intense crowding/competition from neighbors, natural selection generally favors capacity for a relatively large plant body size. However, this conflicts with abundant data, showing that resident species body size distributions are usually strongly right-skewed at virtually all scales within vegetation. Using surveys within sample plots and a neighbor-removal experiment, we tested: (1) whether resident species that have a larger maximum potential body size (MAX) generally have more successful local individual recruitment, and thus greater local abundance/density (as predicted by the traditional size-advantage hypothesis); and (2) whether there is a general between-species trade-off relationship between MAX and capacity to produce offspring when body size is severely suppressed by crowding/competition – that is, whether resident species with a larger MAX generally also need to reach a larger minimum reproductive threshold size (MIN) before they can reproduce at all. The results showed that MIN had a positive relationship with MAX across resident species, and local density – as well as local density of just reproductive individuals – was generally greater for species with smaller MIN (and hence smaller MAX). In addition, the cleared neighborhoods of larger target species (which had relatively large MIN) generally had – in the following growing season – a lower ratio of conspecific recruitment within these neighborhoods relative to recruitment of other (i.e., smaller) species (which had generally smaller MIN). These data are consistent with an alternative hypothesis based on a ‘reproductive-economy-advantage’ – that is, superior fitness under competition in plants generally requires not larger potential body size, but rather superior capacity to recruit offspring that are in turn capable of producing grand-offspring – and hence transmitting genes to future generations – despite intense and persistent (cross-generational) crowding/competition from near neighbors. Selection for the latter is expected to favor relatively small minimum reproductive threshold size and hence – as a tradeoff – relatively small (not large) potential body size. PMID:24772274
46 CFR 120.340 - Cable and wiring requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... for the circuit in which they are used; (2) Be installed in a manner to avoid or reduce interference... paragraph (b)(8) of this section. (c) Conductors in power and lighting circuits must be No. 14 American Wire Gauge (AWG) or larger. Conductors in control and indicator circuits must be No. 22 AWG or larger. (d...
46 CFR 120.340 - Cable and wiring requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... for the circuit in which they are used; (2) Be installed in a manner to avoid or reduce interference... paragraph (b)(8) of this section. (c) Conductors in power and lighting circuits must be No. 14 American Wire Gauge (AWG) or larger. Conductors in control and indicator circuits must be No. 22 AWG or larger. (d...
46 CFR 120.340 - Cable and wiring requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... for the circuit in which they are used; (2) Be installed in a manner to avoid or reduce interference... paragraph (b)(8) of this section. (c) Conductors in power and lighting circuits must be No. 14 American Wire Gauge (AWG) or larger. Conductors in control and indicator circuits must be No. 22 AWG or larger. (d...
46 CFR 120.340 - Cable and wiring requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... for the circuit in which they are used; (2) Be installed in a manner to avoid or reduce interference... paragraph (b)(8) of this section. (c) Conductors in power and lighting circuits must be No. 14 American Wire Gauge (AWG) or larger. Conductors in control and indicator circuits must be No. 22 AWG or larger. (d...
Oligocene and Miocene larger foraminiferida from Australia and New Zealand
NASA Astrophysics Data System (ADS)
Chaproniere, G. C. H.
The lithostratigraphy, biostratigraphy and the systematics of larger foraminiferids at several Late Oligocene to Middle Miocene localities in Australia are described. In particular, sediments of this interval in the North West Cape area of the Carnarvon Basin, Western Australia, yielded diverse faunas of larger and planktic foraminiferids. Areas in New Zealand were also sampled and studied. Forty species and subspecies, representing 25 genera or subgenera of larger foraminiferids, were recorded. Wherever possible, biometric methods have been used to discriminate between taxa. Such studies suggest that the rates of evolution of some groups of larger foraminiferids in New Zealand were different from those in the Australian region. Among the taxa that are illustrated and described in detail are two subspecies of Lepidocyclina (Nephrolepidina) proposed as new: Lepidocyclina (Nephrolepidina) howchini praehowchini and Lepidocyclina (Nephrolepidina) orakeiensis waikukuensis. Topotypes of L. (N.) orakeiensis hornibrooki and L. (N.) howchini howchini are discussed and figured.
Gabrieli, Francesca; Rosi, Francesca; Vichi, Alessandra; Cartechini, Laura; Pensabene Buemi, Luciano; Kazarian, Sergei G; Miliani, Costanza
2017-01-17
Protrusions, efflorescence, delamination, and loss of opacity are severe degradation phenomena affecting oil paints with zinc oxide, one of the most common white pigments of the 20th century. Responsible for these dramatic alterations are the Zn carboxylates (also known as Zn soaps) formed by the interaction of the pigment with the fatty acids resulting from the hydrolysis of glycerides in the oil binding medium. Despite their widespread occurrence in paintings and the growing interest of the scientific community, the process of formation and evolution of Zn soaps is not yet fully understood. In this study, micro-attenuated total reflection (ATR) FT-IR spectroscopic imaging was used for the microscale investigation of the nature and distribution of Zn soaps in the painting Alchemy by J. Pollock (1947, Peggy Guggenheim Collection, Venice) and for comparison with artificially aged model samples. For both actual samples and models, the role of AlSt(OH)2, a gelling agent commonly added in 20th century paint tube formulations, proved decisive for the formation of zinc stearate-like (ZnSt2) soaps. It was observed that ZnSt2-like soaps first form around the added AlSt(OH)2 particles and then eventually grow within the whole painting stratigraphy as irregularly shaped particles. In some of the Alchemy samples, and differently from the models, a peculiar distribution of ZnSt2 aggregates arranged as rounded and larger particles was also documented. Notably, in one of these samples, larger agglomerates of ZnSt2 expanding toward the support of the painting were observed and interpreted as the early stage of the formation of internal protrusions. Micro-ATR-FT-IR spectroscopic imaging, thanks to a very high chemical specificity combined with high spatial resolution, proved to give valuable information for assessing the conservation state of irreplaceable 20th century oil paintings, revealing the chemical distribution of Zn soaps within the paint stratigraphy before their effect becomes disruptive.
The Quasar Pairs Environment At z ∼ 0.5
NASA Astrophysics Data System (ADS)
Sandrinelli, Angela; Falomo, R.; Treves, A.; Scarpa, R.; Uslenghi, M.
2016-10-01
We analyze the environment of a sample of 20 quasar physical pairs at 0.4
Resistance Training Increases the Variability of Strength Test Scores
2009-06-08
standard deviations for pretest and posttest strength measurements. This information was recorded for every strength test used in a total of 377 samples...significant if the posttest standard deviation consistently was larger than the pretest standard deviation. This condition could be satisfied even if...the difference in the standard deviations was small. For example, the posttest standard deviation might be 1% larger than the pretest standard
Kim, Gloria; Chu, Renxin; Yousuf, Fawad; Tauhid, Shahamat; Stazzone, Lynn; Houtchens, Maria K; Stankiewicz, James M; Severson, Christopher; Kimbrough, Dorlan; Quintana, Francisco J; Chitnis, Tanuja; Weiner, Howard L; Healy, Brian C; Bakshi, Rohit
2017-11-01
The subcortical deep gray matter (DGM) develops selective, progressive, and clinically relevant atrophy in progressive forms of multiple sclerosis (PMS). This patient population is the target of active neurotherapeutic development, requiring the availability of outcome measures. We tested a fully automated MRI analysis pipeline to assess DGM atrophy in PMS. Consistent 3D T1-weighted high-resolution 3T brain MRI was obtained over one year in 19 consecutive patients with PMS [15 secondary progressive, 4 primary progressive, 53% women, age (mean±SD) 50.8±8.0 years, Expanded Disability Status Scale (median 5.0, range 2.0-6.5)]. DGM segmentation applied the fully automated FSL-FIRST pipeline (http://fsl.fmrib.ox.ac.uk). Total DGM volume was the sum of the caudate, putamen, globus pallidus, and thalamus. On-study change was calculated using a random-effects linear regression model. We detected one-year decreases in raw [mean (95% confidence interval): -0.749 ml (-1.455, -0.043), p = 0.039] and annualized [-0.754 ml/year (-1.492, -0.016), p = 0.046] total DGM volumes. A treatment trial for an intervention that would show a 50% reduction in DGM brain atrophy would require a sample size of 123 patients for a single-arm study (one-year run-in followed by one-year on-treatment). For a two-arm placebo-controlled one-year study, 242 patients would be required per arm. The use of DGM fraction required more patients. The thalamus, putamen, and globus pallidus showed smaller effect sizes in their on-study changes than the total DGM; however, for the caudate, the effect sizes were somewhat larger. DGM atrophy may prove efficient as a short-term outcome for proof-of-concept neurotherapeutic trials in PMS.
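A back-of-the-envelope check of the scale of these sample sizes can be made with a generic two-sample formula, inferring the between-patient SD from the published confidence interval; this is a sketch only, and because the paper's own calculation rests on its random-effects model, the published figures (123 single-arm, 242 per arm) are not expected to be reproduced exactly.

```python
# Approximation under stated assumptions: SD inferred from the reported 95% CI
# of annualized DGM change (-0.754 ml/yr, CI -1.492 to -0.016, n = 19).
import numpy as np
from scipy.stats import norm, t

n_pilot = 19
mean_change = -0.754
ci_lo, ci_hi = -1.492, -0.016

se = (ci_hi - ci_lo) / (2 * t.ppf(0.975, df=n_pilot - 1))  # SE from the CI half-width
sd = se * np.sqrt(n_pilot)                                  # between-patient SD of change
delta = abs(mean_change) * 0.5                              # 50% slowing of atrophy

z = norm.ppf(0.975) + norm.ppf(0.80)                        # alpha = 0.05, power = 80%
n_two_arm = 2 * (z * sd / delta) ** 2
print(f"SD ~ {sd:.2f} ml/yr; approx. n per arm ~ {np.ceil(n_two_arm):.0f}")  # same order as 242
```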
The Impact of Asking Intention or Self-Prediction Questions on Subsequent Behavior
Wood, Chantelle; Conner, Mark; Miles, Eleanor; Sandberg, Tracy; Taylor, Natalie; Godin, Gaston; Sheeran, Paschal
2015-01-01
The current meta-analysis estimated the magnitude of the impact of asking intention and self-prediction questions on rates of subsequent behavior, and examined mediators and moderators of this question–behavior effect (QBE). Random-effects meta-analysis on 116 published tests of the effect indicated that intention/prediction questions have a small positive effect on behavior (d+ = 0.24). Little support was observed for attitude accessibility, cognitive dissonance, behavioral simulation, or processing fluency explanations of the QBE. Multivariate analyses indicated significant effects of social desirability of behavior/behavior domain (larger effects for more desirable and less risky behaviors), difficulty of behavior (larger effects for easy-to-perform behaviors), and sample type (larger effects among student samples). Although this review controls for co-occurrence of moderators in multivariate analyses, future primary research should systematically vary moderators in fully factorial designs. Further primary research is also needed to unravel the mechanisms underlying different variants of the QBE. PMID:26162771
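The pooling step named in the abstract, random-effects meta-analysis, has a compact generic form (DerSimonian-Laird); the sketch below uses three made-up (effect, variance) pairs rather than the 116 published tests, purely to show the mechanics behind an estimate like d+ = 0.24.

```python
# Minimal DerSimonian-Laird random-effects pooling; toy inputs, not the review's data.
import numpy as np

d = np.array([0.05, 0.30, 0.50])     # per-study standardized mean differences
v = np.array([0.02, 0.04, 0.03])     # per-study sampling variances

w = 1.0 / v                                    # fixed-effect weights
d_fixed = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fixed) ** 2)             # heterogeneity statistic
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / C)        # between-study variance

w_star = 1.0 / (v + tau2)                      # random-effects weights
d_plus = np.sum(w_star * d) / np.sum(w_star)   # pooled effect (analogue of d+)
se_plus = np.sqrt(1.0 / np.sum(w_star))
print(f"d+ = {d_plus:.2f} (95% CI {d_plus - 1.96*se_plus:.2f} to {d_plus + 1.96*se_plus:.2f})")
```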
ERIC Educational Resources Information Center
Cross, Jennifer Riedl; Fletcher, Kathryn L.; Speirs Neumeister, Kristie L.
2011-01-01
In this collective case study of caregiver behaviors with their toddlers, two-minute videotaped reading interactions were analyzed using a constant comparative method. Twenty-four caregiver-toddler dyads from a high-risk sample of children prenatally exposed to cocaine were selected from a larger sample because they represented the extremes of…
Rare earth element geochemistry of outcrop and core samples from the Marcellus Shale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noack, Clinton W.; Jain, Jinesh C.; Stegmeier, John
In this paper, we studied the geochemistry of the rare earth elements (REE) in eleven outcrop samples and six depth-interval samples of a core from the Marcellus Shale. The REE are classically applied analytes for investigating depositional environments and inferring geochemical processes, making them of interest as potential, naturally occurring indicators of fluid sources as well as indicators of geochemical processes in solid waste disposal. However, little is known of the REE occurrence in the Marcellus Shale or its produced waters, and this study represents one of the first thorough characterizations of the REE in the Marcellus Shale. In these samples, the abundance of REE and the fractionation of REE profiles were correlated with different mineral components of the shale. Namely, samples with a larger clay component were inferred to have higher absolute concentrations of REE but have less distinctive patterns. Conversely, samples with larger carbonate fractions exhibited a greater degree of fractionation, albeit with lower total abundance. Further study is necessary to determine release mechanisms, as well as REE fate and transport; however, these results have implications for future brine and solid waste management applications.
P wave dispersion in patients with hypochondriasis.
Atmaca, Murad; Korkmaz, Hasan; Korkmaz, Sevda
2010-11-26
P wave dispersion (Pd), defined as the difference between the maximum and the minimum P wave duration, has been associated with anxiety. Thus, we wondered whether Pd in hypochondriasis, which is associated with anxiety, differed from that in healthy controls. Pd was measured in 30 hypochondriac patients and the same number of physically and mentally healthy age- and gender-matched controls. Hamilton Depression Rating (HDRS) and Hamilton Anxiety Rating Scales (HARS) were scored. The heart rate and left atrium (LA) sizes were not significantly different between groups. However, both Pmax and Pmin values of the patients were significantly higher than those of healthy controls. As for the main variable investigated in the present study, the corrected Pd was significantly longer in the patient group compared to the control group. On the basis of this study, we can conclude that Pd may be related to hypochondriasis, though our sample is too small to allow a clear conclusion. Future studies with larger samples evaluating the effects of treatment are required. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Renal echo-3D and microalbuminuria in children of diabetic mothers: a preliminary study.
Cappuccini, B; Torlone, E; Ferri, C; Arnone, S; Troiani, S; Bini, V; Bellomo, G; Barboni, G; Di Renzo, G
2013-08-01
Maternal diabetes has assumed epidemic relevance in recent years and animal studies have provided some evidence that it may cause abnormalities in renal development and a reduction in nephron endowment in the offspring; however, human data are lacking. The renal cortex contains ∼95% of the glomeruli and its volume could be taken as a surrogate measure of glomerular number; based on this assumption, we measured renal cortex volume and, in addition, microalbuminuria in a homogeneous sample of 42 children of diabetic (pregestational, n = 13, and gestational, n = 29) mothers, compared with 21 healthy children born of non-diabetic mothers. The offspring of diabetic mothers showed a significant reduction of renal cortex volume and higher albumin excretion compared with controls, possibly attributable to a reduction in the number of nephrons; the difference was statistically significant (P < 0.001). Although further studies on a larger sample are necessary, our preliminary findings suggest that maternal diabetes may affect renal development with sequelae later in life, requiring closer monitoring and follow-up. Furthermore, the importance of strict maternal diabetes management and control must be emphasized.
Effect of Prestrain on Precipitation Behaviors of Ti-2.5Cu Alloy
NASA Astrophysics Data System (ADS)
Lincai, Zhang; Xiaoming, Ding; Wei, Ye; Man, Zhang; Zhenya, Song
2018-04-01
As a special hardenable α titanium alloy, Ti-2.5Cu alloy is a candidate material for high-temperature components requiring high strength and plasticity. The effect of prestrain on the precipitation behaviors was investigated in the present study. Tensile tests show that elongation up to 22% can be obtained after solid solution (SS) treatment. Thereafter, prestrain in tension of 5%, 10%, 15% and 20% was applied to the SS samples, followed by duplex aging. Transmission electron microscopy (TEM) investigations show that larger Ti2Cu particles were observed in the prestrained condition than in the free-aging one, as prestrain significantly speeds up the precipitation kinetics. The strength first increases and then decreases for the prestrained samples after duplex aging, for which the competition between precipitation hardening and recovery softening is likely responsible. Taking SS strengthening, precipitation and recovery into consideration, a strength model for duplex aging combined with prestrain was established, which is in good agreement with experiments. The present study may provide a promising way to estimate the strength of deformed hcp materials in industrial applications.
Kumar, Saurabh; Singh, Swarndeep; Parmar, Arpit; Verma, Rohit; Kumar, Nand
2018-05-01
To explore the role of dorsolateral prefrontal cortex (DLPFC) stimulation in the treatment of panic disorder with comorbid depression. The present study reports findings from a retrospective analysis of 13 treatment-resistant patients diagnosed with comorbid panic disorder and depression, given 20 sessions of high-frequency repetitive transcranial magnetic stimulation (rTMS) over the left DLPFC over a period of 1 month. There was a significant reduction in both panic and depressive symptom severity, assessed by applying the Panic Disorder Severity Scale (PDSS) and the Hamilton Depression Rating Scale (HDRS) at baseline and after 20 sessions of rTMS. There was a 38% and 40% reduction of PDSS and HDRS scores, respectively, in the sample. The changes in PDSS and HDRS scores were not significantly correlated (ρ = -0.103, p = 0.737). High-frequency rTMS delivered over the left DLPFC may have a potential role in the treatment of comorbid panic disorder and depression. Future studies on a larger sample in a controlled setting are required to establish its role.
NASA Astrophysics Data System (ADS)
Tubman, Norm; Whaley, Birgitta
The development of exponential scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently, and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground state and excited state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.
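The deterministic selection idea can be illustrated with a toy calculation: grow a selected determinant space by ranking outside determinants by their first-order coupling to the current wave function. The sketch below is an assumption-laden caricature (a random symmetric matrix stands in for a real Hamiltonian, and the |H_ai c_i| ranking is only ASCI/heat-bath-like), not the production algorithm.

```python
# Toy selected-CI sketch under stated assumptions; not the actual ASCI code.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                   # size of the full "determinant" space
H = rng.normal(scale=0.05, size=(n, n))
H = 0.5 * (H + H.T)                       # symmetric off-diagonal couplings
H[np.diag_indices(n)] = np.sort(rng.uniform(0.0, 5.0, size=n))  # diagonal energies

def selected_ci(H, n_start=5, n_add=10, n_iter=8):
    """Iteratively grow a selected space using a heat-bath/ASCI-like importance rule."""
    selected = list(np.argsort(np.diag(H))[:n_start])     # start from lowest-energy dets
    for _ in range(n_iter):
        sub = H[np.ix_(selected, selected)]
        c = np.linalg.eigh(sub)[1][:, 0]                   # ground-state coefficients
        outside = np.setdiff1d(np.arange(H.shape[0]), selected)
        # importance of determinant a ~ max_i |H_ai * c_i| (first-order coupling)
        importance = np.max(np.abs(H[np.ix_(outside, selected)] * c), axis=1)
        selected.extend(outside[np.argsort(importance)[::-1][:n_add]])
    e_sel = np.linalg.eigh(H[np.ix_(selected, selected)])[0][0]
    return e_sel, len(selected)

e_sel, m = selected_ci(H)
e_exact = np.linalg.eigh(H)[0][0]
print(f"selected-CI energy {e_sel:.4f} with {m} determinants; exact {e_exact:.4f}")
```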
1H magic-angle spinning NMR evolves as a powerful new tool for membrane proteins
NASA Astrophysics Data System (ADS)
Schubeis, Tobias; Le Marchand, Tanguy; Andreas, Loren B.; Pintacuda, Guido
2018-02-01
Building on a decade of continuous advances of the community, the recent development of very fast (60 kHz and above) magic-angle spinning (MAS) probes has revolutionised the field of solid-state NMR. This new spinning regime reduces the 1H-1H dipolar couplings, so that direct detection of the larger magnetic moment available from 1H is now possible at high resolution, not only in deuterated molecules but also in fully-protonated substrates. Such capabilities allow rapid "fingerprinting" of samples with a ten-fold reduction of the required sample amounts with respect to conventional approaches, and permit extensive, robust and expeditious assignment of small-to-medium sized proteins (up to ca. 300 residues), and the determination of inter-nuclear proximities, relative orientations of secondary structural elements, protein-cofactor interactions, local and global dynamics. Fast MAS and 1H detection techniques have nowadays been shown to be applicable to membrane-bound systems. This paper reviews the strategies underlying this recent leap forward in sensitivity and resolution, describing its potential for the detailed characterization of membrane proteins.
The evolution in the stellar mass of brightest cluster galaxies over the past 10 billion years
NASA Astrophysics Data System (ADS)
Bellstedt, Sabine; Lidman, Chris; Muzzin, Adam; Franx, Marijn; Guatelli, Susanna; Hill, Allison R.; Hoekstra, Henk; Kurinsky, Noah; Labbe, Ivo; Marchesini, Danilo; Marsan, Z. Cemile; Safavi-Naeini, Mitra; Sifón, Cristóbal; Stefanon, Mauro; van de Sande, Jesse; van Dokkum, Pieter; Weigel, Catherine
2016-08-01
Using a sample of 98 galaxy clusters recently imaged in the near-infrared with the European Southern Observatory (ESO) New Technology Telescope, WIYN telescope and William Herschel Telescope, supplemented with 33 clusters from the ESO archive, we measure how the stellar mass of the most massive galaxies in the universe, namely brightest cluster galaxies (BCGs), increases with time. Most of the BCGs in this new sample lie in the redshift range 0.2 < z < 0.6, which has been noted in recent works to mark an epoch over which the growth in the stellar mass of BCGs stalls. From this sample of 132 clusters, we create a subsample of 102 systems that includes only those clusters that have estimates of the cluster mass. We combine the BCGs in this subsample with BCGs from the literature, and find that the growth in stellar mass of BCGs from 10 billion years ago to the present epoch is broadly consistent with recent semi-analytic and semi-empirical models. As in other recent studies, tentative evidence indicates that the stellar mass growth rate of BCGs may have slowed over the past 3.5 billion years. Further work in collecting larger samples, and in better comparing observations with theory using mock images, is required if a more detailed comparison between the models and the data is to be made.
Spitzer Observations of GRB Hosts: A Legacy Approach
NASA Astrophysics Data System (ADS)
Perley, Daniel; Tanvir, Nial; Hjorth, Jens; Berger, Edo; Laskar, Tanmoy; Michalowski, Michal; Chary, Ranga-Ram; Fynbo, Johan; Levan, Andrew
2012-09-01
The host galaxies of long-duration GRBs are drawn from a uniquely broad range of luminosities and redshifts. Thus they offer the possibility of studying the evolution of star-forming galaxies without the limitations of other luminosity-selected samples, which typically are increasingly biased towards the most massive systems at higher redshift. However, reaping the full benefits of this potential requires careful attention to the selection biases affecting host identification. To this end, we propose observations of a Legacy sample of 70 GRB host galaxies (an additional 70 have already been observed by Spitzer), in order to constrain the mass and luminosity function in GRB-selected galaxies at high redshift, including its dependence on redshift and on properties of the afterglow. Crucially, and unlike previous Spitzer surveys, this sample is carefully designed to be uniform and free of optical selection biases that have caused previous surveys to systematically under-represent the role of luminous, massive hosts. We also propose to extend to larger, more powerfully constraining samples the study of two science areas where Spitzer observations have recently shown spectacular success: the hosts of dust-obscured GRBs (which promise to further our understanding of the connection between GRBs and star-formation in the most luminous galaxies), and the evolution of the mass-metallicity relation at z>2 (for which GRB host observations provide particularly powerful constraints on high-z chemical evolution).
NASA Astrophysics Data System (ADS)
Palihawadana Arachchige, Maheshika; Nemala, Humeshkar; Naik, Vaman; Naik, Ratna
Magnetic hyperthermia (MHT) has great potential as a non-invasive cancer therapy technique. Specific absorption rate (SAR), which measures the efficiency of heat generation, mainly depends on magnetic properties of the nanoparticles, such as saturation magnetization (Ms) and magnetic anisotropy (K), which in turn depend on size and shape. Therefore, MHT applications of magnetic nanoparticles often require a controllable synthesis to achieve desirable magnetic properties. We have synthesized Fe3O4 nanoparticles using two different methods, co-precipitation (CP) and hydrothermal (HT) techniques, to produce a similar XRD crystallite size of 12 nm, and subsequently coated them with dextran to prepare ferrofluids for MHT. However, TEM measurements show average particle sizes of 13.8 +/- 3.6 nm and 14.6 +/- 3.6 nm for the HT and CP samples, implying the existence of an amorphous surface layer for both. The MHT data show the two samples have very different SAR values of 110 W/g (CP) and 40 W/g (HT) at room temperature, although they have a similar Ms of 70 +/- 4 emu/g regardless of their different TEM sizes. We fitted the temperature-dependent SAR using linear response theory to explain the observed results. The CP sample shows a larger magnetic core with a narrower size distribution and a higher K value compared to the HT sample.
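For readers unfamiliar with the fitting approach mentioned, a compact form of the linear-response (Rosensweig-type) heating model is sketched below; every parameter value is an illustrative assumption, not one of the values fitted in the study, and the output is in W per kg of magnetic material.

```python
# Hedged sketch of a linear-response SAR model; illustrative parameters only.
import numpy as np

kB  = 1.380649e-23        # J/K
mu0 = 4e-7 * np.pi        # T*m/A

def sar_linear_response(T, d=14e-9, K=2e4, Ms=4.0e5, f=3e5, H0=1.2e4,
                        eta=1e-3, d_hydro=20e-9, rho=5180.0, phi=0.01, tau0=1e-9):
    """SAR (W/kg of magnetite) at temperature T for monodisperse particles."""
    Vm = np.pi * d**3 / 6.0                   # magnetic core volume
    Vh = np.pi * d_hydro**3 / 6.0             # hydrodynamic volume
    tauN = tau0 * np.exp(K * Vm / (kB * T))   # Neel relaxation time
    tauB = 3.0 * eta * Vh / (kB * T)          # Brownian relaxation time
    tau = tauN * tauB / (tauN + tauB)         # effective relaxation time
    chi0 = mu0 * phi * Ms**2 * Vm / (3.0 * kB * T)   # low-field equilibrium susceptibility
    w_tau = 2.0 * np.pi * f * tau
    P = np.pi * mu0 * chi0 * H0**2 * f * w_tau / (1.0 + w_tau**2)  # W per m^3 of fluid
    return P / (phi * rho)                    # normalize to mass of magnetic material

for T in (300.0, 310.0, 320.0):
    print(f"T = {T:.0f} K, SAR ~ {sar_linear_response(T):.0f} W/kg")
```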
Goodwin, Richard J A; Nilsson, Anna; Borg, Daniel; Langridge-Smith, Pat R R; Harrison, David J; Mackay, C Logan; Iverson, Suzanne L; Andrén, Per E
2012-08-30
Analysis of whole animal tissue sections by MALDI MS imaging (MSI) requires effective sample collection and transfer methods to allow the highest quality of in situ analysis of small or hard-to-dissect tissues. We report on the use of double-sided adhesive conductive carbon tape during whole adult rat tissue sectioning of carboxymethyl cellulose (CMC) embedded animals, with samples mounted onto large format conductive glass and conductive plastic MALDI targets, enabling MSI analysis to be performed on both TOF and FT-ICR MALDI mass spectrometers. We show that mounting does not unduly affect small molecule MSI detection by analyzing tiotropium abundance and distribution in rat lung tissues, with direct on-tissue quantitation achieved. Significantly, we use the adhesive tape to provide support to embedded delicate heat-stabilized tissues, enabling sectioning and mounting that maintained tissue integrity in samples that had previously been impossible to adequately section for MSI analysis. The mapping of larger peptidomic molecules was not hindered by tape-mounting samples, and we demonstrate this by mapping the distribution of PEP-19 in both native and heat-stabilized rat brains. Furthermore, we show that without heat stabilization PEP-19 degradation fragments can be detected and identified directly by MALDI MSI analysis. Copyright © 2012 Elsevier B.V. All rights reserved.
The effects of physical and chemical preprocessing on the flowability of corn stover
Crawford, Nathan C.; Nagle, Nick; Sievers, David A.; ...
2015-12-20
Continuous and reliable feeding of biomass is essential for successful biofuel production. However, the challenges associated with biomass solids handling are commonly overlooked. In this study, we examine the effects of preprocessing (particle size reduction, moisture content, chemical additives, etc.) on the flow properties of corn stover. Compressibility, flow properties (interparticle friction, cohesion, unconfined yield stress, etc.), and wall friction were examined for five corn stover samples: ground, milled (dry and wet), acid impregnated, and deacetylated. The ground corn stover was found to be the least compressible and most flowable material. The water and acid impregnated stovers had similar compressibilities. Yet, the wet corn stover was less flowable than the acid impregnated sample, which displayed a flow index equivalent to the dry, milled corn stover. The deacetylated stover, on the other hand, was the most compressible and least flowable material examined. However, all of the tested stover samples had internal friction angles >30°, which could present additional feeding and handling challenges. All of the "wetted" materials (water, acid, and deacetylated) displayed reduced flowabilities (excluding the acid impregnated sample), and enhanced compressibilities and wall friction angles, indicating the potential for added handling issues, which was corroborated via theoretical hopper design calculations. All of the "wetted" corn stovers require larger theoretical hopper outlet diameters and steeper hopper walls than the examined "dry" stovers.
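The "theoretical hopper design calculations" referred to are typically Jenike-type estimates; a minimal sketch of the minimum conical-hopper outlet diameter to prevent a cohesive arch is shown below, with all material property values being illustrative assumptions rather than the measured corn stover data.

```python
# Hedged Jenike-style sketch: B = H(theta) * sigma_crit / (rho_b * g); numbers illustrative.
def min_outlet_diameter(sigma_crit, rho_bulk, half_angle_deg):
    """sigma_crit: critical unconfined yield stress [Pa]; rho_bulk: bulk density [kg/m^3];
    half_angle_deg: hopper half-angle from vertical [deg]."""
    H = 2.0 + half_angle_deg / 60.0          # Jenike H(theta) factor for conical hoppers
    return H * sigma_crit / (rho_bulk * 9.81)

# A more cohesive "wetted" stover (higher unconfined yield stress) needs a larger
# outlet than a free-flowing dry grind, consistent with the trend reported above.
print("dry grind :", round(min_outlet_diameter(300.0, 120.0, 25.0), 2), "m")
print("wetted    :", round(min_outlet_diameter(900.0, 220.0, 15.0), 2), "m")
```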
The role of the RAS pathway in iAMP21-ALL
Ryan, S L; Matheson, E; Grossmann, V; Sinclair, P; Bashton, M; Schwab, C; Towers, W; Partington, M; Elliott, A; Minto, L; Richardson, S; Rahman, T; Keavney, B; Skinner, R; Bown, N; Haferlach, T; Vandenberghe, P; Haferlach, C; Santibanez-Koref, M; Moorman, A V; Kohlmann, A; Irving, J A E; Harrison, C J
2016-01-01
Intrachromosomal amplification of chromosome 21 (iAMP21) identifies a high-risk subtype of acute lymphoblastic leukaemia (ALL), requiring intensive treatment to reduce their relapse risk. Improved understanding of the genomic landscape of iAMP21-ALL will ascertain whether these patients may benefit from targeted therapy. We performed whole-exome sequencing of eight iAMP21-ALL samples. The mutation rate was dramatically disparate between cases (average 24.9, range 5–51) and a large number of novel variants were identified, including frequent mutation of the RAS/MEK/ERK pathway. Targeted sequencing of a larger cohort revealed that 60% (25/42) of diagnostic iAMP21-ALL samples harboured 42 distinct RAS pathway mutations. High sequencing coverage demonstrated heterogeneity in the form of multiple RAS pathway mutations within the same sample and diverse variant allele frequencies (VAFs) (2–52%), similar to other subtypes of ALL. Constitutive RAS pathway activation was observed in iAMP21 samples that harboured mutations in the predominant clone (⩾35% VAF). Viable iAMP21 cells from primary xenografts showed reduced viability in response to the MEK1/2 inhibitor, selumetinib, in vitro. As clonal (⩾35% VAF) mutations were detected in 26% (11/42) of iAMP21-ALL, this evidence of response to RAS pathway inhibitors may offer the possibility to introduce targeted therapy to improve therapeutic efficacy in these high-risk patients. PMID:27168466
Insight into Primordial Solar System Oxygen Reservoirs from Returned Cometary Samples
NASA Technical Reports Server (NTRS)
Brownlee, D. E.; Messenger, S.
2004-01-01
The recent successful rendezvous of the Stardust spacecraft with comet Wild-2 will be followed by its return of cometary dust to Earth in January 2006. Results from two separate dust impact detectors suggest that the spacecraft collected approximately the nominal fluence of at least 1,000 particles larger than 15 micrometers in size. While constituting only about one microgram total, these samples will be sufficient to answer many outstanding questions about the nature of cometary materials. More than two decades of laboratory studies of stratospherically collected interplanetary dust particles (IDPs) of similar size have established the microparticle handling and analytical techniques necessary to study them. It is likely that some IDPs are in fact derived from comets, although complex orbital histories of individual particles have made these assignments difficult to prove. Analysis of bona fide cometary samples will be essential for answering some fundamental outstanding questions in cosmochemistry, such as (1) the proportion of interstellar and processed materials that comprise comets and (2) whether the Solar System had an O-16-rich reservoir. Abundant silicate stardust grains have recently been discovered in anhydrous IDPs, in far greater abundances (200-5,500 ppm) than those in meteorites (25 ppm). Insight into the more subtle O isotopic variations among chondrites and refractory phases will require significantly higher precision isotopic measurements on micrometer-sized samples than are currently available.
Wang, Xiao-Bo; Yin, Yan; Miao, Yuan; Eberhardt, Ralf; Hou, Gang; Herth, Felix J; Kang, Jian
2016-11-01
Diagnosing pleural effusion is challenging, especially in patients with malignant or benign fibrothorax, which is difficult to sample using standard flexible forceps (SFF) via flex-rigid pleuroscopy. An adequate sample is crucial for the differential diagnosis of malignant fibrothorax (malignant pleural mesothelioma, metastatic lung carcinoma, etc.) from benign fibrothorax (benign asbestos pleural disease, tuberculous pleuritis, etc.). Novel biopsy techniques are required in flex-rigid pleuroscopy to improve the sample size and quality. The SB knife Jr, a scissor forceps that uses a monopolar high-frequency current, was developed to allow convenient and accurate resection of larger lesions during endoscopic submucosal dissection (ESD). Herein, we report two patients with fibrothorax who underwent a pleural biopsy using an SB knife Jr to investigate the potential use of this tool in flex-rigid pleuroscopy when pleural lesions are difficult to biopsy via SFF. The biopsies were successful, with sufficient size and quality for definitive diagnosis. We also successfully performed adhesiolysis with the SB knife Jr in one case, and adequate biopsies were conducted. No complications were observed. Electrosurgical biopsy with the SB knife Jr during flex-rigid pleuroscopy allowed us to obtain adequate samples for the diagnosis of malignant versus benign fibrothorax, which is usually not possible with SFF. The SB knife Jr also demonstrated potential utility for adhesiolysis of pleuropulmonary adhesions.
Monitoring the impact of Bt maize on butterflies in the field: estimation of required sample sizes.
Lang, Andreas
2004-01-01
The monitoring of genetically modified organisms (GMOs) after deliberate release is important in order to assess and evaluate possible environmental effects. Concerns have been raised that the transgenic crop, Bt maize, may affect butterflies occurring in field margins. Therefore, monitoring of butterflies accompanying the commercial cultivation of Bt maize was suggested. In this study, baseline data on the butterfly species and their abundance in maize field margins are presented together with implications for butterfly monitoring. The study was conducted in Bavaria, South Germany, between 2000 and 2002. A total of 33 butterfly species was recorded in field margins. A small number of species dominated the community, and the butterflies observed were mostly common species. Observation duration was the most important factor influencing the monitoring results. Field margin size affected the butterfly abundance, and habitat diversity had a tendency to influence species richness. Sample size and statistical power analyses indicated that a sample size in the range of 75 to 150 field margins for treatment (transgenic maize) and control (conventional maize) would detect (with 80% power) effects larger than 15% in species richness and in butterfly abundance pooled across species. However, a much higher number of field margins must be sampled in order to achieve a higher statistical power, to detect smaller effects, and to monitor single butterfly species.
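A rough illustration of the kind of power calculation behind the 75-150 figure is sketched below: sample size per group (Bt versus conventional margins) to detect a 15% difference in mean abundance with 80% power. The coefficient of variation used here is an assumption; the study derived its variability estimates from its own baseline counts.

```python
# Hedged sketch: two-sample power calculation with an assumed between-margin CV.
from statsmodels.stats.power import TTestIndPower

cv = 0.40                         # assumed between-margin coefficient of variation
relative_effect = 0.15            # smallest difference to detect (15%)
effect_size = relative_effect / cv   # Cohen's d under the CV assumption

n_per_group = TTestIndPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.80)
print(f"~{n_per_group:.0f} field margins per group")   # falls within the quoted 75-150 range
```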
Speciation of Mercury in Selected Areas of the Petroleum Value Chain.
Avellan, Astrid; Stegemeier, John P; Gai, Ke; Dale, James; Hsu-Kim, Heileen; Levard, Clément; O'Rear, Dennis; Hoelen, Thomas P; Lowry, Gregory V
2018-02-06
Petroleum, natural gas, and natural gas condensate can contain low levels of mercury (Hg). The speciation of Hg can affect its behavior during processing, transport, and storage, so efficient and safe management of Hg requires an understanding of its chemical form in oil, gas, and byproducts. Here, X-ray absorption spectroscopy was used to determine the Hg speciation in samples of solid residues collected throughout the petroleum value chain, including stabilized crude oil residues, sediments from separation tanks and condensate glycol dehydrators, distillation column pipe scale, and biosludge from wastewater treatment. In all samples except those from glycol dehydrators, metacinnabar (β-HgS) was the primary form of Hg. Electron microscopy on particles from a crude sediment showed nanosized (<100 nm) particles forming larger aggregates, and confirmed the colocalization of Hg and sulfur. In sediments from glycol dehydrators, organic Hg(SR)2 accounted for ∼60% of the Hg, with ∼20% present as β-HgS and/or Hg(SR)4 species. β-HgS was the predominant Hg species in refinery biosludge and pipe scale samples. However, the balance of Hg species present in these samples depended on the nature of the crude oil being processed, i.e., sweet (low-sulfur) versus sour (higher-sulfur) crudes. This information on Hg speciation in the petroleum value chain will inform the development of better engineering controls and management practices for Hg.
A comparison of ARMS-Plus and droplet digital PCR for detecting EGFR activating mutations in plasma
Zhang, Xinxin; Chang, Ning; Yang, Guohua; Zhang, Yong; Ye, Mingxiang; Cao, Jing; Xiong, Jie; Han, Zhiping; Wu, Shuo; Shang, Lei; Zhang, Jian
2017-01-01
In this study, we introduce a novel amplification refractory mutation system (ARMS)-based assay, namely ARMS-Plus, for the detection of epidermal growth factor receptor (EGFR) mutations in plasma samples. We evaluated the performance of ARMS-Plus in comparison with droplet digital PCR (ddPCR) and assessed the significance of plasma EGFR mutations in predicting efficacy of EGFR-tyrosine kinase inhibitor (TKI) regimens. A total of 122 advanced non-small cell lung cancer (NSCLC) patients were enrolled in this study. The tumor tissue samples from these patients were evaluated by the conventional ARMS PCR method to confirm their EGFR mutation status. For the 116 plasma samples analyzed by ARMS-Plus, the sensitivity, specificity, and concordance rate were 77.27% (34/44), 97.22% (70/72), and 89.66% (104/116; κ=0.77, P<0.0001), respectively. Among the 71 plasma samples analyzed by both ARMS-Plus and ddPCR, ARMS-Plus showed a higher sensitivity than ddPCR (83.33% versus 70.83%). The presence of EGFR activating mutations in plasma was not associated with the response to EGFR-TKI, although further validation with a larger cohort is required to confirm the correlation. Collectively, the performances of ARMS-Plus and ddPCR are comparable. ARMS-Plus could be a potential alternative to tissue genotyping for the detection of plasma EGFR mutations in NSCLC patients. PMID:29340107
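The reported agreement statistics follow directly from the counts quoted in the abstract; the sketch below reconstructs them, with the 2x2 layout inferred from those fractions.

```python
# Plasma (ARMS-Plus) vs tissue (ARMS) agreement, reconstructed from 34/44, 70/72, 104/116.
tp, fn = 34, 10   # tissue-positive cases detected / missed in plasma
tn, fp = 70, 2    # tissue-negative cases called negative / positive in plasma
n = tp + fn + tn + fp

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
concordance = (tp + tn) / n

# Cohen's kappa: observed agreement vs agreement expected by chance
p_o = concordance
p_e = ((tp + fn) * (tp + fp) + (tn + fp) * (tn + fn)) / n**2
kappa = (p_o - p_e) / (1 - p_e)

print(f"sensitivity {sensitivity:.2%}, specificity {specificity:.2%}, "
      f"concordance {concordance:.2%}, kappa {kappa:.2f}")
# -> 77.27%, 97.22%, 89.66%, 0.77 (matching the reported values)
```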
Baxter, Amanda J.; Hughes, Maria Celia; Kvaskoff, Marina; Siskind, Victor; Shekar, Sri; Aitken, Joanne F.; Green, Adele C.; Duffy, David L.; Hayward, Nicholas K.; Martin, Nicholas G.; Whiteman, David C.
2013-01-01
Cutaneous malignant melanoma (CMM) is a major health issue in Queensland, Australia, which has the world's highest incidence. Recent molecular and epidemiologic studies suggest that CMM arises through multiple etiological pathways involving gene-environment interactions. Understanding the potential mechanisms leading to CMM requires larger studies than those previously conducted. This article describes the design and baseline characteristics of Q-MEGA, the Queensland study of Melanoma: Environmental and Genetic Associations, which followed up four population-based samples of CMM patients in Queensland, including children, adolescents, men aged over 50, and a large sample of adult cases and their families, including twins. Q-MEGA aims to investigate the roles of genetic and environmental factors, and their interaction, in the etiology of melanoma. A total of 3,471 participants took part in the follow-up study and were administered a computer-assisted telephone interview in 2002–2005. Updated data on environmental and phenotypic risk factors, and 2,777 blood samples, were collected from interviewed participants as well as from a subset of relatives. This study provides a large and well-described population-based sample of CMM cases with follow-up data. Characteristics of the cases and repeatability of sun exposure and phenotype measures between the baseline and the follow-up surveys, from six to 17 years later, are also described. PMID:18361720
DOE Office of Scientific and Technical Information (OSTI.GOV)
Machida, Emi; Horita, Masahiro
2012-12-17
We propose underwater laser annealing (WLA), a low-temperature laser annealing method for polycrystalline silicon (poly-Si) films. We crystallized poly-Si films by laser irradiation in flowing deionized water, using a KrF excimer laser for annealing. We demonstrated that the maximum grain size of the WLA samples reached 1.5 μm, and that their average grain size was 2.8 times larger than that of samples produced by conventional laser annealing in air (LA). Moreover, WLA forms poly-Si films that show lower conductivity and longer carrier lifetime, attributed to fewer electrical defects, as compared to LA poly-Si films.
Accelerated radial Fourier-velocity encoding using compressed sensing.
Hilbert, Fabian; Wech, Tobias; Hahn, Dietbert; Köstler, Herbert
2014-09-01
Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in the acquisition time required for a conventional Phase Contrast image. We imaged the femoral artery of healthy volunteers with ECG-triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying a 12.6-fold retrospective undersampling, a data set was generated equivalent to a 3:10 min acquisition time, which is similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as velocity maps from Phase Contrast measurements. Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity distribution in vessels in the order of the voxel size. Thus, compared to normal Phase Contrast measurements delivering only mean velocities, no additional scan time is necessary to retrieve meaningful velocity spectra in small vessels. Copyright © 2013. Published by Elsevier GmbH.
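The quoted acceleration is straightforward arithmetic: dividing the fully sampled 40 min acquisition by the 12.6-fold undersampling factor gives the equivalent scan time reported.

```python
# Quick check of the reported equivalent acquisition time.
full_scan_min = 40.0
undersampling_factor = 12.6

equivalent_min = full_scan_min / undersampling_factor
minutes = int(equivalent_min)
seconds = round((equivalent_min - minutes) * 60)
print(f"{equivalent_min:.2f} min ~ {minutes}:{seconds:02d} min")   # ~3:10 min, as reported
```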
Kuesap, Jiraporn; Na-Bangchang, Kesara
2018-04-01
Malaria is one of the most important public health problems in tropical areas on the globe. Several factors are associated with susceptibility to malaria and disease severity, including innate immunity factors such as blood group, hemoglobinopathy, and heme oxygenase-1 (HO-1) polymorphisms. This study was carried out to investigate the association among ABO blood group, thalassemia types and HO-1 polymorphisms in malaria. Malarial blood samples were collected from patients along the Thai-Myanmar border. Determination of ABO blood group, thalassemia variants, and HO-1 polymorphisms was performed using an agglutination test, low-pressure liquid chromatography and polymerase chain reaction, respectively. Plasmodium vivax was the major infecting malaria species in the study samples. Distribution of ABO blood type in the malaria-infected samples was similar to that in healthy subjects, with blood type O being the most prevalent. The association between blood group A and a decreased risk of severe malaria was significant. Six thalassemia types (30%) were detected, i.e., hemoglobin E (HbE), β-thalassemia, α-thalassemia 1, α-thalassemia 2, HbE with α-thalassemia 2, and β-thalassemia with α-thalassemia 2. Malaria-infected patients without thalassemia showed a significantly higher risk of severe malaria. The prevalences of the HO-1 polymorphisms S/S, S/L and L/L were 25%, 62%, and 13%, respectively. Further study with a larger sample size is required to confirm the impact of these three host genetic factors in malaria patients.
Automated high-throughput protein purification using an ÄKTApurifier and a CETAC autosampler.
Yoo, Daniel; Provchy, Justin; Park, Cynthia; Schulz, Craig; Walker, Kenneth
2014-05-30
As the pace of drug discovery accelerates there is an increased focus on screening larger numbers of protein therapeutic candidates to identify those that are functionally superior and to assess manufacturability earlier in the process. Although there have been advances toward high throughput (HT) cloning and expression, protein purification is still an area where improvements can be made to conventional techniques. Current methodologies for purification often involve a tradeoff between HT automation or capacity and quality. We present an ÄKTA combined with an autosampler, the ÄKTA-AS, which has the capability of purifying up to 240 samples in two chromatographic dimensions without the need for user intervention. The ÄKTA-AS has been shown to be reliable with sample volumes between 0.5 mL and 100 mL, and the innovative use of a uniquely configured loading valve ensures reliability by efficiently removing air from the system as well as preventing sample cross-contamination. Incorporation of a sample pump flush minimizes sample loss and enables recoveries ranging from the low tens of micrograms to milligram quantities of protein. In addition, when used in an affinity capture-buffer exchange format, the final samples are formulated in a buffer compatible with most assays without requiring additional downstream processing. The system is designed to capture samples in 96-well microplate format, allowing for seamless integration of downstream HT analytic processes such as microfluidic or HPLC analysis. Most notably, minimal operator intervention is required to operate this system, thereby increasing efficiency and sample consistency and reducing the risk of human error. Copyright © 2014 Elsevier B.V. All rights reserved.
Use of an Additional 19-G EBUS-TBNA Needle Increases the Diagnostic Yield of EBUS-TBNA.
Garrison, Garth; Leclair, Timothy; Balla, Agnes; Wagner, Sarah; Butnor, Kelly; Anderson, Scott R; Kinsey, C Matthew
2018-06-12
Although endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) has an excellent diagnostic yield, there remain cases where the diagnosis is not obtained. We hypothesized that additional sampling with a 19-G EBUS-TBNA needle may increase diagnostic yield in a subset of cases where additional tissue sampling was required. Indications for use of the 19-G needle following 22-G sampling with rapid on-site cytologic examination were: (1) diagnostic uncertainty of the on-site cytopathologist (eg, nondiagnostic, probable lymphoma, etc.), (2) non-small cell lung cancer with probable need for molecular genetic and/or PD-L1 testing, or (3) need for a larger tissue sample for consideration of inclusion in a research protocol. A 19-G EBUS-TBNA needle was utilized following standard sampling with a 22-G needle in 48 patients (50 sites) during the same procedure. Although the diagnostic yield between the needles was equivalent, the concordance rate was only 83%. The 19-G determined a diagnosis in 4 additional patients (8%) and provided additional histopathologic information in 6 other cases (12%). Conversely, in 3 cases (6%) diagnostic information was provided only by the 22-G needle. Compared with 22-G EBUS-TBNA alone, sampling with both the 22- and 19-G EBUS needles resulted in an increase in diagnostic yield from 92% to 99% (P=0.045) and a number needed to sample of 13 patients to provide one additional diagnosis. There were no significant complications. In select cases where additional tissue may be needed, sampling with a 19-G EBUS needle following standard aspiration with a 22-G needle results in an increase in diagnostic yield.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-27
... SNDA was conducted. Larger proportions of elementary schools met the standards for total fat and... Certification of Compliance With Meal Requirements for the National School Lunch Program Under the Healthy.... SUMMARY: This interim rule amends National School Lunch Program regulations to conform to requirements...
Ohashi, Nobuko; Imai, Hidekazu; Seino, Yutaka; Baba, Hiroshi
2017-12-06
Determination of the appropriate tracheal tube size using formulas based on age or height often is inaccurate in pediatric patients with congenital heart disease (CHD), particularly in those with high pulmonary arterial pressure (PAP). Here, the authors compared tracheal diameters between pediatric patients with CHD with high PAP and low PAP. Retrospective clinical study. Hospital. Pediatric patients, from birth to 6 months of age, requiring general anesthesia and tracheal intubation who underwent computed tomography were included. Patients with mean pulmonary artery pressure >25 mmHg were allocated to the high PAP group, and the remaining patients were allocated to the low PAP group. The primary outcome was the tracheal diameter at the cricoid cartilage level, and the secondary goal was to observe whether the size of the tracheal tube was appropriate compared with that obtained using predictable formulas based on age or height. The mean tracheal diameter was significantly larger in the high PAP group than in the low PAP group (p < 0.01). Pediatric patients with high PAP required a larger tracheal tube size than predicted by formulas based on age or height (p = 0.04 for age and height). Pediatric patients with high PAP had larger tracheal diameters than those with low PAP and required larger tracheal tubes compared with the size predicted using formulas based on age or height. Copyright © 2017 Elsevier Inc. All rights reserved.
Understanding cross sample talk as a result of triboelectric charging on future mars missions
NASA Astrophysics Data System (ADS)
Beegle, L. W.; Anderson, R. C.; Fleming, G.
2009-12-01
Proper scientific analysis requires that the material collected and analyzed by in-situ instruments be as close as possible (chemically and mineralogically) to the initial, unaltered surface material prior to its collection and delivery. However, this is not always possible for automated robotic in situ analysis. Therefore, it is vital to understand how the sample has been changed or altered prior to analysis so that the analysis can be put in the proper context. We have examined the transport of fines when transferred under ambient martian conditions in hardware analogous to that being developed for the Mars Science Laboratory (MSL) sample acquisition flight hardware. We will discuss the amount of cross-sample contamination when different mineralogies are transferred under Martian environmental conditions. Similar issues have been identified as problems within the terrestrial mining, textile, and pharmaceutical research communities that may alter or change the chemical and mineralogical compositions of samples before they are delivered to the MSL Chemistry and Mineralogy (CheMin) and the Sample Analysis at Mars (SAM) analytical instruments. This cross-sample contamination will affect the overall quality of the science results, and each of these processes needs to be examined and understood prior to MSL landing on the surface of Mars. Two forms of triboelectric charging have been observed to occur on Earth: (1) when dissimilar materials come in contact, one material charges positive and the other negative, depending on their relative positions on the triboelectric series and the work function of the material; and (2) when two similar materials come in contact, the larger particles can transfer one of their high-energy electrons to a smaller particle. During the collisions, the transferred electron tends to lose energy, and the charge tends not to move from the smaller particle back to the larger particle in further collisions. This transfer effect can occur multiple times, resulting in multiple charge states on the particles. While individual particles can have different charge signs, the bulk material can become charged through contact between the different mineral constituents in the sample and through contact with the walls. This results in a very complex system that has yet to be fully understood and characterized. We have begun to develop and characterize a data set which will enable scientists to better relate arm- and mast-mounted measurements made on the surface by the Alpha Particle X-ray Spectrometer (APXS), the Mars Hand Lens Imager (MAHLI), the Chemistry and Camera (ChemCam) and the Mast Camera (MastCam) instruments to the measurements made by the two onboard analytical instruments, CheMin and SAM, after a sample is acquired, processed, and delivered.
Zimmerman, Marc J.; Savoie, Jennifer G.
2013-01-01
Wastewater discharges to the Assabet River contribute substantial amounts of phosphorus, which support accumulations of nuisance aquatic plants that are most evident in the river’s impounded reaches during the growing season. To restore the Assabet River’s water quality and aesthetics, the U.S. Environmental Protection Agency required the major wastewater-treatment plants in the drainage basin to reduce the amount of phosphorus discharged to the river by 2012. From October 2008 to December 2010, the U.S. Geological Survey, in cooperation with the Massachusetts Department of Environmental Protection and in support of the requirements of the Total Maximum Daily Load for Phosphorus, collected weekly flow-proportional, composite samples for analysis of concentrations of total phosphorus and orthophosphorus upstream and downstream from each of the Assabet River’s two largest impoundments: Hudson and Ben Smith. The purpose of this monitoring effort was to evaluate conditions in the river before enhanced treatment-plant technologies had effected reductions in phosphorus loads, thereby defining baseline conditions for comparison with conditions following the mandated load reductions. The locations of sampling sites with respect to the impoundments enabled examination of the impoundments’ effects on phosphorus sequestration and on the transformation of phosphorus between particulate and dissolved forms. The study evaluated the differences between loads upstream and downstream from the impoundments throughout the sampling period and compared differences during two seasonal periods of relevance to aquatic plants: April 1 through October 31, the growing season, and November 1 through March 31, the nongrowing season, when existing permit limits allowed average monthly wastewater-treatment-plant-effluent concentrations of 0.75 milligram per liter (growing season) or 1.0 milligram per liter (nongrowing season) for total phosphorus. At the four sampling sites during the growing season, median weekly total phosphorus loads ranged from 110 to 190 kilograms (kg) and median weekly orthophosphorus loads ranged from 17 to 41 kg. During the nongrowing season, median weekly total phosphorus loads ranged from 240 to 280 kg and median weekly orthophosphorus loads ranged from 56 to 66 kg. During periods of low and moderate streamflow, estimated loads of total phosphorus upstream from the Hudson impoundment generally exceeded those downstream during the same sampling periods throughout the study; orthophosphorus loads downstream from the impoundment were typically larger than those upstream. When storm runoff substantially increased the streamflow, loads of total phosphorus and orthophosphorus both tended to be larger downstream than upstream. At the Ben Smith impoundment, both total phosphorus and orthophosphorus loads were generally larger downstream than upstream during low and moderate streamflow, but the differences were not as pronounced as they were at the Hudson impoundment. High flows were also associated with substantially larger total phosphorus and orthophosphorus loads downstream than those entering the impoundment from upstream. In comparing periods of growing- and nongrowing-season loads, the same patterns of loads entering and leaving were observed at both impoundments. That is, at the Hudson impoundment, total phosphorus loads entering the impoundment were greater than those leaving it, and orthophosphorus loads leaving the impoundment were greater than those entering it. 
At the Ben Smith impoundment, both total phosphorus and orthophosphorus loads leaving the impoundment were greater than those entering it. However, the loads were greater during the nongrowing seasons than during the growing seasons, and the net differences between upstream and downstream loads were about the same. The results indicate that some of the particulate fraction of the total phosphorus loads is sequestered in the Hudson impoundment, where particulate phosphorus probably undergoes some physical and biogeochemical transformations to the dissolved form orthophosphorus. The orthophosphorus may be taken up by aquatic plants or transported out of the impoundments. The results for the Ben Smith impoundment are less clear and suggest net export of total phosphorus and orthophosphorus. Differences between results from the two impoundments may be attributable in part to differences in their sizes, morphology, unmonitored tributaries, riparian land use, and processes within the impoundments that have not been quantified for this study.
The effect of short-range spatial variability on soil sampling uncertainty.
Van der Perk, Marcel; de Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Fajgelj, Ales; Sansone, Umberto; Jeran, Zvonka; Jaćimović, Radojko
2008-11-01
This paper aims to quantify the soil sampling uncertainty arising from the short-range spatial variability of elemental concentrations in the topsoils of agricultural, semi-natural, and contaminated environments. For the agricultural site, the relative standard sampling uncertainty ranges between 1% and 5.5%. For the semi-natural area, the sampling uncertainties are 2-4 times larger than in the agricultural area. The contaminated site exhibited significant short-range spatial variability in elemental composition, which resulted in sampling uncertainties of 20-30%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2014-11-01
While previous versions of the International Energy Conservation Code (IECC) have included provisions to improve the air tightness of dwellings, for the first time, the 2012 IECC mandates compliance verification through blower door testing. Simply completing the Air Barrier and Insulation Installation checklist through visual inspection is no longer sufficient; the 2012 IECC mandates a significantly stricter air sealing requirement. In Climate Zones 3 through 8, air leakage may not exceed 3 ACH50, which is a significant reduction from the 2009 IECC requirement of 7 ACH50. This requirement is for all residential buildings, which includes low-rise multifamily dwellings. While this air leakage rate requirement is an important component to achieving an efficient building thermal envelope, currently, the code language doesn't explicitly address differences between single family and multifamily applications. In addition, the 2012 IECC does not provide an option to sample dwellings for larger multifamily buildings, so compliance would have to be verified on every unit. With compliance with the 2012 IECC air leakage requirements on the horizon, several of Building America team Consortium for Advanced Residential Building's (CARB) multifamily builder partners are evaluating how best to comply with this requirement. Builders are not sure whether it is more practical or beneficial to simply pay for guarded testing or to revise their air sealing strategies to improve compartmentalization to comply with code requirements based on unguarded blower door testing. This report summarizes CARB's research that was conducted to assess the feasibility of meeting the 2012 IECC air leakage requirements in three multifamily buildings.
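The ACH50 metric referenced above is straightforward to compute from a blower-door measurement. The short Python sketch below is illustrative only: the dwelling dimensions and flow values are invented, not taken from CARB's test buildings.

```python
def ach50(cfm50: float, volume_ft3: float) -> float:
    """Air changes per hour at 50 Pa: blower-door flow (ft^3/min)
    converted to an hourly rate and normalized by dwelling volume (ft^3)."""
    return cfm50 * 60.0 / volume_ft3

# Illustrative unit: an 800 ft^2 apartment with 8 ft ceilings (6,400 ft^3).
# 310 CFM50 gives ~2.9 ACH50, under the 2012 IECC limit of 3 ACH50;
# 740 CFM50 gives ~6.9 ACH50, which met the 2009 limit of 7 but not the 2012 limit.
if __name__ == "__main__":
    volume = 800 * 8
    for cfm in (310, 740):
        print(f"{cfm} CFM50 -> {ach50(cfm, volume):.1f} ACH50")
```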
Effect of polarity and elongational flow on the morphology and properties of a new nanobiocomposite
NASA Astrophysics Data System (ADS)
Paolo, La Mantia Francesco; Manuela, Ceraulo; Chiara, Mistretta Maria; Fiorenza, Sutera; Laura, Ascione
2015-12-01
Nanobiocomposites are a new class of biodegradable polymer materials that show very interesting properties while retaining the biodegradability of the matrix. In this work, the effect of the polarity of the organomodified montmorillonite and of the elongational flow on the morphology and on the rheological and mechanical properties of a new nanobiocomposite, having as its matrix a biodegradable copolyester-based blend, has been investigated. The mechanical properties increase in the presence of the nanofiller, and this increase becomes progressively larger with increasing orientation. Moreover, a brittle-to-ductile transition is observed in the anisotropic sample, and this effect is again larger for the nanocomposite. The increase of the interlayer distance is larger for the more polar montmorillonite, even though the two nanocomposites show about the same final interlayer distance.
CFDP Performance over Weather-dependent Ka-band Channel
NASA Technical Reports Server (NTRS)
Sung, I. U.; Gao, Jay L.
2006-01-01
This study presents an analysis of the delay performance of the CCSDS File Delivery Protocol (CFDP) over a weather-dependent Ka-band channel. The Ka-band channel condition is determined by the strength of the atmospheric noise temperature, which is weather dependent. Noise temperature data collected from the Deep Space Network (DSN) Madrid site are used to characterize the correlations between good and bad channel states in a two-state Markov model. Specifically, the probability distribution of file delivery latency using the CFDP deferred Negative Acknowledgement (NAK) mode is derived and quantified. Deep space communication scenarios with different file sizes and bit error rates (BERs) are studied and compared. Furthermore, we also examine the sensitivity of our analysis with respect to different data sampling methods. Our analysis shows that while the weather-dependent channel only results in fairly small increases in the average number of CFDP retransmissions required, the maximum number of transmissions required to complete delivery at the 99th percentile, on the other hand, is significantly larger for the weather-dependent channel due to the significant correlation of poor weather states.
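As a rough illustration of the kind of model described here, the Python sketch below simulates CFDP deferred-NAK delivery over a two-state (good/bad weather) Markov channel. The transition probabilities, loss rates, and file size are invented placeholders, not the DSN Madrid statistics or the protocol parameters used in the study.

```python
import random

# Assumed per-PDU weather-state transition probabilities and loss rates.
P_GOOD_TO_BAD = 0.01
P_BAD_TO_GOOD = 0.05
LOSS_GOOD, LOSS_BAD = 0.001, 0.2

def delivery_rounds(n_pdus: int, rng: random.Random) -> int:
    """Number of deferred-NAK transmission rounds needed to deliver a file
    split into n_pdus PDUs over a two-state Markov (good/bad) channel."""
    state_bad = False
    missing = set(range(n_pdus))
    rounds = 0
    while missing:
        rounds += 1
        for pdu in sorted(missing):
            # advance the weather state once per transmitted PDU
            if state_bad:
                if rng.random() < P_BAD_TO_GOOD:
                    state_bad = False
            elif rng.random() < P_GOOD_TO_BAD:
                state_bad = True
            loss = LOSS_BAD if state_bad else LOSS_GOOD
            if rng.random() >= loss:
                missing.discard(pdu)
        # one deferred NAK round: still-missing PDUs are re-sent next pass
    return rounds

if __name__ == "__main__":
    rng = random.Random(1)
    samples = sorted(delivery_rounds(2000, rng) for _ in range(500))
    print("mean rounds:", sum(samples) / len(samples))
    print("99th percentile:", samples[int(0.99 * len(samples))])
```

Correlated bad-weather states make losses cluster within a round, which is why the tail (99th percentile) of the round count grows much faster than the mean, mirroring the qualitative finding above.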
Clinical outcome of double crown-retained implant overdentures with zirconia primary crowns
Buergers, Ralf; Ziebolz, Dirk; Roediger, Matthias
2015-01-01
PURPOSE This retrospective study aims at the evaluation of implant-supported overdentures (IODs) supported by ceramo-galvanic double crowns (CGDCs: zirconia primary crowns + galvano-formed secondary crown). MATERIALS AND METHODS In a private practice, 14 patients were restored with 18 IODs (mandible: 11, maxilla: 7) retained by CGDCs on 4 - 8 implants and annually evaluated for technical and/or biological failures/complications. RESULTS One of the 86 inserted implants failed during the healing period (cumulative survival rate (CSR) implants: 98.8%). During the prosthetic functional period (mean: 5.9 ± 2.2 years), 1 implant demonstrated an abutment fracture (CSR-abutments: 98.2%), and one case of peri-implantitis was detected. All IODs remained in function (CSR-denture: 100%). A total of 15 technical complications required interventions to maintain function (technical complication rate: 0.178 treatments/patients/year). CONCLUSION Considering the small sample size, the use of CGDCs for the attachment of IODs is possible without an increased risk of technical complications. However, for a final evaluation, results from a larger cohort are required. PMID:26330981
Luo, Jin; Zhu, Yongqiang; Guo, Qinghai; Tan, Long; Zhuang, Yaqin; Liu, Mingliang; Zhang, Canhai; Xiang, Wei; Rohn, Joachim
2017-01-01
In this paper, the hydraulic and heat-transfer properties of two sets of artificially fractured granite samples are investigated. First, the morphological information is determined using 3D modelling technology. The area ratio is used to describe the roughness of the fracture surface. Second, the hydraulic properties of the fractured granite are tested by exposing the samples to different confining pressures and temperatures. The results show that the hydraulic properties of the fractures are affected mainly by the area ratio, with a larger area ratio producing a larger fracture aperture and higher hydraulic conductivity. Both the hydraulic aperture and the hydraulic conductivity decrease with an increase in the confining pressure. Furthermore, the fracture aperture decreases with increasing rock temperature, but the hydraulic conductivity increases owing to a reduction of the viscosity of the fluid flowing through the fracture. Finally, the heat-transfer efficiency of the samples under coupled hydro-thermal-mechanical conditions is analysed and discussed. PMID:28054594
Merging and Clustering of the Swift BAT AGN Sample
NASA Astrophysics Data System (ADS)
Koss, Michael; Mushotzky, Richard; Veilleux, Sylvain; Winter, Lisa
2010-06-01
We discuss the merger rate, close galaxy environment, and clustering on scales up to an Mpc of the Swift BAT hard X-ray sample of nearby (z<0.05), moderate-luminosity active galactic nuclei (AGNs). We find a higher incidence of galaxies with signs of disruption compared to a matched control sample (18% versus 1%) and of close pairs within 30 kpc (24% versus 1%). We also find a larger fraction with companions compared to normal galaxies and optical emission line selected AGNs at scales up to 250 kpc. We hypothesize that these merging AGNs may not be identified using optical emission line diagnostics because of optical extinction and dilution by star formation. In support of this hypothesis, in merging systems we find a higher hard X-ray to [O III] flux ratio, as well as emission line diagnostics characteristic of composite or star-forming galaxies, and a larger IRAS 60 μm to stellar mass ratio.
The role of language in shaping international migration
Adserà, Alícia; Pytliková, Mariola
2016-01-01
This paper examines the importance of language in international migration from multiple angles by studying the role of linguistic proximity, widely spoken languages, linguistic enclaves, and language-based immigration policy requirements. To this aim we collect a unique dataset on immigration flows and stocks in 30 OECD destinations from all world countries over the period 1980–2010, and construct a set of linguistic proximity measures. Migration rates increase with linguistic proximity and with English at destination. Softer linguistic requirements for naturalization and larger linguistic communities at destination encourage more migrants to move. Linguistic proximity matters less when local linguistic networks are larger. PMID:27330195
Cryobiopsy: Should This Be Used in Place of Endobronchial Forceps Biopsies?
Rubio, Edmundo R.; le, Susanti R.; Whatley, Ralph E.; Boyd, Michael B.
2013-01-01
Forceps biopsies of airway lesions have variable yields. The yield increases when combining techniques in order to collect more material. With the use of cryotherapy probes (cryobiopsy), larger specimens can be obtained, resulting in an increase in the diagnostic yield. However, the utility and safety of cryobiopsy with all types of lesions, including flat mucosal lesions, are not established. Aims. Demonstrate the utility/safety of cryobiopsy versus forceps biopsy to sample exophytic and flat airway lesions. Settings and Design. Teaching hospital-based retrospective analysis. Methods. Retrospective analysis of patients undergoing cryobiopsies (singly or combined with forceps biopsies) from August 2008 through August 2010. Statistical Analysis. Wilcoxon signed-rank test. Results. The comparative analysis of 22 patients with cryobiopsy and forceps biopsy of the same lesion showed that the mean volumes of material obtained with cryobiopsy were significantly larger (0.696 cm3 versus 0.0373 cm3, P = 0.0014). Of 31 cryobiopsies performed, one had minor bleeding. Cryobiopsy allowed sampling of exophytic and flat lesions that were located centrally or distally. Cryobiopsies were shown to be safe, free of artifact, and provided a diagnostic yield of 96.77%. Conclusions. Cryobiopsy allows safe sampling of exophytic and flat airway lesions, with larger specimens, excellent tissue preservation, and high diagnostic accuracy. PMID:24066296
Dynamics of acoustically levitated disk samples
NASA Astrophysics Data System (ADS)
Xie, W. J.; Wei, B.
2004-10-01
The acoustic levitation force on disk samples and the dynamics of large water drops in a planar standing wave are studied by solving the acoustic scattering problem through incorporating the boundary element method. The dependence of levitation force amplitude on the equivalent radius R of disks deviates seriously from the R3 law predicted by King’s theory, and a larger force can be obtained for thin disks. When the disk aspect ratio γ is larger than a critical value γ*(≈1.9) and the disk radius a is smaller than the critical value a*(γ) , the levitation force per unit volume of the sample will increase with the enlargement of the disk. The acoustic levitation force on thin-disk samples (γ⩽γ*) can be formulated by the shape factor f(γ,a) when a⩽a*(γ) . It is found experimentally that a necessary condition of the acoustic field for stable levitation of a large water drop is to adjust the reflector-emitter interval H slightly above the resonant interval Hn . The simulation shows that the drop is flattened and the central parts of its top and bottom surface become concave with the increase of sound pressure level, which agrees with the experimental observation. The main frequencies of the shape oscillation under different sound pressures are slightly larger than the Rayleigh frequency because of the large shape deformation. The simulated translational frequencies of the vertical vibration under normal gravity condition agree with the theoretical analysis.
Proposed BioRepository platform solution for the ALS research community.
Sherman, Alex; Bowser, Robert; Grasso, Daniela; Power, Breen; Milligan, Carol; Jaffa, Matthew; Cudkowicz, Merit
2011-01-01
ALS is a rare disorder whose cause and pathogenesis are largely unknown (1). There is a recognized need to develop biomarkers for ALS to better understand the disease, expedite diagnosis, and facilitate therapy development. Collaboration is essential to obtain a sufficient number of samples to allow statistically meaningful studies. The availability of high-quality biological specimens for research purposes requires the development of standardized methods for collection, long-term storage, retrieval and distribution of specimens. The value of biological samples to scientists and clinicians correlates with the completeness and relevance of the phenotypical and clinical information associated with the samples (2, 3). While developing a secure Web-based system to manage an inventory of multi-site BioRepositories, algorithms were implemented to facilitate ad hoc parametric searches across heterogeneous data sources that contain data from clinical trials and research studies. A flexible schema for a barcode label was introduced to allow association of samples with these data. The ALSBank™ BioRepository platform solution for managing biological samples and associated data is currently deployed by the Northeast ALS Consortium (NEALS). The NEALS Consortium and the Massachusetts General Hospital (MGH) Neurology Clinical Trials Unit (NCTU) support a network of multiple BioBanks, thus allowing researchers to take advantage of a larger specimen collection than they might have at an individual institution. Standard operating procedures are utilized at all collection sites to promote common practices for biological sample integrity, quality control, and associated clinical data. Utilizing this platform, we have created one of the largest virtual collections of ALS-related specimens available to investigators studying ALS.
Steigen, Terje K; Claudio, Cheryl; Abbott, David; Schulzer, Michael; Burton, Jeff; Tymchak, Wayne; Buller, Christopher E; John Mancini, G B
2008-06-01
To assess reproducibility of core laboratory performance and its impact on sample size calculations. Little information exists about the overall reproducibility of core laboratories in contradistinction to the performance of individual technicians. Also, qualitative parameters are increasingly being adjudicated as either primary or secondary end-points. The comparative impact of using diverse indexes on sample sizes has not been previously reported. We compared initial and repeat assessments of five quantitative parameters [e.g., minimum lumen diameter (MLD), ejection fraction (EF), etc.] and six qualitative parameters [e.g., TIMI myocardial perfusion grade (TMPG) or thrombus grade (TTG), etc.], as performed by differing technicians and separated by a year or more. Sample sizes were calculated from these results. TMPG and TTG were also adjudicated by a second core laboratory. MLD and EF were the most reproducible, yielding the smallest sample size calculations, whereas percent diameter stenosis and centerline wall motion require substantially larger trials. Of the qualitative parameters, all except TIMI flow grade gave reproducibility characteristics yielding sample sizes of many hundreds of patients. Reproducibility of TMPG and TTG was only moderately good both within and between core laboratories, underscoring an intrinsic difficulty in assessing these parameters. Core laboratories can be shown to provide reproducibility performance that is comparable to the performance commonly ascribed to individual technicians. The differences in reproducibility yield huge differences in sample size when comparing quantitative and qualitative parameters. TMPG and TTG are intrinsically difficult to assess, and conclusions based on these parameters should arise only from very large trials.
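To make the link between measurement reproducibility and trial size concrete, the sketch below uses the standard two-sample normal-approximation formula; the measurement standard deviations and detectable difference are invented values, not the core laboratory's actual variance components.

```python
import math
from statistics import NormalDist

def n_per_group(sd: float, delta: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Per-group sample size for a two-sided, two-sample comparison of means,
    using the usual normal-approximation formula."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 * (sd / delta) ** 2)

# A highly reproducible parameter (small SD relative to the effect of interest)
# needs far fewer patients than a noisier one, for the same detectable difference.
print(n_per_group(sd=0.20, delta=0.15))   # precise, MLD-like parameter -> ~28 per group
print(n_per_group(sd=0.60, delta=0.15))   # noisier index -> ~252 per group
```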
High-Field Liquid-State Dynamic Nuclear Polarization in Microliter Samples.
Yoon, Dongyoung; Dimitriadis, Alexandros I; Soundararajan, Murari; Caspers, Christian; Genoud, Jeremy; Alberti, Stefano; de Rijk, Emile; Ansermet, Jean-Philippe
2018-05-01
Nuclear hyperpolarization in the liquid state by dynamic nuclear polarization (DNP) has been of great interest because of its potential use in NMR spectroscopy of small samples of biological and chemical compounds in aqueous media. Liquid-state DNP generally requires microwave resonators in order to generate an alternating magnetic field strong enough to saturate electron spins in the solution. As a consequence, the sample size is limited to dimensions of the order of the wavelength, and this restricts the sample volume to less than 100 nL for DNP at 9 T (∼260 GHz). We show here a new approach that overcomes this sample size limitation. Large saturation of electron spins was obtained with a high-power (∼150 W) gyrotron without microwave resonators. Since high-power microwaves can cause serious dielectric heating in polar solutions, we designed a planar probe which effectively alleviates dielectric heating. A thin liquid sample of 100 μm thickness is placed on a block of high-thermal-conductivity aluminum nitride, with a gold coating that serves both as a ground plane and as a heat sink. A meander or a coil was used for NMR. We performed 1H DNP at 9.2 T (∼260 GHz) and at room temperature with 10 μL of water, a volume that is more than 100× larger than reported so far. The 1H NMR signal is enhanced by a factor of about -10 with 70 W of microwave power. We also demonstrated liquid-state 31P DNP in fluorobenzene containing triphenylphosphine and obtained an enhancement of ∼200.
Márquez, Samuel; Lawson, William; Mowbray, Kenneth; Delman, Bradley N; Laitman, Jeffrey T
2015-06-01
The interaction of nasal morphology and climatic conditions has resulted in diverse hard- and soft-tissue configurations across human population groups. While the processes of skull pneumatization are not fully understood, the invasions of the paranasal sinuses [PNS] into the cranium have contributed to assorted morphologies. Human migratory patterns and the strong association with climatic variables through time and space may explain this diversity. This study examined four multiregional populations, two of which are from Egypt but of widely divergent eras. Three Egyptian mummies [EG-M] from the Middle Kingdom were CT scanned, providing a unique opportunity to investigate the status of PNS anatomy within a time frame from 1567 BCE to 600 CE and compare it to a contemporary Egyptian [EG] (n = 12) population. Dry skulls of Inuit [IT] (n = 10) and East African [EA] (n = 8) individuals provide out-group comparisons, as one group represents an isolated geographic environment far different from that of Egypt, while the other inhabits distinct environmental conditions albeit within the same continent. Results showed that EG-M and EG frontal sinus volumes were diminutive in size, with no statistically significant difference between them. Maxillary sinus size values of EG-M and EG clustered together, while IT and EA differed significantly from each other (P = 0.002). The multiregional groups exhibited population-specific morphologies in their PNS anatomy. Ecogeographic localities revealed anatomical differences between IT and EA, while the potential time span of about 3,500 years produced only a negligible difference between the Egyptian groups. The small sample sizes incorporated into this research require confirmation of the results by analyses of larger samples from each geographic region and the integration of a larger group of Egyptian mummified remains. © 2015 Wiley Periodicals, Inc.
Thurtle, Natalie; Abouchedid, Rachelle; Archer, John R H; Ho, James; Yamamoto, Takahiro; Dargan, Paul I; Wood, David M
2017-03-01
Electronic nicotine delivery systems (ENDS, often called e-cigarettes) are nicotine delivery devices that heat nicotine into vapour that is inhaled, a process called 'vaping'. ENDS use eclipsed that of nicotine-replacement therapy (NRT) in 2014, but the role of ENDS in smoking cessation remains controversial. Safety has not been proven, and there have been reports to US poison centres regarding potential ENDS-related nicotine toxicity. A further concern is the use of ENDS to vape recreational drugs, but there are limited data to substantiate this. The aim of this study was to report on ENDS use to vape recreational drugs in patrons of a South London nightclub where a high prevalence of recreational drug use has previously been shown. A convenience sample of 101 participants was surveyed in March 2015 as part of a larger survey on drug use. Individuals were asked if they used ENDS to vape nicotine and/or other substances (and if so which substances). Ninety (89.1%) of the respondents were male, with a median age of 28 years (IQR 23-34). Eighty (79.2%) currently smoked cigarettes; 20 (19.8%) currently used ENDS for nicotine. Six (5.9%) reported using ENDS to take other substances: 2 for 'liquid cannabis' and 4 did not elaborate on the substance(s) used. Of these 6, 3 were using ENDS to vape nicotine and 3 had never used them for nicotine. Overall, 5.9% of individuals in this sample reported using ENDS to vape substances other than nicotine. Further work is required in larger populations to determine how common this is, to evaluate which agents are being vaped, and to inform appropriate public education.
Max-Moerbeck, W.; Hovatta, T.; Richards, J. L.; ...
2014-09-22
In order to determine the location of the gamma-ray emission site in blazars, we investigate the time-domain relationship between their radio and gamma-ray emission. Light-curves for the brightest detected blazars from the first 3 years of the mission of the Fermi Gamma-ray Space Telescope are cross-correlated with 4 years of 15 GHz observations from the OVRO 40-m monitoring program. The large sample and long light-curve duration enable us to carry out a statistically robust analysis of the significance of the cross-correlations, which is investigated using Monte Carlo simulations including the uneven sampling and noise properties of the light-curves. Modeling the light-curves as red noise processes with power-law power spectral densities, we find that only one of 41 sources with high quality data in both bands shows correlations with significance larger than 3σ (AO 0235+164), with only two more larger than even 2.25σ (PKS 1502+106 and B2 2308+34). Additionally, we find correlated variability in Mrk 421 when including a strong flare that occurred in July-September 2012. These results demonstrate very clearly the difficulty of measuring statistically robust multiwavelength correlations and the care needed when comparing light-curves even when many years of data are used; this should serve as a caution. In all four sources the radio variations lag the gamma-ray variations, suggesting that the gamma-ray emission originates upstream of the radio emission. Continuous simultaneous monitoring over a longer time period is required to obtain high significance levels in cross-correlations between gamma-ray and radio variability in most blazars.
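The hedged sketch below illustrates the general Monte Carlo approach described here: simulate many pairs of uncorrelated red-noise light curves with power-law power spectral densities and use the distribution of their peak cross-correlations as the significance threshold. It uses evenly sampled, noise-free toy light curves and an assumed PSD slope of 2, so it omits the uneven sampling and observational noise that the actual study models.

```python
import numpy as np

rng = np.random.default_rng(0)

def red_noise(n: int, beta: float = 2.0) -> np.ndarray:
    """Timmer & Koenig style synthetic light curve with PSD ~ f^-beta."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros(len(freqs), dtype=complex)
    amp[1:] = (rng.normal(size=len(freqs) - 1)
               + 1j * rng.normal(size=len(freqs) - 1)) * freqs[1:] ** (-beta / 2.0)
    lc = np.fft.irfft(amp, n=n)
    return (lc - lc.mean()) / lc.std()

def peak_xcorr(a: np.ndarray, b: np.ndarray) -> float:
    """Maximum absolute normalized cross-correlation over all lags."""
    c = np.correlate(a, b, mode="full") / len(a)
    return float(np.max(np.abs(c)))

# Null distribution of peak correlations for independent red-noise pairs;
# an observed radio/gamma-ray correlation peak is "significant" only if it
# exceeds, e.g., the 99.7th percentile of this distribution.
n, trials = 200, 2000
null_peaks = np.array([peak_xcorr(red_noise(n), red_noise(n)) for _ in range(trials)])
print("3-sigma-equivalent threshold:", np.quantile(null_peaks, 0.997))
```

Because red-noise light curves can show large chance correlations, the resulting thresholds are high, which is the reason so few sources clear the 3σ bar in the analysis above.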
2013-01-01
Background Relative validity (RV), a ratio of ANOVA F-statistics, is often used to compare the validity of patient-reported outcome (PRO) measures. We used the bootstrap to establish the statistical significance of the RV and to identify key factors affecting its significance. Methods Based on responses from 453 chronic kidney disease (CKD) patients to 16 CKD-specific and generic PRO measures, RVs were computed to determine how well each measure discriminated across clinically-defined groups of patients compared to the most discriminating (reference) measure. Statistical significance of RV was quantified by the 95% bootstrap confidence interval. Simulations examined the effects of sample size, denominator F-statistic, correlation between comparator and reference measures, and number of bootstrap replicates. Results The statistical significance of the RV increased as the magnitude of denominator F-statistic increased or as the correlation between comparator and reference measures increased. A denominator F-statistic of 57 conveyed sufficient power (80%) to detect an RV of 0.6 for two measures correlated at r = 0.7. Larger denominator F-statistics or higher correlations provided greater power. Larger sample size with a fixed denominator F-statistic or more bootstrap replicates (beyond 500) had minimal impact. Conclusions The bootstrap is valuable for establishing the statistical significance of RV estimates. A reasonably large denominator F-statistic (F > 57) is required for adequate power when using the RV to compare the validity of measures with small or moderate correlations (r < 0.7). Substantially greater power can be achieved when comparing measures of a very high correlation (r > 0.9). PMID:23721463
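A minimal sketch of the bootstrap procedure for the relative validity statistic described above, using simulated data rather than the CKD study's measures; the group labels, effect sizes, and number of replicates are illustrative assumptions.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)

def rv(groups_ref, groups_cmp):
    """Relative validity: ratio of one-way ANOVA F statistics
    (comparator measure over reference measure) across the same patient groups."""
    return f_oneway(*groups_cmp).statistic / f_oneway(*groups_ref).statistic

def bootstrap_rv_ci(ref, cmp_, labels, n_boot=500, alpha=0.05):
    """Percentile bootstrap CI for RV, resampling patients with replacement."""
    labels = np.asarray(labels)
    idx_all = np.arange(len(labels))
    stats = []
    for _ in range(n_boot):
        idx = rng.choice(idx_all, size=len(idx_all), replace=True)
        g_ref = [ref[idx][labels[idx] == g] for g in np.unique(labels)]
        g_cmp = [cmp_[idx][labels[idx] == g] for g in np.unique(labels)]
        stats.append(rv(g_ref, g_cmp))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Simulated example: 3 clinical groups of 150 patients each, where the
# reference measure separates the groups more strongly than the comparator.
labels = np.repeat([0, 1, 2], 150)
ref = rng.normal(loc=labels * 1.0, scale=1.0)
cmp_ = rng.normal(loc=labels * 0.7, scale=1.0)
by_group = lambda x: [x[labels == g] for g in (0, 1, 2)]
print("RV =", rv(by_group(ref), by_group(cmp_)))
print("95% bootstrap CI:", bootstrap_rv_ci(ref, cmp_, labels))
```

If the lower confidence bound stays below 1 (or above it, depending on the hypothesis), the difference in discriminative validity between the two measures is not statistically established, which is the decision rule the abstract's simulations probe.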
Utility of Inferential Norming with Smaller Sample Sizes
ERIC Educational Resources Information Center
Zhu, Jianjun; Chen, Hsin-Yi
2011-01-01
We examined the utility of inferential norming using small samples drawn from the larger "Wechsler Intelligence Scales for Children-Fourth Edition" (WISC-IV) standardization data set. The quality of the norms was estimated with multiple indexes such as polynomial curve fit, percentage of cases receiving the same score, average absolute…
Migration monitoring with automated technology
Rhonda L. Millikin
2005-01-01
Automated technology can supplement ground-based methods of migration monitoring by providing: (1) unbiased and automated sampling; (2) independent validation of current methods; (3) a larger sample area for landscape-level analysis of habitat selection for stopover, and (4) an opportunity to study flight behavior. In particular, radar-acoustic sensor fusion can...
NASA Astrophysics Data System (ADS)
Alemany, Kristina
Electric propulsion has recently become a viable technology for spacecraft, enabling shorter flight times, fewer required planetary gravity assists, larger payloads, and/or smaller launch vehicles. With the maturation of this technology, however, comes a new set of challenges in the area of trajectory design. Because low-thrust trajectory optimization has historically required long run-times and significant user-manipulation, mission design has relied on expert-based knowledge for selecting departure and arrival dates, times of flight, and/or target bodies and gravitational swing-bys. These choices are generally based on known configurations that have worked well in previous analyses or simply on trial and error. At the conceptual design level, however, the ability to explore the full extent of the design space is imperative to locating the best solutions in terms of mass and/or flight times. Beginning in 2005, the Global Trajectory Optimization Competition posed a series of difficult mission design problems, all requiring low-thrust propulsion and visiting one or more asteroids. These problems all had large ranges on the continuous variables---launch date, time of flight, and asteroid stay times (when applicable)---as well as being characterized by millions or even billions of possible asteroid sequences. Even with recent advances in low-thrust trajectory optimization, full enumeration of these problems was not possible within the stringent time limits of the competition. This investigation develops a systematic methodology for determining a broad suite of good solutions to the combinatorial, low-thrust, asteroid tour problem. The target application is for conceptual design, where broad exploration of the design space is critical, with the goal being to rapidly identify a reasonable number of promising solutions for future analysis. The proposed methodology has two steps. The first step applies a three-level heuristic sequence developed from the physics of the problem, which allows for efficient pruning of the design space. The second phase applies a global optimization scheme to locate a broad suite of good solutions to the reduced problem. The global optimization scheme developed combines a novel branch-and-bound algorithm with a genetic algorithm and an industry-standard low-thrust trajectory optimization program to solve for the following design variables: asteroid sequence, launch date, times of flight, and asteroid stay times. The methodology is developed based on a small sample problem, which is enumerated and solved so that all possible discretized solutions are known. The methodology is then validated by applying it to a larger intermediate sample problem, which also has a known solution. Next, the methodology is applied to several larger combinatorial asteroid rendezvous problems, using previously identified good solutions as validation benchmarks. These problems include the 2nd and 3rd Global Trajectory Optimization Competition problems. The methodology is shown to be capable of achieving a reduction in the number of asteroid sequences of 6-7 orders of magnitude, in terms of the number of sequences that require low-thrust optimization as compared to the number of sequences in the original problem. More than 70% of the previously known good solutions are identified, along with several new solutions that were not previously reported by any of the competitors. 
Overall, the methodology developed in this investigation provides an organized search technique for the low-thrust mission design of asteroid rendezvous problems.
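As an illustration of only the genetic-algorithm layer of such a scheme, the sketch below evolves a chromosome of continuous design variables (launch date and times of flight) for a fixed asteroid sequence. The fitness function is a stand-in placeholder: in the actual methodology each candidate would be scored by a low-thrust trajectory optimization program, and the bounds and GA settings shown are invented for illustration.

```python
import random

random.seed(0)

# Chromosome: (launch_date, tof_1, tof_2) in days, for a fixed asteroid sequence.
BOUNDS = [(0.0, 2000.0), (100.0, 800.0), (100.0, 800.0)]

def fitness(x):
    # Placeholder objective standing in for a low-thrust optimizer's figure of
    # merit; here it simply prefers a mid-window launch and moderate flight times.
    return -((x[0] - 900) ** 2 / 1e6 + (x[1] - 400) ** 2 / 1e5 + (x[2] - 350) ** 2 / 1e5)

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(x, rate=0.2):
    return [min(hi, max(lo, v + random.gauss(0, 0.05 * (hi - lo)))) if random.random() < rate else v
            for v, (lo, hi) in zip(x, BOUNDS)]

pop = [random_individual() for _ in range(50)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                     # elitist selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(40)]
best = max(pop, key=fitness)
print("best launch date / TOFs (days):", [round(v, 1) for v in best])
```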
Two-Photon Excitation in Biological Material for Conventional and Long Working-Distance Objectives.
NASA Astrophysics Data System (ADS)
Keeler, W. J.; McGhee, P.
2000-03-01
The application of laser two-photon excitation or nonlinear second-harmonic generation to imaging, spectroscopy, and light-activated medical therapies is an expanding field of research. When small feature sizes such as cells and their components are to be studied, high numerical aperture (NA) lenses are required to obtain the necessary lateral and axial resolutions. If one wishes to increase the depth of sample penetration, factors such as scattering and absorption quickly degrade the quality of the focused beam. The problem is further exacerbated by the short working distance of conventional high-NA microscope objectives if they are used for light delivery and pickup. These lenses and their accompanying eyepieces are designed to produce an exit pupil that can be accommodated by the human eye. Such a design will underfill detectors such as large CCD arrays. To simultaneously increase the working distance at the sample and the system exit pupil, larger-scale objectives can be used. We will report the results of two-photon excitation and fluorescence investigations of several feature sizes as a function of penetration depth in homogeneous media and tissue samples, for conventional and long working-distance objectives. The possible implications of these results for imaging and therapeutic dose delivery will also be presented.
Optimal selection of epitopes for TXP-immunoaffinity mass spectrometry.
Planatscher, Hannes; Supper, Jochen; Poetz, Oliver; Stoll, Dieter; Joos, Thomas; Templin, Markus F; Zell, Andreas
2010-06-25
Mass spectrometry (MS) based protein profiling has become one of the key technologies in biomedical research and biomarker discovery. One bottleneck in MS-based protein analysis is sample preparation and an efficient fractionation step to reduce the complexity of the biological samples, which are too complex to be analyzed directly with MS. Sample preparation strategies that reduce the complexity of tryptic digests by using immunoaffinity based methods have shown to lead to a substantial increase in throughput and sensitivity in the proteomic mass spectrometry approach. The limitation of using such immunoaffinity-based approaches is the availability of the appropriate peptide specific capture antibodies. Recent developments in these approaches, where subsets of peptides with short identical terminal sequences can be enriched using antibodies directed against short terminal epitopes, promise a significant gain in efficiency. We show that the minimal set of terminal epitopes for the coverage of a target protein list can be found by the formulation as a set cover problem, preceded by a filtering pipeline for the exclusion of peptides and target epitopes with undesirable properties. For small datasets (a few hundred proteins) it is possible to solve the problem to optimality with moderate computational effort using commercial or free solvers. Larger datasets, like full proteomes require the use of heuristics.
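A minimal sketch of the set-cover formulation mentioned above, using a greedy heuristic (one common approach when exact solvers become impractical for proteome-scale lists; for small datasets the paper solves the problem to optimality). The epitope sequences and protein IDs are invented toy data.

```python
def greedy_set_cover(universe, candidates):
    """Greedy heuristic for set cover: repeatedly pick the terminal epitope
    whose peptides cover the most not-yet-covered target proteins.

    universe   -- set of target protein IDs to cover
    candidates -- dict mapping epitope -> set of protein IDs it captures
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda e: len(candidates[e] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:          # remaining targets cannot be covered at all
            break
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

# Toy example with hypothetical 4-residue terminal epitopes.
targets = {"P1", "P2", "P3", "P4", "P5"}
epitopes = {
    "LSKR": {"P1", "P2"},
    "GAVR": {"P2", "P3", "P4"},
    "TYEK": {"P5"},
    "QNPK": {"P1", "P4"},
}
picked, missed = greedy_set_cover(targets, epitopes)
print("antibody targets:", picked, "uncoverable:", missed)
```

The greedy choice here is the classical approximation for set cover; in practice it would run after the filtering pipeline has already excluded peptides and epitopes with undesirable properties.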
Theoretical constraints on dynamic pulverization of fault zone rocks
NASA Astrophysics Data System (ADS)
Xu, Shiqing; Ben-Zion, Yehuda
2017-04-01
We discuss dynamic rupture results aiming to elucidate the generation mechanism of pulverized fault zone rocks (PFZR) observed in 100-200 m wide belts distributed asymmetrically across major strike-slip faults separating different crustal blocks. Properties of subshear and supershear ruptures are considered using analytical results of Linear Elastic Fracture Mechanics and numerical simulations of Mode-II ruptures along faults between similar or dissimilar solids. The dynamic fields of bimaterial subshear ruptures are expected to produce off-fault damage primarily on the stiff side of the fault, with tensile cracks having no preferred orientation, in agreement with field observations. Subshear ruptures in a homogeneous solid are expected to produce off-fault damage with high-angle tensile cracks on the extensional side of the fault, while supershear ruptures between similar or dissimilar solids are likely to produce off-fault damage on both sides of the fault with preferred tensile crack orientations. One or more of these features are not consistent with properties of natural samples of PFZR. At a distance of about 100 m from the fault, subshear and supershear ruptures without stress singularities produce strain rates up to 1 s-1. This is less than required for rock pulverization in laboratory experiments with centimetre-scale intact rock samples, but may be sufficient for pulverizing larger samples with pre-existing damage.
Saripella, Kalyan K; Mallipeddi, Rama; Neau, Steven H
2014-11-20
Polyplasdone of different particle sizes was used to study the sorption, desorption, and distribution of water, and to seek evidence that larger particles can internalize water. The three samples were Polyplasdone® XL, XL-10, and INF-10. Moisture sorption and desorption isotherms at 25 °C, at 5% intervals from 0 to 95% relative humidity (RH), were generated by dynamic vapor sorption analysis. The three products provided similar data, judged to be Type III with a small hysteresis that appears when RH is below 65%. The absence of a rounded knee in the sorption curve suggests that multilayers form before the monolayer is completed. The hysteresis indicates that internally absorbed moisture is trapped as the water is desorbed and the polymer sample shrinks, thus requiring a lower level of RH to continue desorption. The Guggenheim-Anderson-de Boer (GAB) and the Young and Nelson equations were fitted in the data analysis. The W(m), C(G), and K values from the GAB analysis are similar across the three samples, revealing 0.962 water molecules per repeating unit in the monolayer. A small amount of absorbed water is identified, but this is consistent across the three particle sizes. Copyright © 2014 Elsevier B.V. All rights reserved.
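For illustration, the sketch below fits the GAB isotherm to a synthetic moisture-sorption data set with scipy; the parameter values and noise level are invented, not the Polyplasdone measurements reported above.

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, Wm, C, K):
    """Guggenheim-Anderson-de Boer isotherm: equilibrium moisture content
    (g water / g solid) as a function of water activity aw (= RH/100)."""
    return Wm * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

# Synthetic sorption data standing in for a crospovidone isotherm
# (illustrative parameter values only).
aw = np.arange(0.05, 0.96, 0.05)
true = gab(aw, Wm=0.10, C=5.0, K=0.85)
y = true + np.random.default_rng(0).normal(scale=0.002, size=aw.size)

popt, _ = curve_fit(gab, aw, y, p0=(0.05, 1.0, 0.5),
                    bounds=([0, 0, 0], [1, 100, 1]))
Wm, C, K = popt
print(f"Wm={Wm:.3f} g/g, C={C:.2f}, K={K:.3f}")
```

The fitted monolayer capacity Wm is the quantity that, scaled by the molar masses of water and the repeating unit, yields a "water molecules per repeating unit" figure of the kind quoted in the abstract.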
[Evaluation of the quality of Anales Españoles de Pediatría versus Medicina Clínica].
Bonillo Perales, A
2002-08-01
To compare the scientific methodology and quality of articles published in Anales Españoles de Pediatría and Medicina Clínica. A stratified and randomized selection of 40 original articles published in 2001 in Anales Españoles de Pediatría and Medicina Clínica was made. Methodological errors in the critical analysis of original articles (21 items), epidemiological design, sample size, statistical complexity and levels of scientific evidence in both journals were compared using the chi-squared and/or Student's t-test. No differences were found between Anales Españoles de Pediatría and Medicina Clínica in the critical evaluation of original articles (p > 0.2). In original articles published in Anales Españoles de Pediatría, the designs were of lower scientific evidence (a lower proportion of clinical trials, cohort and case-control studies) (17.5 vs 42.5 %, p 0.05), sample sizes were smaller (p 0.003) and there was less statistical complexity in the results section (p 0.03). To improve the scientific quality of Anales Españoles de Pediatría, improved study designs, larger sample sizes and greater statistical complexity are required in its articles.
Enhanced Superconductivity in Sr2CuO(4-x)
NASA Astrophysics Data System (ADS)
Geballe, Theodore
2010-03-01
The cause of the enhanced Tc of Sr2CuO(4-x), which is almost a factor of two larger than that of optimally doped La 214 superconductors, has remained a challenge since its discovery by Hiroi et al. [1]. The lack of progress is due to the difficulties in synthesis, which requires a strong oxidizing agent at high pressure and temperature. The resulting superconductor sample is multiphase, leading to some ambiguity in interpretation. An unjustified suggestion that the results are spurious is negated by recent experiments in which similar behavior is found but with samples prepared using a different synthesis [2]. This has led us to reconsider the available data in the literature [3]. The experimental value of x ≈ 0.6 suggests that the superconductivity originates in very heavily overdoped CuO2 layers containing ordered oxygen vacancies. The data support the idea that there is an exciting region of the cuprate phase diagram waiting to be understood, but better samples are needed before the possible pairing mechanisms we can think of, or others yet to be determined, can be investigated. [1] Z. Hiroi et al., Nature 364 (1993) 315 [2] Q. Q. Liu et al., Phys. Rev. B 74 (2006) 100506 [3] T. H. Geballe and M. Marezio, Physica C 469 (2009) 680
Thaitrong, Numrin; Kim, Hanyoup; Renzi, Ronald F; Bartsch, Michael S; Meagher, Robert J; Patel, Kamlesh D
2012-12-01
We have developed an automated quality control (QC) platform for next-generation sequencing (NGS) library characterization by integrating a droplet-based digital microfluidic (DMF) system with a capillary-based reagent delivery unit and a quantitative CE module. Using an in-plane capillary-DMF interface, a prepared sample droplet was actuated into position between the ground electrode and the inlet of the separation capillary to complete the circuit for an electrokinetic injection. Using a DNA ladder as an internal standard, the CE module with a compact LIF detector was capable of detecting dsDNA in the range of 5-100 pg/μL, suitable for the amount of DNA required by the Illumina Genome Analyzer sequencing platform. This DMF-CE platform consumes tenfold less sample volume than the current Agilent BioAnalyzer QC technique, preserving precious sample while providing necessary sensitivity and accuracy for optimal sequencing performance. The ability of this microfluidic system to validate NGS library preparation was demonstrated by examining the effects of limited-cycle PCR amplification on the size distribution and the yield of Illumina-compatible libraries, demonstrating that as few as ten cycles of PCR bias the size distribution of the library toward undesirable larger fragments. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Optimized Geometry for Superconducting Sensing Coils
NASA Technical Reports Server (NTRS)
Eom, Byeong Ho; Pananen, Konstantin; Hahn, Inseob
2008-01-01
An optimized geometry has been proposed for superconducting sensing coils that are used in conjunction with superconducting quantum interference devices (SQUIDs) in magnetic resonance imaging (MRI), magnetoencephalography (MEG), and related applications in which magnetic fields of small dipoles are detected. In designing a coil of this type, as in designing other sensing coils, one seeks to maximize the sensitivity of the detector of which the coil is a part, subject to geometric constraints arising from the proximity of other required equipment. In MRI or MEG, the main benefit of maximizing the sensitivity would be to enable minimization of measurement time. In general, to maximize the sensitivity of a detector based on a sensing coil coupled with a SQUID sensor, it is necessary to maximize the magnetic flux enclosed by the sensing coil while minimizing the self-inductance of this coil. Simply making the coil larger may increase its self-inductance and does not necessarily increase sensitivity because it also effectively increases the distance from the sample that contains the source of the signal that one seeks to detect. Additional constraints on the size and shape of the coil and on the distance from the sample arise from the fact that the sample is at room temperature but the coil and the SQUID sensor must be enclosed within a cryogenic shield to maintain superconductivity.
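The trade-off described above, more enclosed flux versus more self-inductance as the coil grows, can be sketched with textbook formulas for a single circular pickup loop. The dipole moment, standoff distance, and wire radius below are placeholder values, and Phi/sqrt(L) is only a rough figure of merit, not the actual SQUID coupling or the design procedure used in this work.

```python
# Textbook sketch of the flux vs. self-inductance trade-off for a single
# circular pickup loop viewing a small magnetic dipole on its axis.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def dipole_flux(m, R, d):
    """Flux through a coaxial loop of radius R at axial distance d from dipole m."""
    return MU0 * m * R**2 / (2.0 * (R**2 + d**2) ** 1.5)

def loop_inductance(R, a):
    """Self-inductance of a circular loop of radius R made from wire of radius a."""
    return MU0 * R * (np.log(8.0 * R / a) - 2.0)

m, d, a = 1e-9, 0.05, 5e-4          # assumed dipole moment, standoff, wire radius
radii = np.linspace(0.005, 0.2, 400)
fom = dipole_flux(m, radii, d) / np.sqrt(loop_inductance(radii, a))
best = radii[np.argmax(fom)]
print(f"figure of merit peaks near R = {best*100:.1f} cm for a {d*100:.0f} cm standoff")
```

The figure of merit peaks at a finite radius: beyond that point the extra inductance and the larger effective distance to the source outweigh the extra enclosed area, consistent with the qualitative argument above.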
Sullivan, Kathleen E; Fleming, Greg; Terrell, Scott; Smith, Dustin; Ridgley, Frank; Valdes, Eduardo V
2014-12-01
Recent issues surrounding captive amphibians are often nutritionally related problems, such as hypovitaminosis A. Although supplementation of frogs with vitamin A is a topic of investigation, the underlying issue is understanding vitamin A metabolism in amphibian species. To develop a range of "normal" vitamin A concentrations for captive amphibians, baseline vitamin A concentrations must be established in wild amphibian species. In this study, two species, Cuban tree frogs (Osteopilus septentrionalis; n = 59) and marine toads (Rhinella marina; n = 20), were collected from the wild as part of an invasive species control program at Zoo Miami, Miami, Florida. Serum, liver, and whole body samples were analyzed for vitamin A content. The Cuban tree frogs showed higher average vitamin A concentrations in serum (82.8 ppb), liver (248.3 IU/g), and whole body (5474.7 IU/kg) samples than the marine toads (60.1 ppb, 105.3 IU/g, and 940.7 IU/kg, respectively), but the differences were not significant (P = 0.22). What can be considered "normal" vitamin A concentrations across different amphibian species requires further investigation. Although all amphibians collected in this study appeared healthy, a larger sample size of animals, with known health histories and diets, may provide stronger evidence of normal expectations.
Fayaz, Shima; Fard-Esfahani, Pezhman; Torbati, Peyman Mohammadi
2014-01-01
Recently, mutations in the genes involved in cell cycle control, including CHEK2, have been considered as etiological factors in different kinds of cancers. The CHEK2 protein plays an important role in preventing cells with damaged DNA from entering mitosis. In this study, the potential effects of two common mutations of the CHEK2 gene, IVS2+1G>A and Ile157Thr, in differentiated thyroid carcinoma (DTC) were evaluated. A total of 100 patients admitted to the Research Institute for Nuclear Medicine were diagnosed with DTC based on pathology reports of surgical samples. An additional 100 people with no cancer history were selected as a control group. PCR-HRM (high-resolution melting) analysis was performed to genotype each mutation in all case and control samples separately. In the analysis of the IVS2+1G>A and Ile157Thr mutations of the CHEK2 gene in the case and control groups, all samples were identified as homozygous wild type. This finding suggests that the IVS2+1G>A and Ile157Thr mutations of the CHEK2 gene do not constitute a risk factor for DTC in the Iranian population. However, further studies with a larger population are required to confirm this outcome.
Some Impacts of Risk-Centric Certification Requirements for UAS
NASA Technical Reports Server (NTRS)
Neogi, Natasha A. (Inventor); Hayhurst, Kelly J.; Maddalon, Jeffrey M.; Verstynen, Harry A.
2016-01-01
This paper discusses results from a recent study that investigates certification requirements for an unmanned rotorcraft performing agricultural application operations. The process of determining appropriate requirements using a risk-centric approach revealed a number of challenges that could impact larger UAS standardization efforts. Fundamental challenges include selecting the correct level of abstraction for requirements to permit design flexibility, transforming human-centric operational requirements to aircraft airworthiness requirements, and assessing all hazards associated with the operation.
Ryan, Herbert; van Bentum, Jan; Maly, Thorsten
2017-04-01
In recent years, high-field Dynamic Nuclear Polarization (DNP) enhanced NMR spectroscopy has gained significant interest. In high-field DNP-NMR experiments (⩾400 MHz 1H NMR, ⩾9.4 T), a stand-alone gyrotron is often used to generate high microwave/THz power to produce sufficiently high microwave-induced B1e fields at the position of the NMR sample. These devices typically require a second, stand-alone superconducting magnet to operate. Here we present the design and realization of a ferroshim insert to create two iso-centers inside a commercially available wide-bore NMR magnet. This work is part of a larger project to integrate a gyrotron into NMR magnets, effectively eliminating the need for a second, stand-alone superconducting magnet. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tralshawala, Nilesh; Howard, Don; Knight, Bryon
2008-02-28
In conventional infrared thermography, determination of thermal diffusivity requires thickness information. Recently GE has been experimenting with the use of lateral heat flow to determine thermal diffusivity without thickness information. This work builds on previous work at NASA Langley and Wayne State University, but we incorporate thermal time-of-flight (tof) analysis rather than curve fitting to obtain quantitative information. We have developed appropriate theoretical models and a tof-based data analysis framework to experimentally determine all components of thermal diffusivity from the time-temperature measurements. Initial validation was carried out using finite difference simulations. Experimental validation was done using anisotropic carbon fiber reinforced polymer (CFRP) composites. We found that in the CFRP samples used, the in-plane component of diffusivity is about eight times larger than the through-thickness component.
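As a hedged sketch of a time-of-flight style analysis, the example below fits lateral peak-arrival times, assuming the textbook 1D relation x^2 = 2*alpha*t_peak for diffusion from an instantaneous line source; the measurement values are invented and this simplification is not GE's analysis framework.

```python
# Hedged sketch of a time-of-flight style estimate of in-plane diffusivity,
# assuming x^2 = 2*alpha*t_peak (1D lateral diffusion from an instantaneous
# line source). Offsets and peak times below are invented for illustration.
import numpy as np

x = np.array([2e-3, 4e-3, 6e-3, 8e-3])      # lateral offsets from the heat source (m)
t_peak = np.array([0.9, 3.6, 8.2, 14.5])    # observed peak-arrival times (s)

# least-squares slope (through the origin) of x^2 versus t_peak equals 2*alpha
slope = np.sum(x**2 * t_peak) / np.sum(t_peak**2)
alpha = slope / 2.0
print(f"in-plane diffusivity ~ {alpha:.2e} m^2/s")
```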
Spittal, Matthew J; Carlin, John B; Currier, Dianne; Downes, Marnie; English, Dallas R; Gordon, Ian; Pirkis, Jane; Gurrin, Lyle
2016-10-31
The Australian Longitudinal Study on Male Health (Ten to Men) used a complex sampling scheme to identify potential participants for the baseline survey. This raises important questions about when and how to adjust for the sampling design when analyzing data from the baseline survey. We describe the sampling scheme used in Ten to Men focusing on four important elements: stratification, multi-stage sampling, clustering and sample weights. We discuss how these elements fit together when using baseline data to estimate a population parameter (e.g., population mean or prevalence) or to estimate the association between an exposure and an outcome (e.g., an odds ratio). We illustrate this with examples using a continuous outcome (weight in kilograms) and a binary outcome (smoking status). Estimates of a population mean or disease prevalence using Ten to Men baseline data are influenced by the extent to which the sampling design is addressed in an analysis. Estimates of mean weight and smoking prevalence are larger in unweighted analyses than weighted analyses (e.g., mean = 83.9 kg vs. 81.4 kg; prevalence = 18.0 % vs. 16.7 %, for unweighted and weighted analyses respectively) and the standard error of the mean is 1.03 times larger in an analysis that acknowledges the hierarchical (clustered) structure of the data compared with one that does not. For smoking prevalence, the corresponding standard error is 1.07 times larger. Measures of association (mean group differences, odds ratios) are generally similar in unweighted or weighted analyses and whether or not adjustment is made for clustering. The extent to which the Ten to Men sampling design is accounted for in any analysis of the baseline data will depend on the research question. When the goals of the analysis are to estimate the prevalence of a disease or risk factor in the population or the magnitude of a population-level exposure-outcome association, our advice is to adopt an analysis that respects the sampling design.
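A minimal sketch of the two adjustments discussed above, weighting the point estimate and inflating the standard error for clustering via a design effect, is given below with synthetic data; the weights, cluster labels, and intra-cluster correlation are assumptions, not Ten to Men values.

```python
# Synthetic illustration (not the study's code) of survey-weighted estimation
# and a cluster design-effect inflation of the standard error.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
smoker = rng.random(n) < 0.18                # synthetic binary outcome
weight = rng.uniform(0.5, 2.0, n)            # assumed survey weights
cluster = rng.integers(0, 50, n)             # assumed sampling cluster labels

# weighted prevalence estimate
p_w = np.sum(weight * smoker) / np.sum(weight)

# naive SRS standard error vs. a simple cluster design-effect inflation
se_srs = np.sqrt(p_w * (1.0 - p_w) / n)
m_bar = n / np.unique(cluster).size          # average cluster size
icc = 0.01                                   # assumed intra-cluster correlation
deff = 1.0 + (m_bar - 1.0) * icc
print(f"weighted prevalence = {p_w:.3f}, SE inflation = {np.sqrt(deff):.2f}x")
```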
New Era of Scientific Ocean Drilling
NASA Astrophysics Data System (ADS)
Eguchi, N.; Toczko, S.; Sanada, Y.; Igarashi, C.; Kubo, Y.; Maeda, L.; Sawada, I.; Takase, K.; Kyo, N.
2014-12-01
The D/V Chikyu, committed to scientific ocean drilling since 2007, has completed thirteen IODP expeditions, and Chikyu's enhanced drilling technology provides the means to reach deep targets, along with enhanced well logging, deep-water riserless drilling, and a state-of-the-art laboratory. Chikyu recovered core samples from 2466 meters below sea floor (mbsf) in IODP Exp. 337 and drilled to 3058.5 mbsf in IODP Exp. 348, but these are still not the limit of Chikyu's capability. As deep as these depths are, they are just halfway to the 5200 mbsf plate boundary target for the NanTroSEIZE deep riser borehole. There are several active IODP proposals in the pipeline, each with scientific targets requiring several thousand meters of penetration below the sea floor. Riser technology is the only way to collect samples and data from that depth. Well logging has been enhanced with the adoption of riser drilling, especially logging-while-drilling (LWD). LWD has several advantages over wireline logging and provides more opportunities for continuous measurements, even in unstable boreholes. Because of the larger diameter of riser pipes and enhanced borehole stability, Chikyu can use several state-of-the-art downhole tools, e.g., a fracture tester, a fluid sampling tool, wider borehole imaging, and the latest sonic tools. These new technologies and tools can potentially expand the envelope of scientific ocean drilling. Chikyu also gives us access to ultra-deep-water riserless drilling. IODP Exp. 343/343T, investigating the March 2011 Tohoku-Oki Earthquake, explored the toe of the landward slope of the Japan Trench. This expedition reached the plate boundary fault target at more than 800 mbsf in water depths over 6900 m for logging-while-drilling, coring, and observatory installation. This deep-water drilling capability also expands the scientific ocean drilling envelope and provides access to previously unreachable targets. On top of these operational capabilities, Chikyu's onboard laboratory is equipped with state-of-the-art instruments to analyze all science samples. X-ray CT creates non-destructive 3D images of core samples, providing high-resolution structural detail. The microbiology laboratory offers the clean, contamination-free work environments required for microbiological samples.
The Long-term Natural History of Geographic Atrophy from Age-Related Macular Degeneration
Sunness, Janet S.; Margalit, Eyal; Srikumaran, Divya; Applegate, Carol A.; Tian, Yan; Perry, Daniel; Hawkins, Barbara S.; Bressler, Neil M.
2008-01-01
Purpose: To report the enlargement rate of geographic atrophy (GA) over time, its relationship to size of atrophy at baseline and to prior enlargement rate, and the implications for designing future treatment trials for GA. Design: Prospective natural history study of GA resulting from age-related macular degeneration. Participants: Two hundred twelve eyes of 131 patients were included in the analysis. Methods: Annual follow-up included stereo color fundus photographs. The areas of GA were identified and measured, and the rate of enlargement of the atrophy was assessed. Sample sizes for clinical trials using systemic treatment and uniocular treatment were determined. Main Outcome Measure: Rate of enlargement of the atrophy. Results: The median overall enlargement rate was 2.1 mm2/year (mean, 2.6 mm2/year). Eyes with larger areas of atrophy at baseline tended to have larger enlargement rates, but knowledge of prior rates of enlargement was the most significant factor in predicting subsequent enlargement rates. There was high concordance between the enlargement rates in the 2 eyes of patients with bilateral GA (correlation coefficient, 0.76). To detect a 25% reduction in enlargement rate for a systemic treatment (α, 0.05; power, 0.80; losses to follow-up, 15%), 153 patients each in a control and treatment group would be required for a trial with a 2-year follow-up period for each patient. For a uniocular treatment, 38 patients with bilateral GA would be required, with the untreated eye serving as a control for the treated eye. Conclusions: Treatment trials for GA with an outcome variable of change in enlargement rate are feasible. PMID:17270676
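The kind of sample-size calculation described above can be sketched with the standard normal-approximation formula for comparing two mean enlargement rates, inflated for 15% losses to follow-up; the standard deviation of the enlargement rate below is an assumed value, so the result only roughly approaches the 153 per group reported.

```python
# Rough sketch of a two-sample sample-size calculation for detecting a 25%
# reduction in mean GA enlargement rate (two-sided alpha = 0.05, power = 0.80),
# inflated for dropout. The SD is an assumption, not a value from the paper.
from scipy.stats import norm

mean_rate = 2.1                 # mm^2/year, median enlargement rate reported above
delta = 0.25 * mean_rate        # treatment effect to detect
sd = 1.5                        # assumed SD of the enlargement rate (mm^2/year)
alpha, power, loss = 0.05, 0.80, 0.15

z = norm.ppf(1.0 - alpha / 2.0) + norm.ppf(power)
n_per_group = 2.0 * (z * sd / delta) ** 2
n_per_group /= (1.0 - loss)     # inflate for losses to follow-up
print(f"~{n_per_group:.0f} patients per group")
```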
Relation between urbanization and water quality of streams in the Austin area, Texas
Veenhuis, J.E.; Slade, R.M.
1990-01-01
The ratio of the number of samples with detectable concentrations to the total number of samples analyzed for 18 inorganic trace elements and the concentrations of many of these minor constituents increased with increasing development classifications. Twenty-two of the 42 synthetic organic compounds for which samples were analyzed were detected in one or more samples. The compounds were detected more frequently and in larger concentrations at the sites with more urban classifications.
NASA Astrophysics Data System (ADS)
Aneja, V. P.; Rumsey, I. C.; Lonneman, W. A.
2011-12-01
The emission of NMVOCs from swine concentrated animal feeding operations (CAFOs) in North Carolina is of concern due to their contribution to odor. In addition, of the 188 listed hazardous air pollutants (HAPs), 162 are classified as NMVOCs. NMVOC emissions were determined over four seasonal sampling periods from an anaerobic lagoon and barn at a swine CAFO in North Carolina. Sampling was conducted during the period June 2007 through April 2008. Air samples were collected using SUMMA and fused-silica lined (FSL) canisters and were analyzed for NMVOCs using a gas chromatography-flame ionization detection (GC-FID) system. Nine to eleven canister samples were collected from both the anaerobic lagoon and the barn over a ~1 week sampling period, with samples collected on a minimum of four different days. Measurements of meteorological and physicochemical parameters were made during the lagoon and barn sampling. Six NMVOCs (acetone, acetaldehyde, ethanol, 2-ethyl-1-hexanol, methanol and methyl ethyl ketone (MEK)) identified in lagoon samples were classified as having significantly larger emissions than other NMVOCs. Overall average lagoon fluxes of these NMVOCs ranged from 0.18 μg m-2 min-1 for 2-ethyl-1-hexanol to 2.11 μg m-2 min-1 for acetone. In barn samples there were also six NMVOCs (acetaldehyde, acetone, 2,3-butanedione, ethanol, methanol and 4-methylphenol) classified as having significantly larger emissions than other compounds. Overall average concentrations for these six compounds ranged from 2.87 ppb for 4-methylphenol to 16.12 ppb for ethanol. The overall average normalized emissions ranged from 0.10 g day-1 AU-1 (AU = one animal unit, representing 500 kg of live animal weight) for acetaldehyde to 0.45 g day-1 AU-1 for ethanol. Eight odorous compounds were identified in lagoon and barn samples: 2,3-butanedione, decanal, ethylbenzene, heptanal, hexanal, 4-methylphenol, nonanal, and octanal. Of the eight, 4-methylphenol and 2,3-butanedione exceeded their odor thresholds most frequently. Four HAPs identified in lagoon and barn samples were also classified as having significantly larger lagoon and barn emissions than other NMVOCs: methanol, 4-methylphenol, acetaldehyde and MEK. The overall average lagoon fluxes and the overall average normalized barn emissions for the reported NMVOCs were used to estimate swine CAFO emissions for North Carolina. Three NMVOCs, ethanol, acetone and methanol, were estimated to have considerably larger North Carolina swine CAFO emissions than the others, with emissions of 206,367 kg yr-1, 134,765 kg yr-1 and 134,732 kg yr-1, respectively. The majority of individual compounds' North Carolina swine CAFO emissions were from barns, with barns contributing from 68.6% to ~100%.
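The conversion from a normalized emission rate to an annual total is simple arithmetic, sketched below for the barn ethanol rate; the number of animal units is a placeholder, not the inventory used in the paper, and the statewide totals above also include the lagoon pathway.

```python
# Illustrative arithmetic only: converting a normalized barn emission rate
# (g day^-1 AU^-1) into an annual total under an assumed animal-unit count.
g_per_day_per_au = 0.45      # ethanol barn emission rate from the measurements above
animal_units = 1.0e6         # assumed number of swine animal units (placeholder)

kg_per_year = g_per_day_per_au * animal_units * 365.0 / 1000.0
print(f"~{kg_per_year:,.0f} kg/yr of ethanol from barns under these assumptions")
```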
NASA Astrophysics Data System (ADS)
Stern, Rowena F.; Picard, Kathryn T.; Hamilton, Kristina M.; Walne, Antony; Tarran, Glen A.; Mills, David; McQuatters-Gollop, Abigail; Edwards, Martin
2015-09-01
There is a paucity of data on long-term, spatially resolved changes in microbial diversity and biogeography in marine systems, and yet these organisms underpin fundamental ecological processes in the oceans, affecting the socio-economic value of the marine environment. We report results from a new autonomous Water and Microplankton Sampler (WaMS) that is carried within the Continuous Plankton Recorder (CPR). Whilst the CPR, with its larger mesh size (270 μm), is designed to capture larger plankton, the WaMS was designed as an additional device to capture plankton below 50 μm and delicate larger species that are often destroyed by net sampling methods. A 454 pyrosequencing and flow cytometric investigation of eukaryotic microbes using partial 18S rDNA from thirteen WaMS samples collected over three months in the English Channel revealed a wide diversity of organisms. Alveolates, Fungi, and picoplanktonic Chlorophytes were the most common lineages captured despite the small sample volumes (200-250 ml). The survey also identified Cercozoa and MAST heterotrophic Stramenopiles, normally missed in microscopy-based plankton surveys. The most common was the likely parasitic LKM11 Rozellomycota lineage, which comprised 43.2% of all reads and is rarely observed in marine pelagic surveys. An additional 9.5% of reads belonged to other parasitic lineages, including marine Syndiniales and Ichthyosporea. Sample variation was considerable, indicating that microbial diversity is spatially or temporally patchy. Our study has shown that the WaMS sampling system is autonomous, versatile and robust and, owing to its deployment on the established CPR network, is a cost-effective monitoring tool for microbial diversity and the detection of smaller and delicate taxa.
Downsizing Antenna Technologies for Mobile and Satellite Communications
NASA Technical Reports Server (NTRS)
Huang, J.; Densmore, A.; Tulintseff, A.; Jamnejad, V.
1993-01-01
Due to the increasingly stringent functional requirements (larger capacity, longer distances, etc.) of modern-day communication systems, higher antenna gains are generally needed. This higher gain implies larger antenna size and mass, which are undesirable in many systems. Consequently, downsizing antenna technology has become one of the most critical areas for research and development efforts. Techniques to reduce antenna size are categorized and briefly discussed.
7 CFR 29.9406 - Failure of warehouse to comply with opening and selling schedule.
Code of Federal Regulations, 2010 CFR
2010-01-01
... available on the next succeeding sales day. Any such adjustment which is within 100 pounds of the required...,000 pounds plus the larger of 3 pounds for each pound in excess of 5,000 pounds or 5,000 pounds; for the second violation, the adjustment shall be 5,000 pounds plus the larger of 5 pounds for each pound...
ERIC Educational Resources Information Center
Masters, James S.
2010-01-01
With the need for larger and larger banks of items to support adaptive testing and to meet security concerns, large-scale item generation is a requirement for many certification and licensure programs. As part of the mass production of items, it is critical that the difficulty and the discrimination of the items be known without the need for…
Quantitative analysis of nano-pore geomaterials and representative sampling for digital rock physics
NASA Astrophysics Data System (ADS)
Yoon, H.; Dewers, T. A.
2014-12-01
Geomaterials containing nano-pores (e.g., shales and carbonate rocks) have become increasingly important for emerging problems such as unconventional gas and oil resources, enhanced oil recovery, and geologic storage of CO2. Accurate prediction of coupled geophysical and chemical processes at the pore scale requires realistic representation of pore structure and topology. This is especially true for chalk materials, where pore networks are small and complex and require characterization at the sub-micron scale. In this work, we apply laser scanning confocal microscopy to characterize pore structures and microlithofacies at micron and greater scales, and dual focused ion beam-scanning electron microscopy (FIB-SEM) for 3D imaging of nanometer-to-micron scale microcracks and pore distributions. Although imaging techniques for nano-pore characterization have advanced, a problem of scale with FIB-SEM images is how to take nanometer-scale information and apply it at the thin-section or larger scale. In this work, several texture characterization techniques, including graph-based spectral segmentation, support vector machines, and principal component analysis, are applied to segment clusters, with each cluster represented by 1-2 FIB-SEM samples. Geometric and topological properties are analyzed, and the lattice-Boltzmann method (LBM) is used to obtain permeability at several different scales. Upscaling of permeability to the Darcy scale (e.g., the thin-section scale) with the image dataset will be discussed, with emphasis on understanding microfracture-matrix interaction, representative volume for FIB-SEM sampling, and multiphase flow and reactive transport. Funding from the DOE Basic Energy Sciences Geosciences Program is gratefully acknowledged. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
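As a simplified stand-in for one step of the image-based workflow described above, the sketch below computes porosity and a Kozeny-Carman permeability estimate from a synthetic segmented volume; this is not the lattice-Boltzmann calculation used in the study, and the synthetic image, voxel size, and Kozeny constant are assumptions.

```python
# Simplified illustration: porosity and a rough Kozeny-Carman permeability
# estimate from a segmented (binary) sub-volume. Not the study's LBM workflow.
import numpy as np

rng = np.random.default_rng(1)
voxel = 20e-9                                   # assumed voxel edge length (m)
pores = rng.random((100, 100, 100)) < 0.25      # synthetic binary image (True = pore)

porosity = pores.mean()

# specific surface area from pore/solid face transitions along each axis
img = pores.astype(np.uint8)
faces = sum(np.count_nonzero(np.diff(img, axis=ax)) for ax in range(3))
surface_per_bulk_volume = faces * voxel**2 / (pores.size * voxel**3)

# Kozeny-Carman estimate k = phi^3 / (c * S_v^2) with S_v per bulk volume, c ~ 5
k = porosity**3 / (5.0 * surface_per_bulk_volume**2)
print(f"porosity = {porosity:.2f}, k ~ {k:.2e} m^2")
```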
NASA Astrophysics Data System (ADS)
Bradac, Marusa; Coe, Dan; Strait, Victoria; Salmon, Brett; Hoag, Austin; Bradley, Larry; Ryan, Russell; Dawson, Will; Zitrin, Adi; Jones, Christine; Sharon, Keren; Trenti, Michele; Stark, Daniel; Oesch, Pascal; Lam, Danel; Carrasco Nunez, Daniela Patricia; Paterno-Mahler, Rachel; Frye, Brenda
2018-05-01
When did galaxies start forming stars? What is the role of distant galaxies in galaxy formation models and the epoch of reionization? Recent observations indicate at least two critical puzzles in these studies. (1) First galaxies might have started forming stars earlier than previously thought (<400 Myr after the Big Bang). (2) It is still unclear what their star formation history is and whether these galaxies can reionize the Universe. Accurate knowledge of stellar masses, ages, and star formation rates at this epoch requires measuring both rest-frame UV and optical light, which only Spitzer and HST can probe at z ~ 6-11 for a large enough sample of typical galaxies. To address this cosmic puzzle, we propose to complete deep Spitzer imaging of the fields behind the 10 most powerful cosmic telescopes selected using HST, Spitzer, and Planck data from the RELICS and SRELICS programs (Reionization Lensing Cluster Survey; 41 clusters, 190 HST orbits, 440 Spitzer hours). Six of the 10 clusters still lack deep data. This proposal will be a valuable Legacy complement to the existing IRAC deep surveys, and it will open up a new parameter space by probing the ordinary yet magnified population with much improved sample variance. The program will allow us to study the stellar properties of a large number (~60) of galaxies at z ~ 6-11. Deep Spitzer data will be crucial to unambiguously measure their stellar properties (age, SFR, M*). Finally, this proposal will establish the presence (or absence) of an unusually early-established stellar population, as was recently observed in MACS1149JD at z ~ 9. If confirmed in a larger sample, this result will require a paradigm shift in our understanding of the earliest star formation.
Simulations for the Development of Thermoelectric Measurements
NASA Astrophysics Data System (ADS)
Zabrocki, Knud; Ziolkowski, Pawel; Dasgupta, Titas; de Boor, Johannes; Müller, Eckhard
2013-07-01
In thermoelectricity, continuum theoretical equations are usually used for the calculation of the characteristics and performance of thermoelectric elements, modules or devices as a function of external parameters (material, geometry, temperatures, current, flow, load, etc.). An increasing number of commercial software packages aimed at applications, such as COMSOL and ANSYS, contain kernels with direct thermoelectric coupling. Application of these numerical tools also allows analysis of physical measurement conditions and can lead to specifically adapted methods for developing the special test equipment required for the determination of TE material and module properties. System-theoretical and simulation-based considerations of favorable geometries are taken into account to create draft sketches in the development of such measurement systems. Particular consideration is given to the development of transient measurement methods, which have great advantages over conventional static methods in terms of the measurement duration required. In this paper, the benefits of using numerical tools in designing measurement facilities are shown using two examples. The first is the determination of geometric correction factors in four-point probe measurement of electrical conductivity, whereas the second is focused on the so-called combined thermoelectric measurement (CTEM) system, in which all thermoelectric material properties (Seebeck coefficient, electrical and thermal conductivity, and Harman measurement of zT) are measured in a combined way. Here we especially want to highlight the measurement of thermal conductivity in a transient mode. Factors influencing the measurement results, such as coupling to the environment due to radiation, heat losses via the mounting of the probe head, and contact resistance between the sample and sample holder, are illustrated, analyzed, and discussed. By employing the results of the simulations, we have developed an improved sample head that allows for measurements over a larger temperature interval with enhanced accuracy.
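The first example above, geometric correction in four-point-probe measurements, can be illustrated with the standard finite-thickness correction for a collinear probe on a laterally infinite slab; this textbook factor is a placeholder for the numerically derived corrections discussed in the paper, and the voltage, current, and geometry values are assumptions.

```python
# Textbook illustration of a geometric correction in a collinear four-point-probe
# resistivity measurement. The finite-thickness factor assumes a laterally
# infinite slab and reduces to pi*t/ln(2) for thin samples.
import numpy as np

def resistivity_4pp(voltage, current, thickness, spacing):
    """Resistivity (ohm*m) from a collinear four-point probe on a finite-thickness slab."""
    t_s = thickness / spacing
    correction = np.log(np.sinh(t_s) / np.sinh(t_s / 2.0))  # -> ln(2) as t_s -> 0
    return np.pi * thickness * (voltage / current) / correction

rho = resistivity_4pp(voltage=1.2e-3, current=10e-3, thickness=2e-3, spacing=1.5e-3)
print(f"rho ~ {rho:.2e} ohm*m")
```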