7 CFR 201.42 - Small containers.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Small containers. 201.42 Section 201.42 Agriculture... REGULATIONS Sampling in the Administration of the Act § 201.42 Small containers. In sampling seed in small containers that it is not practical to sample as required in § 201.41, a portion of one unopened container or...
Design of portable ultraminiature flow cytometers for medical diagnostics
NASA Astrophysics Data System (ADS)
Leary, James F.
2018-02-01
Design of portable microfluidic flow/image cytometry devices for measurements in the field (e.g. initial medical diagnostics) requires careful design in terms of power requirements and weight to allow for realistic portability. True portability with high-throughput microfluidic systems also requires sampling systems without sheath hydrodynamic focusing, both to avoid the need for sheath fluid and to enable higher volumes of actual sample rather than sheath/sample combinations. Weight/power requirements dictate use of super-bright LEDs with top-hat excitation beam architectures and very small silicon photodiodes or nanophotonic sensors, both of which can be powered by small batteries. Signal-to-noise characteristics can be greatly improved by appropriately pulsing the LED excitation sources and sampling and subtracting noise in between excitation pulses. Microfluidic cytometry also requires judicious use of small sample volumes and appropriate statistical sampling by microfluidic cytometry or imaging for adequate statistical significance to permit real-time (typically in less than 15 minutes) initial medical decisions for patients in the field. This is not something conventional cytometry traditionally worries about, but it is very important for the development of small, portable microfluidic devices with small-volume throughputs. It also provides a more reasonable alternative to conventional tubes of blood when sampling geriatric and newborn patients, for whom a conventional peripheral blood draw can be problematic. Instead, one or two drops of blood obtained by pin-prick should be able to provide statistically meaningful results for use in making real-time medical decisions without the need for blood fractionation, which is not realistic in the doctor's office or field.
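The pulsed-excitation scheme described here (sample during the LED pulse, sample noise between pulses, subtract) can be sketched numerically; all signal and background levels below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pulses = 1000
true_signal = 5.0   # fluorescence counts during an LED pulse (invented units)
background = 20.0   # ambient/dark background present at all times
noise_sd = 2.0

# Reading during each LED pulse: signal + background + noise
on_readings = true_signal + background + rng.normal(0, noise_sd, n_pulses)
# Reading between pulses (LED off): background + noise only
off_readings = background + rng.normal(0, noise_sd, n_pulses)

# Subtracting the interleaved "off" samples removes the background
# estimate, leaving the fluorescence signal
recovered = on_readings.mean() - off_readings.mean()
print(round(recovered, 2))
```

Averaging over many pulse pairs is what shrinks the residual noise; a single on/off pair would leave the full per-sample noise in place.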
Portable imaging system method and apparatus
Freifeld, Barry M.; Kneafsley, Timothy J.; Pruess, Jacob; Tomutsa, Liviu; Reiter, Paul A.; deCastro, Ted M.
2006-07-25
An operator shielded X-ray imaging system has sufficiently low mass (less than 300 kg) and is compact enough to enable portability by reducing operator shielding requirements to a minimum shielded volume. The resultant shielded volume may require a relatively small mass of shielding in addition to the already integrally shielded X-ray source, intensifier, and detector. The system is suitable for portable imaging of well cores at remotely located well drilling sites. The system accommodates either small samples, or small cross-sectioned objects of unlimited length. By rotating samples relative to the imaging device, the information required for computer aided tomographic reconstruction may be obtained. By further translating the samples relative to the imaging system, fully three dimensional (3D) tomographic reconstructions may be obtained of samples having arbitrary length.
Microgravity Testing of a Surface Sampling System for Sample Return from Small Solar System Bodies
NASA Technical Reports Server (NTRS)
Franzen, M. A.; Preble, J.; Schoenoff, M.; Halona, K.; Long, T. E.; Park, T.; Sears, D. W. G.
2004-01-01
The return of samples from solar system bodies is becoming an essential element of solar system exploration. The recent National Research Council Solar System Exploration Decadal Survey identified six sample return missions as high priority missions: South-Aitken Basin Sample Return, Comet Surface Sample Return, Comet Surface Sample Return-sample from selected surface sites, Asteroid Lander/Rover/Sample Return, Comet Nucleus Sample Return-cold samples from depth, and Mars Sample Return [1] and the NASA Roadmap also includes sample return missions [2] . Sample collection methods that have been flown on robotic spacecraft to date return subgram quantities, but many scientific issues (like bulk composition, particle size distributions, petrology, chronology) require tens to hundreds of grams of sample. Many complex sample collection devices have been proposed, however, small robotic missions require simplicity. We present here the results of experiments done with a simple but innovative collection system for sample return from small solar system bodies.
Short-term memory for responses: the "choose-small" effect.
Fetterman, J G; MacEwen, D
1989-01-01
Pigeons' short-term memory for fixed-ratio requirements was assessed using a delayed symbolic matching-to-sample procedure. Different choices were reinforced after fixed-ratio 10 and fixed-ratio 40 requirements, and delays of 0, 5, or 20 s were sometimes placed between sample ratios and choice. All birds made disproportionate numbers of responses to the small-ratio choice alternative when delays were interposed between ratios and choice, and this bias increased as a function of delay. Preference for the small fixed-ratio alternative was also observed on "no-sample" trials, during which the choice alternatives were presented without a prior sample ratio. This "choose-small" bias is analogous to results obtained by Spetch and Wilkie (1983) with event duration as the discriminative stimulus. The choose-small bias was attenuated when the houselight was turned on during delays, but overall accuracy was not influenced systematically by the houselight manipulation. PMID:2584917
Study of sample drilling techniques for Mars sample return missions
NASA Technical Reports Server (NTRS)
Mitchell, D. C.; Harris, P. T.
1980-01-01
To demonstrate the feasibility of acquiring various surface samples for a Mars sample return mission, the following tasks were performed: (1) design of a Mars rover-mounted drill system capable of acquiring crystalline rock cores, prediction of performance, mass, and power requirements for various size systems, and the generation of engineering drawings; (2) performance of simulated permafrost coring tests using a residual Apollo lunar surface drill; (3) design of a rock breaker system which can be used to produce small samples of rock chips from rocks which are too large to return to Earth, but too small to be cored with the Rover-mounted drill; (4) design of sample containers for the selected regolith cores, rock cores, and small particulate or rock samples; and (5) design of sample handling and transfer techniques which will be required through all phases of sample acquisition, processing, and stowage on-board the Earth return vehicle. A preliminary design of a light-weight Rover-mounted sampling scoop was also developed.
Analysis of peptides using an integrated microchip HPLC-MS/MS system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirby, Brian J.; Chirica, Gabriela S.; Reichmuth, David S.
Hyphenated LC-MS techniques are quickly becoming the standard tool for proteomic analyses. For large homogeneous samples, bulk processing methods and capillary injection and separation techniques are suitable. However, for analysis of small or heterogeneous samples, techniques that can manipulate picoliter samples without dilution are required or samples will be lost or corrupted; further, static nanospray-type flowrates are required to maximize SNR. Microchip-level integration of sample injection with separation and mass spectrometry allows small-volume analytes to be processed on chip and immediately injected without dilution for analysis. An on-chip HPLC was fabricated using in situ polymerization of both fixed and mobile polymer monoliths. Integration of the chip with a nanospray MS emitter enables identification of peptides by the use of tandem MS. The chip is capable of analyzing very small sample volumes (< 200 pl) in short times (< 3 min).
Note: Small anaerobic chamber for optical spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chauvet, Adrien A. P., E-mail: adrien.chauvet@gmail.com; Chergui, Majed; Agarwal, Rachna
2015-10-15
The study of oxygen-sensitive biological samples requires effective control of the atmosphere in which they are housed. To this end, however, no commercial anaerobic chamber is adequate to enclose only the sample while remaining small enough to fit in a compact spectroscopic system with which analysis can be performed. Furthermore, spectroscopic analysis requires the probe beam to pass through the whole chamber, introducing a requirement for adequate windows. In response to these challenges, we present a 1 l anaerobic chamber that is suitable for broad-band spectroscopic analysis. This chamber has the advantage of (1) providing access, via a septum, to the sample and (2) allowing the sample position to be adjusted while keeping the chamber fixed and hermetic during the experiment.
Autonomous Sample Acquisition for Planetary and Small Body Explorations
NASA Technical Reports Server (NTRS)
Ghavimi, Ali R.; Serricchio, Frederick; Dolgin, Ben; Hadaegh, Fred Y.
2000-01-01
Robotic drilling and autonomous sample acquisition are considered key technology requirements for future planetary or small body exploration missions. Core sampling or subsurface drilling operations are envisioned to be performed from rovers or landers. These supporting platforms are inherently flexible and light, and can withstand only a limited amount of reaction forces and torques. This, together with the unknown properties of the sampled materials, makes the sampling operation tedious and quite challenging. This paper highlights recent advancements in sample acquisition control system design and development for the in situ scientific exploration of planetary and small body missions.
Minetti, Andrea; Riera-Montes, Margarita; Nackers, Fabienne; Roederer, Thomas; Koudika, Marie Hortense; Sekkenes, Johanne; Taconet, Aurore; Fermon, Florence; Touré, Albouhary; Grais, Rebecca F; Checchi, Francesco
2012-10-12
Estimation of vaccination coverage (VC) at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard error of VC and ICC estimates were increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.
PMID:23057445
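The bootstrap logic behind shrinking cluster sizes can be illustrated with a toy simulation; the 10 × 15 design is taken from the abstract, but the coverage value and all other numbers below are invented, not the Mali survey data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy survey: 10 clusters x 15 children, 1 = vaccinated (coverage ~85%)
clusters = rng.binomial(1, 0.85, size=(10, 15))

def coverage_se(per_cluster, n_boot=500):
    """Bootstrap SE of estimated coverage when only `per_cluster`
    children are resampled from each of the 10 surveyed clusters."""
    estimates = []
    for _ in range(n_boot):
        sub = np.array([rng.choice(c, size=per_cluster, replace=True)
                        for c in clusters])
        estimates.append(sub.mean())
    return float(np.std(estimates))

# The standard error grows as the within-cluster sample shrinks 15 -> 3
print(coverage_se(15), coverage_se(3))
```

This mirrors the paper's qualitative finding: the 10 × 3 design pays for its convenience with markedly less stable coverage estimates.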
76 FR 62044 - Alternative Testing Requirements for Small Batch Manufacturers
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-06
... every manufacturer of a children's product that is subject to a children's product safety rule shall submit sufficient samples of the children's product, or samples that are identical in all material... compliance with such children's product safety rule. Further, section 14(i)(2) requires continued testing of...
NASA Astrophysics Data System (ADS)
Lateh, Masitah Abdul; Kamilah Muda, Azah; Yusof, Zeratul Izzah Mohd; Azilah Muda, Noor; Sanusi Azmi, Mohd
2017-09-01
The emerging era of big data over the past few years has led to large and complex data that demand faster and better decision making. However, small dataset problems still arise in certain areas, where they make analysis and decisions hard to reach. In order to build a prediction model, a large sample is required as a training sample for the model. A small dataset is insufficient to produce an accurate prediction model. This paper reviews artificial data generation approaches as one solution to the small dataset problem.
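One simple artificial data generation scheme of the kind such reviews cover is noise-injection resampling: synthesize extra rows by perturbing resampled real rows. A minimal sketch with an invented 8-observation dataset (the noise scale is an assumption, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

# A small (hypothetical) training set: 8 observations, 2 features
small_x = rng.normal(size=(8, 2))

def virtual_samples(x, n_new, noise_scale=0.1):
    """Create n_new synthetic rows by resampling real rows with
    replacement and perturbing them with small Gaussian noise."""
    rows = x[rng.integers(0, len(x), n_new)]
    return rows + rng.normal(0, noise_scale, rows.shape)

# Augment the 8 real rows with 100 virtual ones for model training
augmented = np.vstack([small_x, virtual_samples(small_x, 100)])
print(augmented.shape)
```

The virtual rows add no genuinely new information, but they can regularize a model that would otherwise overfit 8 points.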
Sample size calculation for a proof of concept study.
Yin, Yin
2002-05-01
Sample size calculation is vital for a confirmatory clinical trial since the regulatory agencies require the probability of making Type I error to be significantly small, usually less than 0.05 or 0.025. However, the importance of the sample size calculation for studies conducted by a pharmaceutical company for internal decision making, e.g., a proof of concept (PoC) study, has not received enough attention. This article introduces a Bayesian method that identifies the information required for planning a PoC and the process of sample size calculation. The results will be presented in terms of the relationships between the regulatory requirements, the probability of reaching the regulatory requirements, the goalpost for PoC, and the sample size used for PoC.
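A common Bayesian ingredient in this kind of PoC planning is "assurance": the probability of meeting the regulatory significance threshold, averaged over a prior on the treatment effect. The sketch below is a generic illustration of that idea, not the author's specific method; the prior, effect scale, and thresholds are invented.

```python
import math
import numpy as np

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def assurance(n_per_arm, prior_mean=0.5, prior_sd=0.2,
              sigma=1.0, n_draws=20_000, seed=0):
    """P(one-sided p < 0.025 in a two-arm trial of size n_per_arm),
    averaged over a normal prior on the true effect size."""
    rng = np.random.default_rng(seed)
    effects = rng.normal(prior_mean, prior_sd, n_draws)
    se = sigma * math.sqrt(2.0 / n_per_arm)
    z_crit = 1.959964  # normal quantile for one-sided alpha = 0.025
    return float(np.mean([norm_cdf(d / se - z_crit) for d in effects]))

# Assurance rises with sample size, with diminishing returns
print(round(assurance(20), 3), round(assurance(80), 3))
```

Planning then amounts to picking the smallest n whose assurance meets the sponsor's internal decision bar, rather than a fixed power target.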
Gas-driven pump for ground-water samples
Signor, Donald C.
1978-01-01
Observation wells installed for artificial-recharge research and other wells used in different ground-water programs are frequently cased with small-diameter steel pipe. To obtain samples from these small-diameter wells in order to monitor water quality, and to calibrate solute-transport models, a small-diameter pump with unique operating characteristics is required that causes minimal alteration of samples during field sampling. A small-diameter gas-driven pump was designed and built to obtain water samples from wells of two-inch diameter or larger. The pump is a double-piston type with the following characteristics: (1) the water sample is isolated from the operating gas, (2) no source of electricity is necessary, (3) operation is continuous, (4) use of compressed gas is efficient, and (5) operation is reliable over extended periods of time. Principles of operation, actual operation techniques, gas-use analyses, and operating experience are described. Complete working drawings and a component list are included. Recent modifications and pump construction for high-pressure applications also are described. (Woodard-USGS)
Sampling procedures for throughfall monitoring: A simulation study
NASA Astrophysics Data System (ADS)
Zimmermann, Beate; Zimmermann, Alexander; Lark, Richard Murray; Elsenbeer, Helmut
2010-01-01
What is the most appropriate sampling scheme to estimate event-based average throughfall? A satisfactory answer to this seemingly simple question has yet to be found, a failure which we attribute to previous efforts' dependence on empirical studies. Here we try to answer this question by simulating stochastic throughfall fields based on parameters for statistical models of large monitoring data sets. We subsequently sampled these fields with different sampling designs and variable sample supports. We evaluated the performance of a particular sampling scheme with respect to the uncertainty of possible estimated means of throughfall volumes. Even for a relative error limit of 20%, an impractically large number of small, funnel-type collectors would be required to estimate mean throughfall, particularly for small events. While stratification of the target area is not superior to simple random sampling, cluster random sampling involves the risk of being less efficient. A larger sample support, e.g., the use of trough-type collectors, considerably reduces the necessary sample sizes and eliminates the sensitivity of the mean to outliers. Since the gain in time associated with the manual handling of troughs versus funnels depends on the local precipitation regime, the employment of automatically recording clusters of long troughs emerges as the most promising sampling scheme. Even so, a relative error of less than 5% appears out of reach for throughfall under heterogeneous canopies. We therefore suspect a considerable uncertainty of input parameters for interception models derived from measured throughfall, in particular, for those requiring data of small throughfall events.
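The core sample-size arithmetic behind the funnel result can be sketched with a synthetic skewed throughfall field; the lognormal parameters below are invented stand-ins for the fitted geostatistical models, and independent random sampling is assumed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical throughfall volumes (mm) at many candidate funnel spots:
# a skewed, heterogeneous field, as under a variable canopy
field = rng.lognormal(mean=1.0, sigma=0.8, size=100_000)

def collectors_needed(rel_error, z=1.96):
    """Funnels needed so the 95% CI half-width is rel_error * mean
    (simple random sampling, normal approximation)."""
    cv = field.std() / field.mean()
    return int(np.ceil((z * cv / rel_error) ** 2))

# Tightening the relative error from 20% to 5% multiplies the
# required number of funnels by roughly sixteen
print(collectors_needed(0.20), collectors_needed(0.05))
```

The quadratic dependence on the error target is why the paper finds a 5% relative error "out of reach" under heterogeneous canopies.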
Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel
2018-01-01
Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.
Ide, Jun'ichiro; Chiwa, Masaaki; Higashi, Naoko; Maruno, Ryoko; Mori, Yasushi; Otsuki, Kyoichi
2012-08-01
This study sought to determine the lowest number of storm events required for adequate estimation of annual nutrient loads from a forested watershed using the regression equation between cumulative load (∑L) and cumulative stream discharge (∑Q). Hydrological surveys were conducted for 4 years, and stream water was sampled sequentially at 15-60-min intervals during 24 h in 20 events, as well as weekly in a small forested watershed. The bootstrap sampling technique was used to determine the regression (∑L-∑Q) equations of dissolved nitrogen (DN) and phosphorus (DP), particulate nitrogen (PN) and phosphorus (PP), dissolved inorganic nitrogen (DIN), and suspended solid (SS) for each dataset of ∑L and ∑Q. For dissolved nutrients (DN, DP, DIN), the coefficient of variance (CV) in 100 replicates of 4-year average annual load estimates was below 20% with datasets composed of five storm events. For particulate nutrients (PN, PP, SS), the CV exceeded 20%, even with datasets composed of more than ten storm events. The differences in the number of storm events required for precise load estimates between dissolved and particulate nutrients were attributed to the goodness of fit of the ∑L-∑Q equations. Bootstrap simulation based on flow-stratified sampling resulted in fewer storm events than the simulation based on random sampling and showed that only three storm events were required to give a CV below 20% for dissolved nutrients. These results indicate that a sampling design considering discharge levels reduces the frequency of laborious chemical analyses of water samples required throughout the year.
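The ∑L-∑Q rating approach can be sketched as a linear fit of cumulative load on cumulative discharge over a few sampled storm events; all discharge, load, and annual-total values below are invented, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic storm events: discharge Q (mm) and event load L (kg),
# with load roughly proportional to discharge plus multiplicative noise
q_events = rng.uniform(5, 50, 20)
l_events = 0.4 * q_events * rng.lognormal(0, 0.1, 20)

# Regression of cumulative load on cumulative discharge, using only
# the first 5 sampled events (as in the reduced sampling designs)
cq, cl = np.cumsum(q_events[:5]), np.cumsum(l_events[:5])
slope, intercept = np.polyfit(cq, cl, 1)

# Annual load extrapolated from a hypothetical annual discharge total
annual_q = 600.0
print(round(slope * annual_q + intercept, 1))
```

For well-behaved (dissolved-nutrient-like) loads the ∑L-∑Q relation is nearly linear, which is why a handful of well-chosen events can suffice; noisier particulate loads break this linearity.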
16 CFR 1631.34 - Small carpets and rugs not meeting acceptance criterion.
Code of Federal Regulations, 2011 CFR
2011-01-01
... proximity to the edge of the small carpet or rug. (b) Such cautionary statements shall also appear in a... or effect the sale of small carpets or rugs shall be labeled with the information required by this... samples, swatches, or specimens used to promote or effect the sale thereof, which is not in compliance...
16 CFR 1631.34 - Small carpets and rugs not meeting acceptance criterion.
Code of Federal Regulations, 2014 CFR
2014-01-01
... proximity to the edge of the small carpet or rug. (b) Such cautionary statements shall also appear in a... or effect the sale of small carpets or rugs shall be labeled with the information required by this... samples, swatches, or specimens used to promote or effect the sale thereof, which is not in compliance...
16 CFR § 1631.34 - Small carpets and rugs not meeting acceptance criterion.
Code of Federal Regulations, 2013 CFR
2013-01-01
... proximity to the edge of the small carpet or rug. (b) Such cautionary statements shall also appear in a... or effect the sale of small carpets or rugs shall be labeled with the information required by this... samples, swatches, or specimens used to promote or effect the sale thereof, which is not in compliance...
16 CFR 1631.34 - Small carpets and rugs not meeting acceptance criterion.
Code of Federal Regulations, 2012 CFR
2012-01-01
... proximity to the edge of the small carpet or rug. (b) Such cautionary statements shall also appear in a... or effect the sale of small carpets or rugs shall be labeled with the information required by this... samples, swatches, or specimens used to promote or effect the sale thereof, which is not in compliance...
Treating Sample Covariances for Use in Strongly Coupled Atmosphere-Ocean Data Assimilation
NASA Astrophysics Data System (ADS)
Smith, Polly J.; Lawless, Amos S.; Nichols, Nancy K.
2018-01-01
Strongly coupled data assimilation requires cross-domain forecast error covariances; information from ensembles can be used, but limited sampling means that ensemble derived error covariances are routinely rank deficient and/or ill-conditioned and marred by noise. Thus, they require modification before they can be incorporated into a standard assimilation framework. Here we compare methods for improving the rank and conditioning of multivariate sample error covariance matrices for coupled atmosphere-ocean data assimilation. The first method, reconditioning, alters the matrix eigenvalues directly; this preserves the correlation structures but does not remove sampling noise. We show that it is better to recondition the correlation matrix rather than the covariance matrix as this prevents small but dynamically important modes from being lost. The second method, model state-space localization via the Schur product, effectively removes sample noise but can dampen small cross-correlation signals. A combination that exploits the merits of each is found to offer an effective alternative.
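Both fixes compared here can be sketched in a few lines of linear algebra on a toy ensemble; the localization taper, its length scale, and the target condition number are invented assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sample covariance from a tiny ensemble: rank-deficient and noisy
n_state, n_members = 10, 5
ensemble = rng.normal(size=(n_members, n_state))
cov = np.cov(ensemble, rowvar=False)  # rank is at most n_members - 1

# Method 1: reconditioning -- raise the smallest eigenvalues to a
# floor chosen (arbitrarily here) to give a condition number of 100
eigvals, eigvecs = np.linalg.eigh(cov)
floor = eigvals.max() / 100.0
recond = eigvecs @ np.diag(np.maximum(eigvals, floor)) @ eigvecs.T

# Method 2: localization via the Schur (element-wise) product with a
# taper that damps spurious long-range correlations (invented scale)
idx = np.arange(n_state)
taper = np.exp(-(np.subtract.outer(idx, idx) / 3.0) ** 2)
localized = cov * taper

print(round(np.linalg.cond(recond), 1))  # ~100 by construction
```

The trade-off in the abstract is visible here: reconditioning leaves every entry's correlation structure intact but keeps the sampling noise, while the taper wipes out distant noise and any genuine distant signal with it.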
Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F
2015-01-01
Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.
Design of point-of-care (POC) microfluidic medical diagnostic devices
NASA Astrophysics Data System (ADS)
Leary, James F.
2018-02-01
Design of inexpensive and portable hand-held microfluidic flow/image cytometry devices for initial medical diagnostics at the point of initial patient contact by emergency medical personnel in the field requires careful design in terms of power/weight requirements to allow for realistic portability as a hand-held, point-of-care medical diagnostics device. True portability also requires small micro-pumps for high-throughput capability. Weight/power requirements dictate use of super-bright LEDs and very small silicon photodiodes or nanophotonic sensors that can be powered by batteries. Signal-to-noise characteristics can be greatly improved by appropriately pulsing the LED excitation sources and sampling and subtracting noise in between excitation pulses. The requirements for basic computing, imaging, GPS and basic telecommunications can be simultaneously met by use of smartphone technologies, which become part of the overall device. Software for a user-interface system, limited real-time computing, real-time imaging, and offline data analysis can be accomplished through multi-platform software development systems that are well-suited to a variety of currently available cellphone technologies which already contain all of these capabilities. Microfluidic cytometry requires judicious use of small sample volumes and appropriate statistical sampling by microfluidic cytometry or imaging for adequate statistical significance to permit real-time (typically < 15 minutes) medical decisions for patients at the physician's office or real-time decision making in the field. One or two drops of blood obtained by pin-prick should be able to provide statistically meaningful results for use in making real-time medical decisions without the need for blood fractionation, which is not realistic in the field.
Strelka: accurate somatic small-variant calling from sequenced tumor-normal sample pairs.
Saunders, Christopher T; Wong, Wendy S W; Swamy, Sajani; Becq, Jennifer; Murray, Lisa J; Cheetham, R Keira
2012-07-15
Whole genome and exome sequencing of matched tumor-normal sample pairs is becoming routine in cancer research. The consequent increased demand for somatic variant analysis of paired samples requires methods specialized to model this problem so as to sensitively call variants at any practical level of tumor impurity. We describe Strelka, a method for somatic SNV and small indel detection from sequencing data of matched tumor-normal samples. The method uses a novel Bayesian approach which represents continuous allele frequencies for both tumor and normal samples, while leveraging the expected genotype structure of the normal. This is achieved by representing the normal sample as a mixture of germline variation with noise, and representing the tumor sample as a mixture of the normal sample with somatic variation. A natural consequence of the model structure is that sensitivity can be maintained at high tumor impurity without requiring purity estimates. We demonstrate that the method has superior accuracy and sensitivity on impure samples compared with approaches based on either diploid genotype likelihoods or general allele-frequency tests. The Strelka workflow source code is available at ftp://strelka@ftp.illumina.com/. csaunders@illumina.com
Breaking Free of Sample Size Dogma to Perform Innovative Translational Research
Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.
2011-01-01
Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197
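The diminishing-marginal-returns point can be made concrete with a standard normal-approximation power curve (a generic textbook formula, not the authors' statistical work; the effect size and alpha are invented):

```python
import math

def power(n_per_arm, effect=0.5, alpha_z=1.644854):
    """Approximate power of a one-sided two-arm z-test; alpha_z is
    the normal quantile for one-sided alpha = 0.05."""
    z = effect * math.sqrt(n_per_arm / 2.0) - alpha_z
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Each doubling of the per-arm sample size buys less extra power
gains = [round(power(2 * n) - power(n), 3) for n in (32, 64, 128)]
print(gains)
```

Past the steep part of the curve, doubling the sample roughly doubles the cost while adding only a sliver of power, which is the marginal-returns argument in the abstract.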
Adaption of G-TAG Software for Validating Touch and Go Asteroid Sample Return Design Methodology
NASA Technical Reports Server (NTRS)
Blackmore, Lars James C.; Acikmese, Behcet; Mandic, Milan
2012-01-01
A software tool is used to demonstrate the feasibility of Touch and Go (TAG) sampling for Asteroid Sample Return missions. TAG is a concept whereby a spacecraft is in contact with the surface of a small body, such as a comet or asteroid, for a few seconds or less before ascending to a safe location away from the small body. Previous work at JPL developed the G-TAG simulation tool, which provides a software environment for fast, multi-body simulations of the TAG event. G-TAG is described in Multibody Simulation Software Testbed for Small-Body Exploration and Sampling, (NPO-47196) NASA Tech Briefs, Vol. 35, No. 11 (November 2011), p.54. This current innovation adapts this tool to a mission that intends to return a sample from the surface of an asteroid. In order to demonstrate the feasibility of the TAG concept, the new software tool was used to generate extensive simulations that demonstrate the designed spacecraft meets key requirements. These requirements state that contact force and duration must be sufficient to ensure that enough material from the surface is collected in the brushwheel sampler (BWS), and that the spacecraft must survive the contact and must be able to recover and ascend to a safe position, and maintain velocity and orientation after the contact.
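The kind of requirement check described (sufficient contact force and duration, followed by a clean ascent) can be mimicked with a one-dimensional spring-damper contact sketch; this is not the G-TAG multibody model, and every parameter below is invented.

```python
# 1-D Touch-and-Go contact sketch: the spacecraft approaches at v0,
# the surface acts as a spring-damper while penetrated (x < 0), and
# we record contact duration and peak force (all values invented)
m, k, c = 500.0, 2.0e4, 1.5e3   # mass (kg), stiffness, damping
v0, dt = -0.1, 1e-4             # approach speed (m/s), time step (s)

x, v = 0.0, v0
peak_force, contact_time = 0.0, 0.0
while True:
    f = -k * x - c * v if x < 0 else 0.0  # surface force during contact
    v += (f / m) * dt                     # explicit Euler integration
    x += v * dt
    if x < 0:
        contact_time += dt
        peak_force = max(peak_force, f)
    if x >= 0 and v > 0:  # rebounded clear of the surface: ascent
        break

print(round(contact_time, 3), round(peak_force, 1))
```

A requirements check in this toy setting would assert that `contact_time` and `peak_force` exceed the sampler's collection thresholds and that the loop exits (the spacecraft ascends) rather than sticking to the surface.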
Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods
2016-11-01
ABBREVIATIONS AICc Akaike's Information Criterion with small sample size correction AZGFD Arizona Game and Fish Department BMGR Barry M. Goldwater...MNKA Minimum Number Known Alive N Abundance Ne Effective Population Size NGS Noninvasive Genetic Sampling NGS-CR Noninvasive Genetic...parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities, and low capture biases. For NGS-CR, sample
ERIC Educational Resources Information Center
Chromy, James R.
This study addressed statistical techniques that might ameliorate some of the sampling problems currently facing states with small populations participating in State National Assessment of Educational Progress (NAEP) assessments. The study explored how the application of finite population correction factors to the between-school component of…
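The finite population correction (FPC) explored in the study can be sketched numerically. The following is a minimal illustration, assuming a simple two-stage mean with invented variance components and school counts (none of these numbers are NAEP values): applying the FPC shrinks the between-school variance term when a small state samples most of its schools.

```python
import math

def fpc_adjusted_se(s2_between, s2_within, n_schools, N_schools,
                    n_students, N_students):
    """Standard error of a mean under two-stage sampling, with a finite
    population correction applied to each variance component.
    All names and numbers here are illustrative, not NAEP values."""
    fpc_between = 1 - n_schools / N_schools     # school-level FPC
    fpc_within = 1 - n_students / N_students    # student-level FPC
    var = (fpc_between * s2_between / n_schools
           + fpc_within * s2_within / (n_schools * n_students))
    return math.sqrt(var)

# A small state: 60 of its 80 schools sampled -> large FPC benefit
se_small = fpc_adjusted_se(s2_between=100.0, s2_within=400.0,
                           n_schools=60, N_schools=80,
                           n_students=25, N_students=500)
# Same design, ignoring the FPC (infinite-population assumption)
se_naive = fpc_adjusted_se(s2_between=100.0, s2_within=400.0,
                           n_schools=60, N_schools=10**9,
                           n_students=25, N_students=10**9)
print(se_small < se_naive)  # True: the FPC shrinks the estimated SE
```

When most schools in a small jurisdiction are already in the sample, the between-school component nearly vanishes, which is exactly why the correction matters for small-population states.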
Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro
2018-05-09
Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with currently popular methods: naïve Bayes, support vector machines, neural networks, logistic regression, and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.
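For context, the baseline family the study compares against can be sketched. This is a generic multinomial naive Bayes spam classifier with Laplace smoothing, run on toy documents; the paper's cognitive-bias models are not reproduced here.

```python
from collections import Counter
import math

def train_nb(docs, labels, alpha=1.0):
    """Multinomial naive Bayes with Laplace smoothing: a standard spam
    baseline (generic sketch, not the study's implementation)."""
    vocab = {w for d in docs for w in d.split()}
    counts = {c: Counter() for c in set(labels)}
    priors = Counter(labels)
    for d, c in zip(docs, labels):
        counts[c].update(d.split())
    def predict(doc):
        scores = {}
        for c in counts:
            total = sum(counts[c].values())
            s = math.log(priors[c] / len(docs))
            for w in doc.split():
                # smoothed per-class word likelihood
                s += math.log((counts[c][w] + alpha) /
                              (total + alpha * len(vocab)))
            scores[c] = s
        return max(scores, key=scores.get)
    return predict

predict = train_nb(
    ["win cash now", "free prize win", "meeting at noon", "lunch at noon"],
    ["spam", "spam", "ham", "ham"])
print(predict("win free cash"))  # -> spam
```

With only two documents per class, such baselines are fragile, which is the gap the study's bias-informed models target.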
Evaluation of the Biological Sampling Kit (BiSKit) for Large-Area Surface Sampling
Buttner, Mark P.; Cruz, Patricia; Stetzenbach, Linda D.; Klima-Comba, Amy K.; Stevens, Vanessa L.; Emanuel, Peter A.
2004-01-01
Current surface sampling methods for microbial contaminants are designed to sample small areas and utilize culture analysis. The total number of microbes recovered is low because a small area is sampled, making detection of a potential pathogen more difficult. Furthermore, sampling of small areas requires a greater number of samples to be collected, which delays the reporting of results, taxes laboratory resources and staffing, and increases analysis costs. A new biological surface sampling method, the Biological Sampling Kit (BiSKit), designed to sample large areas and to be compatible with testing with a variety of technologies, including PCR and immunoassay, was evaluated and compared to other surface sampling strategies. In experimental room trials, wood laminate and metal surfaces were contaminated by aerosolization of Bacillus atrophaeus spores, a simulant for Bacillus anthracis, into the room, followed by settling of the spores onto the test surfaces. The surfaces were sampled with the BiSKit, a cotton-based swab, and a foam-based swab. Samples were analyzed by culturing, quantitative PCR, and immunological assays. The results showed that the large surface area (1 m²) sampled with the BiSKit resulted in concentrations of B. atrophaeus in samples that were up to 10-fold higher than the concentrations obtained with the other methods tested. A comparison of wet and dry sampling with the BiSKit indicated that dry sampling was more efficient (efficiency, 18.4%) than wet sampling (efficiency, 11.3%). The sensitivities of detection of B. atrophaeus on metal surfaces were 42 ± 5.8 CFU/m² for wet sampling and 100.5 ± 10.2 CFU/m² for dry sampling. These results demonstrate that the use of a sampling device capable of sampling larger areas results in higher sensitivity than that obtained with currently available methods and has the advantage of sampling larger areas, thus requiring collection of fewer samples per site. PMID:15574898
High-resolution magnetic resonance spectroscopy using a solid-state spin sensor
NASA Astrophysics Data System (ADS)
Glenn, David R.; Bucher, Dominik B.; Lee, Junghyun; Lukin, Mikhail D.; Park, Hongkun; Walsworth, Ronald L.
2018-03-01
Quantum systems that consist of solid-state electronic spins can be sensitive detectors of nuclear magnetic resonance (NMR) signals, particularly from very small samples. For example, nitrogen–vacancy centres in diamond have been used to record NMR signals from nanometre-scale samples, with sensitivity sufficient to detect the magnetic field produced by a single protein. However, the best reported spectral resolution for NMR of molecules using nitrogen–vacancy centres is about 100 hertz. This is insufficient to resolve the key spectral identifiers of molecular structure that are critical to NMR applications in chemistry, structural biology and materials research, such as scalar couplings (which require a resolution of less than ten hertz) and small chemical shifts (which require a resolution of around one part per million of the nuclear Larmor frequency). Conventional, inductively detected NMR can provide the necessary high spectral resolution, but its limited sensitivity typically requires millimetre-scale samples, precluding applications that involve smaller samples, such as picolitre-volume chemical analysis or correlated optical and NMR microscopy. Here we demonstrate a measurement technique that uses a solid-state spin sensor (a magnetometer) consisting of an ensemble of nitrogen–vacancy centres in combination with a narrowband synchronized readout protocol to obtain NMR spectral resolution of about one hertz. We use this technique to observe NMR scalar couplings in a micrometre-scale sample volume of approximately ten picolitres. We also use the ensemble of nitrogen–vacancy centres to apply NMR to thermally polarized nuclear spins and resolve chemical-shift spectra from small molecules. Our technique enables analytical NMR spectroscopy at the scale of single cells.
Accelerator mass spectrometry of small biological samples.
Salehpour, Mehran; Forsgard, Niklas; Possnert, Göran
2008-12-01
Accelerator mass spectrometry (AMS) is an ultra-sensitive technique for isotopic ratio measurements. In the biomedical field, AMS can be used to measure femtomolar concentrations of labeled drugs in body fluids, with direct applications in early drug development such as microdosing. Likewise, the regenerative properties of cells, which are of fundamental significance in stem-cell research, can be determined with an accuracy of a few years by AMS analysis of human DNA. However, AMS nominally requires about 1 mg of carbon per sample, which is not always available when dealing with specific body substances such as localized, organ-specific DNA samples. Consequently, it is of analytical interest to develop methods for the routine analysis of small samples in the range of a few tens of micrograms. We have used a 5 MV Pelletron tandem accelerator to study small biological samples using AMS. Different methods are presented and compared. A ¹²C-carrier sample preparation method is described which is potentially more sensitive and less susceptible to contamination than the standard procedures.
Estimating accuracy of land-cover composition from two-stage cluster sampling
Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.
2009-01-01
Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.
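For illustration, the four accuracy summaries can be computed in their simplest unweighted form. This sketch uses invented map and reference proportions; the paper's design-based estimators for two-stage cluster sampling additionally carry survey weights not shown here.

```python
import math

def composition_accuracy(map_props, ref_props):
    """Naive (equal-weight) versions of the four accuracy summaries:
    MD, MAD, RMSE, and CORR between mapped and reference land-cover
    proportions. Illustrative only; the design-based two-stage
    estimators in the paper weight units by their inclusion design."""
    n = len(map_props)
    devs = [m - r for m, r in zip(map_props, ref_props)]
    md = sum(devs) / n                           # mean deviation (bias)
    mad = sum(abs(d) for d in devs) / n          # mean absolute deviation
    rmse = math.sqrt(sum(d * d for d in devs) / n)
    mean_m = sum(map_props) / n
    mean_r = sum(ref_props) / n
    cov = sum((m - mean_m) * (r - mean_r)
              for m, r in zip(map_props, ref_props))
    var_m = sum((m - mean_m) ** 2 for m in map_props)
    var_r = sum((r - mean_r) ** 2 for r in ref_props)
    corr = cov / math.sqrt(var_m * var_r)
    return md, mad, rmse, corr

# Hypothetical composition for three classes in one map unit
md, mad, rmse, corr = composition_accuracy(
    [0.20, 0.55, 0.25], [0.25, 0.50, 0.25])
```

MD captures systematic over- or under-mapping, while MAD and RMSE capture the magnitude of the per-class errors regardless of sign.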
Lizoňová, Zuzana; Horsák, Michal
2017-04-01
Ecological studies of peatland testate amoebae are generally based on totals of 150 individuals per sample. However, the suitability of this standard has never been assessed for alkaline habitats such as spring fens. We explored the differences in testate amoeba diversity between Sphagnum and brown-moss microhabitats at a mire site with a highly diversified moss layer which reflects the small-scale heterogeneity in groundwater chemistry. Relationships between sampling efficiency and sample completeness were explored using individual-based species accumulation curves and the effort required to gain an extra species was assessed. Testate amoeba diversity differed substantially between microhabitats, with brown mosses hosting on average twice as many species and requiring greater shell totals to reach comparable sample analysis efficiency as for Sphagnum. Thus, for samples from alkaline conditions an increase in shell totals would be required and even an overall doubling up to 300 individuals might be considered for reliable community description. Our small-scale data are likely not robust enough to provide an ultimate solution for the optimization of shell totals. However, the results proved that testate amoebae communities from acidic and alkaline environments differ sharply in both species richness and composition and they might call for different methodological approaches. Copyright © 2017 Elsevier GmbH. All rights reserved.
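The individual-based species accumulation curves used in such analyses can be sketched by repeated random subsampling of the pooled individuals. The community counts below are hypothetical, and real analyses typically use an analytic rarefaction formula rather than this brute-force version.

```python
import random

def accumulation_curve(community, totals, reps=200, seed=1):
    """Individual-based species accumulation: expected species count at
    each shell total, estimated by repeatedly drawing random subsamples
    from the pooled individuals. `community` maps species -> count.
    (Sketch only; analytic rarefaction is the usual approach.)"""
    rng = random.Random(seed)
    pool = [sp for sp, n in community.items() for _ in range(n)]
    curve = {}
    for t in totals:
        hits = 0
        for _ in range(reps):
            hits += len(set(rng.sample(pool, t)))
        curve[t] = hits / reps
    return curve

# Hypothetical brown-moss sample: many species, several of them rare
brown_moss = {f"sp{i}": c for i, c in enumerate(
    [60, 40, 30, 20, 15, 10, 8, 6, 4, 3, 2, 1, 1])}
curve = accumulation_curve(brown_moss, totals=[50, 150, 200])
print(curve[150] < curve[200])  # still accumulating past 150 shells
```

A count standard of 150 individuals is adequate only where the curve has flattened by then; for richer brown-moss communities it has not, which is the argument for raising shell totals toward 300.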
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-11
...EPA is issuing a direct final rule to amend the diesel sulfur regulations to allow refiners, importers, distributors, and retailers of highway diesel fuel the option to use an alternative affirmative defense if the Agency finds highway diesel fuel samples above the specified sulfur standard at retail facilities. This alternative defense consists of a comprehensive program of quality assurance sampling and testing that would cover all participating companies that produce and/or distribute highway diesel fuel if certain other conditions are met. The sampling and testing program would be carried out by an independent surveyor. The program would be conducted pursuant to a survey plan approved by EPA that is designed to achieve the same objectives as the current regulatory quality assurance requirement. This rule also amends the gasoline benzene regulations to allow disqualified small refiners the same opportunity to generate gasoline benzene credits as that afforded to non-small refiners.
Apparatus for Measuring Total Emissivity of Small, Low-Emissivity Samples
NASA Technical Reports Server (NTRS)
Tuttle, James; DiPirro, Michael J.
2011-01-01
An apparatus was developed for measuring total emissivity of small, lightweight, low-emissivity samples at low temperatures. The entire apparatus fits inside a small laboratory cryostat. Sample installation and removal are relatively quick, allowing for faster testing. The small chamber surrounding the sample is lined with black-painted aluminum honeycomb, which simplifies data analysis. This results in the sample viewing a very high-emissivity surface on all sides, an effect which would normally require a much larger chamber volume. The sample and chamber temperatures are individually controlled using off-the-shelf PID (proportional integral derivative) controllers, allowing flexibility in the test conditions. The chamber can be controlled at a higher temperature than the sample, allowing a direct absorptivity measurement. The lightweight sample is suspended by its heater and thermometer leads from an isothermal bar external to the chamber. The wires run out of the chamber through small holes in its corners, and the wires do not contact the chamber itself. During a steady-state measurement, the thermometer and bar are individually controlled at the same temperature, so there is zero heat flow through the wires. Thus, all of the sample-temperature-control heater power is radiated to the chamber. Double-aluminized Kapton (DAK) emissivity was studied down to 10 K, which was about 25 K colder than any previously reported measurements. This verified a minimum in the emissivity at about 35 K and a rise as the temperature dropped to lower values.
Cocco, Arturo; Serra, Giuseppe; Lentini, Andrea; Deliperi, Salvatore; Delrio, Gavino
2015-09-01
The within- and between-plant distribution of the tomato leafminer, Tuta absoluta (Meyrick), was investigated in order to define action thresholds based on leaf infestation and to propose enumerative and binomial sequential sampling plans for pest management applications in protected crops. The pest spatial distribution was aggregated between plants, and median leaves were the most suitable sample to evaluate the pest density. Action thresholds of 36 and 48%, 43 and 56% and 60 and 73% infested leaves, corresponding to economic thresholds of 1 and 3% damaged fruits, were defined for tomato cultivars with big, medium and small fruits respectively. Green's method was a more suitable enumerative sampling plan as it required a lower sampling effort. Binomial sampling plans needed lower average sample sizes than enumerative plans to make a treatment decision, with probabilities of error of <0.10. The enumerative sampling plan required 87 or 343 leaves to estimate the population density in extensive or intensive ecological studies respectively. Binomial plans would be more practical and efficient for control purposes, needing average sample sizes of 17, 20 and 14 leaves to take a pest management decision in order to avoid fruit damage higher than 1% in cultivars with big, medium and small fruits respectively. © 2014 Society of Chemical Industry.
Improving small-angle X-ray scattering data for structural analyses of the RNA world
Rambo, Robert P.; Tainer, John A.
2010-01-01
Defining the shape, conformation, or assembly state of an RNA in solution often requires multiple investigative tools ranging from nucleotide analog interference mapping to X-ray crystallography. A key addition to this toolbox is small-angle X-ray scattering (SAXS). SAXS provides direct structural information regarding the size, shape, and flexibility of the particle in solution and has proven powerful for analyses of RNA structures with minimal requirements for sample concentration and volumes. In principle, SAXS can provide reliable data on small and large RNA molecules. In practice, SAXS investigations of RNA samples can show inconsistencies that suggest limitations in the SAXS experimental analyses or problems with the samples. Here, we show through investigations on the SAM-I riboswitch, the Group I intron P4-P6 domain, 30S ribosomal subunit from Sulfolobus solfataricus (30S), brome mosaic virus tRNA-like structure (BMV TLS), Thermotoga maritima asd lysine riboswitch, the recombinant tRNAval, and yeast tRNAphe that many problems with SAXS experiments on RNA samples derive from heterogeneity of the folded RNA. Furthermore, we propose and test a general approach to reducing these sample limitations for accurate SAXS analyses of RNA. Together our method and results show that SAXS with synchrotron radiation has great potential to provide accurate RNA shapes, conformations, and assembly states in solution that inform RNA biological functions in fundamental ways. PMID:20106957
LACIE large area acreage estimation. [United States of America
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Feiveson, A. H. (Principal Investigator)
1979-01-01
A sample-based wheat acreage estimate for a large area is obtained by multiplying its small-grains acreage estimate, as computed by the classification and mensuration subsystem, by the best available ratio of wheat to small-grains acreage obtained from historical data. In the United States, as in other countries with detailed historical data, an additional level of aggregation was required because sample allocation was made at the substratum level. The essential features of the estimation procedure for LACIE countries are included, along with procedures for estimating wheat acreage in the United States.
Subattomole sensitivity in biological accelerator mass spectrometry.
Salehpour, Mehran; Possnert, Göran; Bryhni, Helge
2008-05-15
The Uppsala University 5 MV Pelletron tandem accelerator has been used to study ¹⁴C-labeled biological samples utilizing accelerator mass spectrometry (AMS) technology. We have adapted a sample preparation method for small biological samples down to a few tens of micrograms of carbon, involving, among other changes, miniaturization of the graphitization reactor. Standard AMS requires about 1 mg of carbon, with a limit of quantitation of about 10 amol. Results are presented for a range of small sample sizes with concentrations down to below 1 pM of a pharmaceutical substance in human blood. It is shown that ¹⁴C-labeled molecular markers can be routinely measured from the femtomole range down to a few hundred zeptomoles (10⁻²¹ mol), without the use of any additional separation methods.
NASA Astrophysics Data System (ADS)
Johnsen, Elin; Leknes, Siri; Wilson, Steven Ray; Lundanes, Elsa
2015-03-01
Neurons communicate via chemical signals called neurotransmitters (NTs). The numerous identified NTs can have very different physicochemical properties (solubility, charge, size, etc.), so quantification of the various NT classes traditionally requires several analytical platforms/methodologies. We here report that a diverse range of NTs, e.g. the peptides oxytocin and vasopressin, the monoamines adrenaline and serotonin, and the amino acid GABA, can be simultaneously identified/measured in small samples, using an analytical platform based on liquid chromatography and high-resolution mass spectrometry (LC-MS). The automated platform is cost-efficient as manual sample preparation steps and one-time-use equipment are kept to a minimum. Zwitterionic HILIC stationary phases were used for both on-line solid phase extraction (SPE) and liquid chromatography (capillary format, cLC). This approach enabled compounds from all NT classes to elute in small volumes producing sharp and symmetric signals, and allowing precise quantifications of small samples, demonstrated with whole blood (100 microliters per sample). An additional robustness-enhancing feature is automatic filtration/filter back-flushing (AFFL), allowing hundreds of samples to be analyzed without any parts needing replacement. The platform can be installed by simple modification of a conventional LC-MS system.
NASA Astrophysics Data System (ADS)
Smith, Suzanne; Sewart, Rene; Land, Kevin; Roux, Pieter; Gärtner, Claudia; Becker, Holger
2016-03-01
Lab-on-a-chip devices are often applied to point-of-care diagnostic solutions as they are low-cost, compact, disposable, and require only small sample volumes. For such devices, various reagents are required for sample preparation and analysis and, for an integrated solution to be realized, on-chip reagent storage and automated introduction are required. This work describes the implementation and characterization of effective liquid reagent storage and release mechanisms utilizing blister pouches applied to various point-of-care diagnostic device applications. The manufacturing aspects as well as performance parameters are evaluated.
An internal pilot design for prospective cancer screening trials with unknown disease prevalence.
Brinton, John T; Ringham, Brandy M; Glueck, Deborah H
2015-10-13
For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I error rate inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves the goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample size.
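The re-estimation step of an internal pilot design can be sketched as follows. This is an illustrative calculation, not the authors' exact procedure: the interim data update the prevalence and variance estimates, and the required total N is recomputed from standard normal quantiles, inflated by prevalence because only diseased subjects contribute events.

```python
from math import ceil
from statistics import NormalDist

def reestimated_n(pilot_prevalence, pilot_var, delta,
                  alpha=0.05, power=0.90):
    """Internal-pilot sample size re-estimation (hypothetical sketch):
    N needed to detect a difference `delta` in an accuracy statistic,
    scaled up by the disease prevalence."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided alpha
    z_b = NormalDist().inv_cdf(power)
    n_diseased = ((z_a + z_b) ** 2 * pilot_var) / delta ** 2
    return ceil(n_diseased / pilot_prevalence)

# Planning guess: prevalence 0.10; interim data suggest 0.05,
# so the re-estimated total N roughly doubles.
n_planned = reestimated_n(0.10, pilot_var=0.25, delta=0.15)
n_updated = reestimated_n(0.05, pilot_var=0.25, delta=0.15)
print(n_planned, n_updated)
```

The example shows why an unknown prevalence is so consequential: halving the prevalence estimate doubles the accrual target, which a fixed design would simply miss.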
Exploratory Factor Analysis with Small Sample Sizes
ERIC Educational Resources Information Center
de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.
2009-01-01
Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…
Meta-analysis of genome-wide association from genomic prediction models
USDA-ARS?s Scientific Manuscript database
A limitation of many genome-wide association studies (GWA) in animal breeding is that there are many loci with small effect sizes; thus, larger sample sizes (N) are required to guarantee suitable power of detection. To increase sample size, results from different GWA can be combined in a meta-analysis…
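A standard way to combine per-SNP results across studies, and one plausible reading of the approach described, is inverse-variance fixed-effect meta-analysis; the sketch below is generic and not taken from the manuscript.

```python
import math

def fixed_effect_meta(betas, ses):
    """Inverse-variance fixed-effect meta-analysis of per-study SNP
    effect estimates (generic sketch, not the manuscript's model).
    Each study is weighted by the reciprocal of its squared SE."""
    weights = [1 / se ** 2 for se in ses]
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    z = beta / se
    return beta, se, z

# Two underpowered studies reporting a consistent small effect
beta, se, z = fixed_effect_meta([0.10, 0.14], [0.06, 0.08])
print(round(z, 2))  # pooled z exceeds either single-study z
```

Pooling increases the effective N, so a locus with a small effect that neither study can detect alone can reach significance in the combined analysis.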
Method to determine ²²⁶Ra in small sediment samples by ultralow background liquid scintillation.
Sanchez-Cabeza, Joan-Albert; Kwong, Laval Liong Wee; Betti, Maria
2010-08-15
²¹⁰Pb dating of sediment cores is a widely used tool to reconstruct ecosystem evolution and historical pollution during the last century. Although ²²⁶Ra can be determined by gamma spectrometry, this method has severe limitations, including sample size requirements and long counting times. In this work, we propose a new strategy based on the analysis of ²¹⁰Pb through ²¹⁰Po in equilibrium by alpha spectrometry, followed by the determination of ²²⁶Ra (base or supported ²¹⁰Pb) by liquid scintillation, without any further chemical purification and with a higher sample throughput. Although gamma spectrometry might still be required to determine ¹³⁷Cs as an independent tracer, the effort can then be focused only on those sections dated around 1963, when maximum activities are expected. In this work, we optimized the counting conditions, calibrated the system for changing quenching, and described the new method to determine ²²⁶Ra in small sediment samples, after ²¹⁰Po determination, allowing a more precise determination of excess ²¹⁰Pb (²¹⁰Pb(ex)). The method was validated with reference materials IAEA-384, IAEA-385, and IAEA-313.
Propulsion engineering study for small-scale Mars missions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitehead, J.
1995-09-12
Rocket propulsion options for small-scale Mars missions are presented and compared, particularly for the terminal landing maneuver and for sample return. Mars landing has a low propulsive Δv requirement on a ~1-minute time scale, but at a high acceleration. High thrust-to-weight liquid rocket technologies, or advanced pulse-capable solids, developed during the past decade for missile defense, are therefore more appropriate for small Mars landers than are conventional space propulsion technologies. The advanced liquid systems are characterized by compact, lightweight thrusters having high chamber pressures and short lifetimes. Blowdown or regulated pressure-fed operation can satisfy the Mars landing requirement, but hardware mass can be reduced by using pumps. Aggressive terminal landing propulsion designs can enable post-landing hop maneuvers for some surface mobility. The Mars sample return mission requires a small high-performance launcher having either solid motors or miniature pump-fed engines. Terminal propulsion for 100 kg Mars landers is within the realm of flight-proven thruster designs, but custom tankage is desirable. Landers on a 10 kg scale are also feasible, using technology that has been demonstrated but not previously flown in space. The number of sources and the selection of components are extremely limited on this smallest scale, so some customized hardware is required. A key characteristic of kilogram-scale propulsion is that gas jets are much lighter than liquid thrusters for reaction control. The mass and volume of tanks for inert gas can be eliminated by systems which generate gas as needed from a liquid or a solid, but these have virtually no space flight history. Mars return propulsion is a major engineering challenge; Earth launch is the only previously solved propulsion problem requiring similar or greater performance.
Small-scale dynamic confinement gap test
NASA Astrophysics Data System (ADS)
Cook, Malcolm
2011-06-01
Gap tests are routinely used to ascertain the shock sensitiveness of new explosive formulations. The tests are popular since they are easy and relatively cheap to perform. However, with modern insensitive formulations with large critical diameters, large test samples are required. This can make testing and screening of new formulations expensive, since large quantities of test material are required. Thus a new test that uses significantly smaller sample quantities would be very beneficial. In this paper we describe a new small-scale test that has been designed using our CHARM ignition and growth routine in the DYNA2D hydrocode. The new test is a modified gap test and uses detonating nitromethane to provide dynamic confinement (instead of a thick metal case) whilst exposing the sample to a long-duration shock wave. The long-duration shock wave allows less reactive materials that are below their critical diameter more time to react. We present details on the modelling of the test together with some preliminary experiments to demonstrate the potential of the new test method.
Code of Federal Regulations, 2014 CFR
2014-07-01
... as a Certified Public Accountant in the United States and a citizen of the United States, or be... of this section and § 80.415, including work papers. (G) Reports prepared for submission to EPA, and any work papers related to such reports. (vi) Inspections and audits by EPA may include taking samples...
Evaluation of respondent-driven sampling.
McCreesh, Nicky; Frost, Simon D W; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda N; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G
2012-01-01
Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total population data. Total population data on age, tribe, religion, socioeconomic status, sexual activity, and HIV status were available on a population of 2402 male household heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, using current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). We recruited 927 household heads. Full and small RDS samples were largely representative of the total population, but both samples underrepresented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven sampling statistical inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven sampling bootstrap 95% confidence intervals included the population proportion. Respondent-driven sampling produced a generally representative sample of this well-connected nonhidden population. However, current respondent-driven sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. 
Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling, and caution is required when interpreting findings based on this sampling method.
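One of the standard RDS inference methods evaluated in studies like this is the Volz–Heckathorn (RDS-II) estimator, which weights each recruit by the inverse of their reported network degree. The sketch below uses invented data, not the Uganda cohort.

```python
def rds_ii_estimate(traits, degrees):
    """Volz-Heckathorn (RDS-II) estimator of trait prevalence:
    each recruit is weighted by 1/degree, since well-connected people
    are more likely to be reached by recruitment chains.
    Illustrative data only."""
    inv = [1 / d for d in degrees]
    num = sum(w for w, t in zip(inv, traits) if t)
    return num / sum(inv)

# High-degree carriers are oversampled by the chains; inverse-degree
# weighting pulls the naive sample proportion back down.
traits = [True, True, True, False, False, False, False, False]
degrees = [20, 15, 10, 4, 5, 3, 4, 6]
naive = sum(traits) / len(traits)            # unweighted proportion
adjusted = rds_ii_estimate(traits, degrees)
print(adjusted < naive)
```

The estimator corrects for unequal inclusion probabilities, but as the study found, such corrections do not necessarily remove bias arising from how the recruitment chains actually propagate.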
Improving the analysis of composite endpoints in rare disease trials.
McMenamin, Martina; Berglind, Anna; Wason, James M S
2018-05-22
Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they are in the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect, the power of the augmented binary method is 20-55% compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. For the difference in response probabilities, the methods exhibit similar power, but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints. 
We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.
NMR methods for metabolomics of mammalian cell culture bioreactors.
Aranibar, Nelly; Reily, Michael D
2014-01-01
Metabolomics has become an important tool for measuring pools of small molecules in mammalian cell cultures expressing therapeutic proteins. NMR spectroscopy has played an important role, largely because it requires minimal sample preparation, does not require chromatographic separation, and is quantitative. The concentrations of large numbers of small molecules in the extracellular media or within the cells themselves can be measured directly on the culture supernatant and on the supernatant of the lysed cells, respectively, and correlated with endpoints such as titer, cell viability, or glycosylation patterns. The observed changes can be used to generate hypotheses by which these parameters can be optimized. This chapter focuses on the sample preparation, data acquisition, and analysis to get the most out of NMR metabolomics data from CHO cell cultures but could easily be extended to other in vitro culture systems.
NK sensitivity of neuroblastoma cells determined by a highly sensitive coupled luminescent method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogbomo, Henry; Hahn, Anke; Geiler, Janina
2006-01-06
The measurement of natural killer (NK) cell toxicity against tumor or virus-infected cells, especially in cases with small blood samples, requires highly sensitive methods. Here, a coupled luminescent method (CLM) based on glyceraldehyde-3-phosphate dehydrogenase release from injured target cells was used to evaluate the cytotoxicity of interleukin-2 activated NK cells against neuroblastoma cell lines. In contrast to most other methods, CLM does not require the pretreatment of target cells with labeling substances, which could be toxic or radioactive. Effective killing of tumor cells was achieved at low effector/target ratios ranging from 0.5:1 to 4:1. CLM provides a highly sensitive, safe, and fast procedure for measurement of NK cell activity with small blood samples such as those obtained from pediatric patients.
Rosenblum, Michael A; Laan, Mark J van der
2009-01-07
The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
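The tail-bound construction can be sketched directly. For observations in [0, 1], Bernstein's inequality gives P(|X̄ − μ| ≥ t) ≤ 2·exp(−n·t²/(2σ² + 2t/3)); setting the right-hand side to α and solving the resulting quadratic in t yields the interval half-width. The sketch below plugs in the sample variance for simplicity, which is a shortcut; a fully rigorous interval of the kind the authors describe would use a valid variance bound:

```python
import math
import statistics

def bernstein_ci(xs, alpha=0.05):
    """CI for the mean of observations in [0, 1] from Bernstein's inequality:
        P(|mean - mu| >= t) <= 2 * exp(-n t^2 / (2 sigma^2 + 2 t / 3)).
    Solving n t^2 - (2/3) ln(2/alpha) t - 2 sigma^2 ln(2/alpha) = 0 for t
    gives the half-width. Plug-in sample variance is a simplification."""
    n = len(xs)
    mean = sum(xs) / n
    var = statistics.pvariance(xs)
    L = math.log(2.0 / alpha)
    b = 2.0 * L / 3.0
    t = (b + math.sqrt(b * b + 8.0 * n * var * L)) / (2.0 * n)
    return mean - t, mean + t

def normal_ci(xs, alpha=0.05):
    """Standard large-sample (CLT) interval, for comparison."""
    n = len(xs)
    mean = sum(xs) / n
    se = math.sqrt(statistics.pvariance(xs) / n)
    z = 1.959963984540054  # 97.5th percentile of N(0, 1)
    return mean - z * se, mean + z * se

data = [0.1, 0.4, 0.35, 0.8, 0.2, 0.55, 0.3, 0.9, 0.15, 0.6]
lo_b, hi_b = bernstein_ci(data)
lo_n, hi_n = normal_ci(data)
print("Bernstein:", (round(lo_b, 3), round(hi_b, 3)))
print("Normal:   ", (round(lo_n, 3), round(hi_n, 3)))
```

On this small sample the Bernstein interval is roughly twice as wide as the CLT interval, illustrating the width penalty the abstract describes as the price of guaranteed coverage.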
Fong, Erika J.; Huang, Chao; Hamilton, Julie; ...
2015-11-23
Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include a modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully automated fluid handling which accomplishes closed-loop sample collection, system cleaning, and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization and performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.
Tomographic Imaging of Water Injection and Withdrawal in PEMFC Gas Diffusion Layers
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGill U; Gostick, J. T.; Gunterman, H. P.
2010-06-25
X-ray computed tomography was used to visualize the water configurations inside gas diffusion layers for various applied capillary pressures, corresponding to both water invasion and withdrawal. A specialized sample holder was developed to allow capillary pressure control on the small-scale samples required. Tests were performed on GDL specimens with and without hydrophobic treatments.
Model Choice and Sample Size in Item Response Theory Analysis of Aphasia Tests
ERIC Educational Resources Information Center
Hula, William D.; Fergadiotis, Gerasimos; Martin, Nadine
2012-01-01
Purpose: The purpose of this study was to identify the most appropriate item response theory (IRT) measurement model for aphasia tests requiring 2-choice responses and to determine whether small samples are adequate for estimating such models. Method: Pyramids and Palm Trees (Howard & Patterson, 1992) test data that had been collected from…
Pituitary gland volumes in bipolar disorder.
Clark, Ian A; Mackay, Clare E; Goodwin, Guy M
2014-12-01
Bipolar disorder has been associated with increased Hypothalamic-Pituitary-Adrenal axis function. The mechanism is not well understood, but there may be associated increases in pituitary gland volume (PGV) and these small increases may be functionally significant. However, research investigating PGV in bipolar disorder reports mixed results. The aim of the current study was twofold. First, to assess PGV in two novel samples of patients with bipolar disorder and matched healthy controls. Second, to perform a meta-analysis comparing PGV across a larger sample of patients and matched controls. Sample 1 consisted of 23 established patients and 32 matched controls. Sample 2 consisted of 39 medication-naïve patients and 42 matched controls. PGV was measured on structural MRI scans. Seven further studies were identified comparing PGV between patients and matched controls (total n: 244 patients, 308 controls). Both novel samples showed a small (approximately 20 mm³, or 4%), but non-significant, increase in PGV in patients. Combining the two novel samples showed a significant association between age and PGV. Meta-analysis showed a trend towards a larger pituitary gland in patients (effect size: 0.23; CI: -0.14, 0.59). While results suggest a possible small difference in pituitary gland volume between patients and matched controls, larger mega-analyses with sample sizes greater even than those used in the current meta-analysis are still required. There is a small but potentially functionally significant increase in PGV in patients with bipolar disorder compared to controls. Results demonstrate the difficulty of finding potentially important but small effects in functional brain disorders. Copyright © 2014 Elsevier B.V. All rights reserved.
The small-scale treatability study sample exemption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coalgate, J.
1991-01-01
In 1981, the Environmental Protection Agency (EPA) issued an interim final rule that conditionally exempted waste samples "collected solely for the purpose of monitoring or testing to determine their characteristics or composition" from RCRA Subtitle C hazardous waste regulations. This exemption (40 CFR 261.4(d)) applies to the transportation of samples between the generator and testing laboratory, temporary storage of samples at the laboratory prior to and following testing, and storage at a laboratory for specific purposes such as an enforcement action. However, the exclusion did not include large-scale samples used in treatability studies or other testing at pilot plants or other experimental facilities. As a result of comments received by the EPA subsequent to the issuance of the interim final rule, the EPA reopened the comment period on the interim final rule on September 18, 1987, and specifically requested comments on whether or not the sample exclusion should be expanded to include waste samples used in small-scale treatability studies. Almost all responders commented favorably on such a proposal. As a result, the EPA issued a final rule (53 FR 27290, July 19, 1988) conditionally exempting waste samples used in small-scale treatability studies from full regulation under Subtitle C of RCRA. The question of whether or not to extend the exclusion to larger-scale studies as proposed by the Hazardous Waste Treatment Council was deferred until a later date. This Information Brief summarizes the requirements of the small-scale treatability exemption.
ERIC Educational Resources Information Center
Gatti, Mario; Mereu, Maria Grazia; Tagliaferro, Claudio; Markowitsch, Jorg; Neuberger, Robert
Requirements for vocational skills in the engineering industry in Modena, Italy, and Vienna, Austria, were studied. In Modena, employees of a representative sample of 90 small, medium, and large firms in the mechanical processing, agricultural machinery, and sports car manufacturing sectors were interviewed. In Vienna, data were collected through…
Speil, Sidney
1974-01-01
The problems of quantitating chrysotile in water by fiber count techniques are reviewed briefly, and the use of mass quantitation is suggested as a preferable measure. Chrysotile fiber has been found in almost every sample of natural water examined, but transmission electron microscopy (TEM) is generally required because of the small fiber diameters involved. The extreme extrapolation required in mathematically converting a few fibers or fiber fragments under the TEM to the fiber content of a liquid sample casts considerable doubt on the validity of numbers used to compare the chrysotile contents of different liquids. PMID:4470930
Statistical inference involving binomial and negative binomial parameters.
García-Pérez, Miguel A; Núñez-Antón, Vicente
2009-05-01
Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
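The shape of such a test can be sketched with a likelihood-ratio comparison: the parameter before the first success is estimated from geometric (negative binomial with one success) sampling, the parameter after it from binomial sampling. This is a generic sketch of the setting, not the specific tools derived in the paper:

```python
import math
from scipy import stats

def loglik(p, successes, failures):
    """Bernoulli log-likelihood; zero counts contribute nothing."""
    ll = 0.0
    if successes:
        ll += successes * math.log(p)
    if failures:
        ll += failures * math.log(1.0 - p)
    return ll

def lrt_equal_p(k_failures_before, n_after, x_after):
    """Likelihood-ratio test of H0: p_before = p_after, where p_before is
    estimated from k failures observed before the first success (geometric
    sampling) and p_after from x successes in n binomial trials."""
    # Unrestricted MLEs for each phase.
    p1 = 1.0 / (k_failures_before + 1)        # geometric MLE
    p2 = x_after / n_after                    # binomial MLE
    # Restricted MLE pools successes and failures across both phases.
    p0 = (1 + x_after) / (1 + k_failures_before + n_after)
    l1 = (loglik(p1, 1, k_failures_before)
          + loglik(p2, x_after, n_after - x_after))
    l0 = (loglik(p0, 1, k_failures_before)
          + loglik(p0, x_after, n_after - x_after))
    g2 = 2.0 * (l1 - l0)
    return g2, stats.chi2.sf(g2, df=1)  # asymptotic chi-square, df = 1

g2, pval = lrt_equal_p(k_failures_before=9, n_after=20, x_after=10)
print(f"G2 = {g2:.3f}, p = {pval:.3f}")
```

The chi-square reference distribution is only asymptotic, which is precisely why the paper's small-sample accuracy checks matter.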
A Naturalistic Study of Driving Behavior in Older Adults and Preclinical Alzheimer Disease.
Babulal, Ganesh M; Stout, Sarah H; Benzinger, Tammie L S; Ott, Brian R; Carr, David B; Webb, Mollie; Traub, Cindy M; Addison, Aaron; Morris, John C; Warren, David K; Roe, Catherine M
2017-01-01
A clinical consequence of symptomatic Alzheimer's disease (AD) is impaired driving performance. However, decline in driving performance may begin in the preclinical stage of AD. We used a naturalistic driving methodology to examine differences in driving behavior over one year in a small sample of cognitively normal older adults with (n = 10) and without (n = 10) preclinical AD. As expected with a small sample size, there were no statistically significant differences between the two groups, but older adults with preclinical AD drove less often, were less likely to drive at night, and had fewer aggressive behaviors such as hard braking, speeding, and sudden acceleration. The sample size required to power a larger study to determine differences was calculated.
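The sample size calculation mentioned at the end of the abstract typically follows the normal-approximation formula for a two-sample comparison of means, n per group = 2·(z₁₋α/₂ + z_power)²/d², with d the standardized (Cohen's) effect size. The effect sizes below are conventional illustration values, not the study's observed effects:

```python
import math

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Per-group n for a two-sample comparison of means
    (normal approximation): n = 2 (z_{1-alpha/2} + z_power)^2 / d^2."""
    za = 1.959963984540054   # z for two-sided alpha = 0.05
    zb = 0.8416212335729143  # z for power = 0.80
    return math.ceil(2.0 * (za + zb) ** 2 / effect_size ** 2)

# Conventional small / medium / large standardized effects.
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: {n_per_group(d)} per group")
```

A small effect (d = 0.2), plausible for subtle preclinical driving changes, demands several hundred participants per group, which is why a pilot of 10 + 10 cannot reach significance.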
Single point estimation of phenytoin dosing: a reappraisal.
Koup, J R; Gibaldi, M; Godolphin, W
1981-11-01
A previously proposed method for estimating phenytoin dosing requirements from a single serum sample obtained 24 hours after an intravenous loading dose (18 mg/kg) has been re-evaluated. Using more realistic values for the volume of distribution of phenytoin (0.4 to 1.2 L/kg), simulations indicate that the proposed method will fail to consistently predict dosage requirements. Additional simulations indicate that two samples obtained during the 24-hour interval following the IV loading dose could be used to predict the phenytoin dose requirement more reliably. Because of the nonlinear relationship between the phenytoin dose administration rate (RO) and the mean steady-state serum concentration (CSS), small errors in prediction of the required RO result in much larger errors in CSS.
Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B
2018-06-01
The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
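The calculation described in the tutorial can be sketched as an inflation of the individually randomised sample size by a design effect. The form assumed here, DE = 1 + (m − 1)·ρ − m·η with cluster-period size m, within-period ICC ρ, and within-cluster between-period correlation η, is one common CRXO formulation; all numeric inputs (event proportions, ICU size, correlations) are invented illustration values, not registry figures:

```python
import math

def n_individual(p1, p2, alpha=0.05, power=0.8):
    """Subjects per arm for a two-proportion comparison under individual
    randomisation (normal approximation)."""
    za = 1.959963984540054   # z for two-sided alpha = 0.05
    zb = 0.8416212335729143  # z for power = 0.80
    num = (za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return num / (p1 - p2) ** 2

def crxo_icus(p1, p2, m, icc, bpc, alpha=0.05, power=0.8):
    """ICUs needed for a two-period CRXO trial: each ICU contributes m
    patients to each intervention (one intervention per period).
    Assumed design effect: DE = 1 + (m - 1)*icc - m*bpc."""
    de = 1 + (m - 1) * icc - m * bpc
    n_arm = n_individual(p1, p2, alpha, power) * de
    return math.ceil(n_arm / m)

# Hypothetical example: detect a mortality reduction from 10% to 8%,
# with 400 patients per ICU per 12-month period.
icus = crxo_icus(p1=0.10, p2=0.08, m=400, icc=0.03, bpc=0.02)
print("ICUs required:", icus)
```

Note that a larger between-period correlation shrinks the design effect, which is the mechanism by which the CRXO design reduces the requirement relative to a parallel cluster design.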
ERIC Educational Resources Information Center
Fiedler, Klaus; Kareev, Yaakov
2006-01-01
Adaptive decision making requires that contingencies between decision options and their relative assets be assessed accurately and quickly. The present research addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a seemingly paradoxical effect…
ERIC Educational Resources Information Center
Scanlan, Aaron T.; Richter-Stretton, Gina L.; Madueno, Maria C.; Borges, Nattai R.; Fenning, Andrew S.
2017-01-01
Measurement of plasma osmolality (Posm) remains popular for assessing hydration status in exercise science. However, a controlled reliability assessment of micro-osmometry using small sample volumes to measure Posm remains to be performed. This study aimed to examine the reliability of a cryoscopic micro-osmometer requiring 15-µL…
Calvano, Cosima Damiana; van der Werf, Inez Dorothé; Palmisano, Francesco; Sabbatini, Luigia
2015-01-01
Direct on-target plate processing of small (ca. 100 μg) fragments of paint samples for MALDI-MS identification of lipid- and protein-based binders is described. Fragments were fixed on a conventional stainless steel target plate by colloidal graphite followed by in situ fast tryptic digestion and matrix addition. The new protocol was first developed on paint replicas composed of chicken egg, collagen, and cow milk mixed with inorganic pigments and then successfully applied on historical paint samples taken from a fifteenth century Italian panel painting. The present work contributes a step forward in the simplification of binder identification in very small paint samples since no conventional solvent extraction is required, speeding up the whole sample preparation to 10 min and reducing lipid/protein loss.
Thermal probe design for Europa sample acquisition
NASA Astrophysics Data System (ADS)
Horne, Mera F.
2018-01-01
The planned lander missions to the surface of Europa will access samples from the subsurface of the ice in a search for signs of life. A small thermal drill (probe) is proposed to meet the sample requirement of the Science Definition Team's (SDT) report for the Europa mission. The probe is 2 cm in diameter and 16 cm in length and is designed to access the subsurface to a depth of 10 cm and to collect five ice samples of approximately 7 cm³ each. The energy required to penetrate the top 10 cm of ice in a vacuum is approximately 26 Wh, and the energy to melt 7 cm³ of ice is approximately 1.2 Wh. The requirement stated in the SDT report of collecting samples from five different sites can be accommodated with repeated use of the same thermal drill. For smaller sample sizes, a smaller probe of 1.0 cm in diameter with the same 16 cm length could be utilized, which would require approximately 6.4 Wh to penetrate the top 10 cm of ice and 0.02 Wh to collect 0.1 g of sample. The thermal drill has the advantage of simplicity of design and operations and the ability to penetrate ice over a range of densities and hardness while maintaining sample integrity.
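The quoted melt energy is consistent with a simple heat balance, E = m(c·ΔT + L_f): warm the ice from the ambient temperature to the melting point, then supply the latent heat of fusion. The constants below (ice density, a constant specific heat, a roughly 100 K surface temperature) are textbook assumptions for a back-of-envelope check, not values taken from the paper:

```python
# Back-of-envelope check of the melt energy quoted in the abstract.
# Assumptions: ice density 0.917 g/cm^3, constant specific heat
# 2.1 J/(g K) (a simplification; ice specific heat drops at cryogenic
# temperatures), latent heat 334 J/g, surface temperature ~100 K.

RHO_ICE = 0.917      # g / cm^3
C_ICE = 2.1          # J / (g K)
L_FUSION = 334.0     # J / g
T_SURFACE = 100.0    # K
T_MELT = 273.15      # K
J_PER_WH = 3600.0

def melt_energy_wh(volume_cm3):
    """Energy (Wh) to warm ice from the surface temperature and melt it."""
    mass = RHO_ICE * volume_cm3
    joules = mass * (C_ICE * (T_MELT - T_SURFACE) + L_FUSION)
    return joules / J_PER_WH

print(f"7 cm^3 sample: {melt_energy_wh(7.0):.2f} Wh")
per_gram_wh = (C_ICE * (T_MELT - T_SURFACE) + L_FUSION) / J_PER_WH
print(f"0.1 g sample:  {0.1 * per_gram_wh:.3f} Wh")
```

Both results land close to the abstract's figures (about 1.2 Wh and 0.02 Wh), so the quoted numbers are plausibly pure melt energies, with the larger 26 Wh penetration figure presumably covering losses in vacuum.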
Improved Ambient Pressure Pyroelectric Ion Source
NASA Technical Reports Server (NTRS)
Beegle, Luther W.; Kim, Hugh I.; Kanik, Isik; Ryu, Ernest K.; Beckett, Brett
2011-01-01
The detection of volatile vapors of unknown species in a complex field environment is required in many different applications. Mass spectroscopic techniques require subsystems including an ionization unit and a sample transport mechanism. All of these subsystems must have low mass, small volume, and low power, and must be rugged. A volatile molecular detector meeting these requirements, an ambient pressure pyroelectric ion source (APPIS), was recently reported by Caltech researchers for use in in situ environments.
Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi
2014-12-08
Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trait and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for the ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
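The 0-1 knapsack selection step can be sketched with the textbook dynamic program: treat each base classifier's validation accuracy as the value and a redundancy score as the weight, then pick the subset maximising accuracy under a redundancy budget. The scores below are hypothetical, and the paper's exact objective may differ:

```python
def knapsack_select(accuracies, redundancies, budget):
    """Select base classifiers for an ensemble via 0-1 knapsack DP:
    maximise summed accuracy (value) subject to an integer redundancy
    budget (weight)."""
    n = len(accuracies)
    # dp[i][w]: best total accuracy using the first i classifiers, capacity w.
    dp = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        a, r = accuracies[i - 1], redundancies[i - 1]
        for w in range(budget + 1):
            dp[i][w] = dp[i - 1][w]
            if r <= w and dp[i - 1][w - r] + a > dp[i][w]:
                dp[i][w] = dp[i - 1][w - r] + a
    # Backtrack to recover the chosen subset.
    selected, w = [], budget
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i - 1][w]:
            selected.append(i - 1)
            w -= redundancies[i - 1]
    return sorted(selected), dp[n][budget]

# Hypothetical validation accuracies and redundancy scores.
acc = [0.72, 0.68, 0.80, 0.65, 0.74]
red = [3, 2, 5, 1, 4]
subset, total = knapsack_select(acc, red, budget=8)
print(subset, round(total, 2))  # -> [1, 2, 3] 2.13
```

The budget caps how much mutual redundancy the ensemble tolerates, which is one way to trade accuracy against diversity without an exhaustive subset search.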
Wang, Yu Annie; Wu, Di; Auclair, Jared R; Salisbury, Joseph P; Sarin, Richa; Tang, Yang; Mozdzierz, Nicholas J; Shah, Kartik; Zhang, Anna Fan; Wu, Shiaw-Lin; Agar, Jeffery N; Love, J Christopher; Love, Kerry R; Hancock, William S
2017-12-05
With the advent of biosimilars to the U.S. market, it is important to have better analytical tools to ensure product quality from batch to batch. In addition, with the recent popularity of continuous processes for production of biopharmaceuticals, the traditional bottom-up method alone is no longer sufficient for product characterization and quality analysis. The bottom-up method requires large amounts of material for analysis and is labor-intensive and time-consuming. Additionally, in this analysis, digestion of the protein with enzymes such as trypsin can induce artifacts and modifications that increase the complexity of the analysis. On the other hand, a top-down method requires a minimal amount of sample and allows for analysis of the intact protein mass and of the sequence generated from fragmentation within the instrument. However, fragmentation usually occurs at the N-terminal and C-terminal ends of the protein, with less internal fragmentation. Herein, we combine the complementary top-down and bottom-up methods for the characterization of human growth hormone degradation products. Notably, our approach required small amounts of sample, a necessity given the sample constraints of small-scale manufacturing. Using this approach, we were able to characterize various protein variants, including post-translational modifications such as oxidation and deamidation, residual leader sequence, and proteolytic cleavage. Thus, we were able to highlight the complementarity of top-down and bottom-up approaches, which achieved the characterization of a wide range of product variants in samples of human growth hormone secreted from Pichia pastoris.
Wright, Mark H.; Tung, Chih-Wei; Zhao, Keyan; Reynolds, Andy; McCouch, Susan R.; Bustamante, Carlos D.
2010-01-01
Motivation: The development of new high-throughput genotyping products requires a significant investment in testing and training samples to evaluate and optimize the product before it can be used reliably on new samples. One reason for this is that current methods for automated calling of genotypes are based on clustering approaches, which require a large number of samples to be analyzed simultaneously or an extensive training dataset to seed clusters. In systems where inbred samples are of primary interest, current clustering approaches perform poorly due to the inability to clearly identify a heterozygote cluster. Results: As part of the development of two custom single nucleotide polymorphism genotyping products for Oryza sativa (domestic rice), we have developed a new genotype calling algorithm called 'ALCHEMY' based on statistical modeling of the raw intensity data rather than modelless clustering. A novel feature of the model is the ability to estimate and incorporate inbreeding information on a per-sample basis, allowing accurate genotyping of both inbred and heterozygous samples even when analyzed simultaneously. Since clustering is not used explicitly, ALCHEMY performs well on small sample sizes, with accuracy exceeding 99% with as few as 18 samples. Availability: ALCHEMY is available for both commercial and academic use free of charge and distributed under the GNU General Public License at http://alchemy.sourceforge.net/ Contact: mhw6@cornell.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20926420
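The per-sample inbreeding idea can be sketched as a prior adjustment in a model-based caller: under Hardy-Weinberg equilibrium with inbreeding coefficient F, the genotype priors become P(AA) = p² + Fpq, P(AB) = 2pq(1 − F), P(BB) = q² + Fpq, so a high F suppresses the heterozygote class. The Gaussian cluster means and spread below are invented for illustration and are not ALCHEMY's actual intensity model:

```python
import math

def genotype_posterior(x, p_a=0.5, f=0.0, means=(0.1, 0.5, 0.9), sd=0.08):
    """Posterior over genotypes (AA, AB, BB) for a normalised intensity x.
    Gaussian cluster likelihoods with hypothetical means/sd; priors follow
    Hardy-Weinberg adjusted by the inbreeding coefficient f."""
    q = 1.0 - p_a
    priors = (p_a * p_a + f * p_a * q,      # AA
              2.0 * p_a * q * (1.0 - f),    # AB
              q * q + f * p_a * q)          # BB
    likes = [math.exp(-0.5 * ((x - m) / sd) ** 2) for m in means]
    joint = [pr * li for pr, li in zip(priors, likes)]
    z = sum(joint)
    return [j / z for j in joint]

# An intensity midway between the AA and AB clusters: leans heterozygous
# for an outbred sample, but is called homozygous with confidence once
# strong inbreeding is assumed.
print([round(v, 3) for v in genotype_posterior(0.3, f=0.0)])
print([round(v, 3) for v in genotype_posterior(0.3, f=0.99)])
```

This is the sense in which modeling, rather than clustering, lets a caller resolve ambiguous intensities without a heterozygote cluster being visible in the data.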
NASA Astrophysics Data System (ADS)
Abramovich, N. S.; Kovalev, A. A.; Plyuta, V. Y.
1986-02-01
A computer algorithm has been developed to classify the spectral bands of natural scenes on Earth according to their optical characteristics. The algorithm is written in FORTRAN-IV and can be used in spectral data processing programs requiring small data loads. The spectral classifications of several different types of green vegetation canopies are given to illustrate the effectiveness of the algorithm.
NASA Astrophysics Data System (ADS)
Santl, Saso; Carf, Masa; Preseren, Tanja; Jenic, Aljaz
2013-04-01
Water withdrawals, and the consequent reduction of discharges in river streams for different water uses (hydropower, irrigation, etc.), usually impoverish habitat suitability for the naturally present river fish fauna. In Slovenia, the reduction of suitable habitats resulting from water abstractions frequently impacts local brown trout (Salmo trutta) populations. This motivates the establishment of habitat modeling that can qualitatively and quantitatively support decision making in the determination of environmental flow and other mitigation measures. This paper introduces the applied methodology for habitat modeling, in which input data preparation and elaboration must be carried out with the required accuracy. For model development, four representative and heterogeneous sampling sites were chosen. Two sampling sections were located within sections with small hydropower plants and were considered sections affected by water abstractions. The other two sampling sections were chosen where there are no existing water abstractions. Precise bathymetric mapping of the chosen river sections was performed. Topographic data and series of discharge and water level measurements enabled the establishment of calibrated hydraulic models, which provide data on water velocities and depths for the analyzed discharges. Brief field measurements were also performed to gather the required data on dominant and subdominant substrate size and cover type. Since the accuracy of fish distribution at small scales is very important for habitat modeling, a fish sampling method had to be selected and adapted to the existing river microhabitats. The brown trout specimens' locations were collected with two different sampling methods: riverbank observation, which is suitable for adult fish in pools, and electrofishing for locating small fish and fish in riffles or hiding in cover.
Ecological and habitat requirements for fish species vary among fish populations as well as among ecological and hydromorphological types of streams. Therefore, if habitat modeling for brown trout is to be applied in Slovenia, it is necessary to determine the preference requirements of the locally present brown trout populations. For efficient determination of the applied preference functions and the linked fuzzy sets/rules, calibration against field sampling must be performed in addition to expert determination. After this final step, the model is ready to support decision making in the determination of environmental flow and other mitigation measures.
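Preference functions of the kind described are commonly encoded as fuzzy membership curves combined by rules; a minimal sketch uses a triangular membership function and a min (AND) rule over depth and velocity. The breakpoints below are invented for illustration and are not the study's calibrated brown trout preferences:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def suitability(depth_m, velocity_ms):
    """Combine depth and velocity preferences with a min (AND) fuzzy rule.
    Hypothetical breakpoints for an adult fish in a small stream."""
    depth_pref = tri(depth_m, 0.1, 0.6, 1.5)
    vel_pref = tri(velocity_ms, 0.0, 0.3, 0.9)
    return min(depth_pref, vel_pref)

print(round(suitability(0.5, 0.25), 3))   # favourable depth and velocity
print(round(suitability(0.15, 0.85), 3))  # shallow, fast water: unsuitable
```

In a full model such suitability scores are evaluated per hydraulic-model cell and aggregated into weighted usable area for each analyzed discharge, which is what supports the environmental flow decision.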
Huang, Zixia; Gallot, Aurore; Lao, Nga T; Puechmaille, Sébastien J; Foley, Nicole M; Jebb, David; Bekaert, Michaël; Teeling, Emma C
2016-01-01
The acquisition of tissue samples from wild populations is a constant challenge in conservation biology, especially for endangered and protected species where nonlethal sampling is the only option. Whole blood has been suggested as a nonlethal sample type that contains a high percentage of bodywide and genomewide transcripts and therefore can be used to assess the transcriptional status of an individual, and to infer a high percentage of the genome. However, only limited quantities of blood can be nonlethally sampled from small species, and it is not known whether enough genetic material is contained in only a few drops of blood, which represents the upper limit of sample collection for some small species. In this study, we developed a nonlethal sampling method, the laboratory protocols and a bioinformatic pipeline to sequence and assemble the whole blood transcriptome, using Illumina RNA-Seq, from wild greater mouse-eared bats (Myotis myotis). For optimal results, both ribosomal and globin RNAs must be removed before library construction. DNase treatment is recommended but not required, enabling the use of smaller amounts of starting RNA. A large proportion of protein-coding genes (61%) in the genome were expressed in the blood transcriptome, comparable to brain (65%), kidney (63%) and liver (58%) transcriptomes, and up to 99% of the mitogenome (excluding the D-loop) was recovered in the RNA-Seq data. In conclusion, this nonlethal blood sampling method provides an opportunity for genomewide transcriptomic study of small, endangered or critically protected species, without sacrificing any individuals. © 2015 John Wiley & Sons Ltd.
Experiments with a small behaviour controlled planetary rover
NASA Technical Reports Server (NTRS)
Miller, David P.; Desai, Rajiv S.; Gat, Erann; Ivlev, Robert; Loch, John
1993-01-01
A series of experiments that were performed on the Rocky 3 robot is described. Rocky 3 is a small autonomous rover capable of navigating through rough outdoor terrain to a predesignated area, searching that area for soft soil, acquiring a soil sample, and depositing the sample in a container at its home base. The robot is programmed according to a reactive behavior control paradigm using the ALFA programming language. This style of programming produces robust autonomous performance while requiring significantly less computational resources than more traditional mobile robot control systems. The code for Rocky 3 runs on an eight-bit processor and uses about 10 KB of memory.
Toward high-resolution NMR spectroscopy of microscopic liquid samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, Mark C.; Mehta, Hardeep S.; Chen, Ying
A longstanding limitation of high-resolution NMR spectroscopy is the requirement for samples to have macroscopic dimensions. Commercial probes, for example, are designed for volumes of at least 5 µL, in spite of decades of work directed toward the goal of miniaturization. Progress in miniaturizing inductive detectors has been limited by a perceived need to meet two technical requirements: (1) minimal separation between the sample and the detector, which is essential for sensitivity, and (2) near-perfect magnetic-field homogeneity at the sample, which is typically needed for spectral resolution. The first of these requirements is real, but the second can be relaxed, as we demonstrate here. By using pulse sequences that yield high-resolution spectra in an inhomogeneous field, we eliminate the need for near-perfect field homogeneity and the accompanying requirement for susceptibility matching of microfabricated detector components. With this requirement removed, typical imperfections in microfabricated components can be tolerated, and detector dimensions can be matched to those of the sample, even for samples of volume << 5 µL. Pulse sequences that are robust to field inhomogeneity thus enable small-volume detection with optimal sensitivity. We illustrate the potential of this approach to miniaturization by presenting spectra acquired with a flat-wire detector that can easily be scaled to subnanoliter volumes. In particular, we report high-resolution NMR spectroscopy of an alanine sample of volume 500 pL.
Fast Cooling and Vitrification of Aqueous Solutions for Cryopreservation
NASA Astrophysics Data System (ADS)
Warkentin, Matt; Husseini, Naji; Berejnov, Viatcheslav; Thorne, Robert
2006-03-01
In many applications, a small volume of aqueous solution must be cooled at a rate sufficient to produce amorphous solid water. Two prominent examples are flash-freezing of protein crystals for X-ray data collection and freezing of cells (e.g. spermatozoa) for cryopreservation. The cooling rate required to vitrify pure water (~10^6 K/s) is unattainable for volumes that might contain cells or protein crystals, but the required rate can be reduced by adding cryoprotectants. We report the first measurements of the critical concentration required to produce a vitrified sample as a function of the sample's volume, the cryogen into which the sample is plunged, and the temperature of the cryogen, for a wide range of cryoprotectants. These experiments have broad practical consequences for cryopreservation, and provide insight into the physics of glass formation in aqueous systems.
Decoder calibration with ultra small current sample set for intracortical brain-machine interface
NASA Astrophysics Data System (ADS)
Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping
2018-04-01
Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration requires a relatively large current sample set. The aim of this study was to develop an effective decoder calibration method that achieves good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on movement and sensory paradigms. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, making it possible to exploit large historical data when decoding current data. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated with the PDA method achieved much better and more robust performance in all sessions than the three other calibration methods in both monkeys. Significance. (1) This study brought transfer learning theory into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data here were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement and sensory paradigms, indicating viable generalization. By reducing the demand for large current training data, this new method may facilitate the application of intracortical brain-machine interfaces in clinical practice.
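The PCA-based domain adaptation step can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the plain SVD-based PCA, and the orthogonal-Procrustes rotation used to align the two subspaces are assumptions made for illustration only.

```python
import numpy as np

def pca_basis(X, k):
    # Top-k principal axes of data X (n_samples x n_features),
    # returned as an (n_features x k) matrix with orthonormal columns.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T

def align_to_source(X_hist, X_curr, k):
    # Project large historical data and an ultra-small current sample set
    # onto their own k-dim PCA subspaces, then rotate the current basis
    # onto the historical one (orthogonal Procrustes) so a decoder trained
    # on historical projections can be reused on current projections.
    B_h = pca_basis(X_hist, k)
    B_c = pca_basis(X_curr, k)
    U, _, Vt = np.linalg.svd(B_c.T @ B_h)
    R = U @ Vt  # k x k rotation aligning current axes to historical axes
    Z_hist = (X_hist - X_hist.mean(axis=0)) @ B_h
    Z_curr = (X_curr - X_curr.mean(axis=0)) @ B_c @ R
    return Z_hist, Z_curr
```

In this sketch, a decoder fit on `Z_hist` would be applied directly to `Z_curr`, which is how only a handful of current trials could suffice for recalibration.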
Vitamin D receptor gene and osteoporosis - author's response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Looney, J.E.; Yoon, Hyun Koo; Fischer, M.
1996-04-01
We appreciate the comments of Dr. Nguyen et al. about our recent study, but we disagree with their suggestion that the lack of an association between low bone density and the BB VDR genotype, which we reported, is an artifact generated by the small sample size. Furthermore, our results are consistent with similar conclusions reached by a number of other investigators, as recently reported by Peacock. Peacock states, "Taken as a whole, the results of studies outlined ... indicate that VDR alleles cannot account for the major part of the heritable component of bone density as indicated by Morrison et al." The majority of the 17 studies cited in this editorial could not confirm an association between the VDR genotype and the bone phenotype. Surely one cannot criticize this combined work as representing an artifact because of a too small sample size. We do not dispute the suggestion by Nguyen et al. that large sample sizes are required to analyze small biological effects. This is evident in both Peacock's summary and in their own bone density studies. We did not design our study with a larger sample size because, based on the work of Morrison et al., we had hypothesized a large biological effect; large sample sizes are only needed for small biological effects. 4 refs.
OSIRIS-REx Flight Dynamics and Navigation Design
NASA Astrophysics Data System (ADS)
Williams, B.; Antreasian, P.; Carranza, E.; Jackman, C.; Leonard, J.; Nelson, D.; Page, B.; Stanbridge, D.; Wibben, D.; Williams, K.; Moreau, M.; Berry, K.; Getzandanner, K.; Liounis, A.; Mashiku, A.; Highsmith, D.; Sutter, B.; Lauretta, D. S.
2018-06-01
OSIRIS-REx is the first NASA mission to return a sample of an asteroid to Earth. Navigation and flight dynamics for the mission to acquire and return a sample of asteroid 101955 Bennu establish many firsts for space exploration. These include relatively small orbital maneuvers that are precise to ~1 mm/s, close-up operations in a captured orbit about an asteroid that is small in size and mass, and planning and orbit phasing to revisit the same spot on Bennu in similar lighting conditions. After preliminary surveys and close approach flyovers of Bennu, the sample site will be scientifically characterized and selected. A robotic shock-absorbing arm with an attached sample collection head mounted on the main spacecraft bus acquires the sample, requiring navigation to Bennu's surface. A touch-and-go sample acquisition maneuver will result in the retrieval of at least 60 grams of regolith, and up to several kilograms. The flight activity concludes with a return cruise to Earth and delivery of the sample return capsule (SRC) for landing and sample recovery at the Utah Test and Training Range (UTTR).
Temperature Gradients on the Cell Wall in the Critical Viscosity Experiment
NASA Technical Reports Server (NTRS)
Berg, Robert F.; Moldover, Michael R.
1993-01-01
Because of the diverging susceptibility (∂ρ/∂T) near the liquid-vapor critical point, temperature gradients must be kept small to maintain adequate sample homogeneity. In our Science Requirements Document we paid particular attention to radial density gradients caused by equilibration of the xenon sample. Axial density gradients were addressed through the requirement that the cell's copper wall have a gradient less than 22 microK/m. This report re-examines the cell wall's temperature distribution in more detail by estimating all known significant contributions to temperature differences on the cell's wall.
Evaluation of Respondent-Driven Sampling
McCreesh, Nicky; Frost, Simon; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda Ndagire; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G
2012-01-01
Background Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex-workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total-population data. Methods Total-population data on age, tribe, religion, socioeconomic status, sexual activity and HIV status were available on a population of 2402 male household-heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, employing current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). Results We recruited 927 household-heads. Full and small RDS samples were largely representative of the total population, but both samples under-represented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven-sampling statistical-inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven-sampling bootstrap 95% confidence intervals included the population proportion. Conclusions Respondent-driven sampling produced a generally representative sample of this well-connected non-hidden population. However, current respondent-driven-sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. 
Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling, and caution is required when interpreting findings based on this sampling method. PMID:22157309
Investigating Astromaterials Curation Applications for Dexterous Robotic Arms
NASA Technical Reports Server (NTRS)
Snead, C. J.; Jang, J. H.; Cowden, T. R.; McCubbin, F. M.
2018-01-01
The Astromaterials Acquisition and Curation office at NASA Johnson Space Center is currently investigating tools and methods that will enable the curation of future astromaterials collections. Size and temperature constraints for astromaterials to be collected by current and future proposed missions will require the development of new robotic sample and tool handling capabilities. NASA Curation has investigated the application of robot arms in the past, and robotic 3-axis micromanipulators are currently in use for small particle curation in the Stardust and Cosmic Dust laboratories. While 3-axis micromanipulators have been extremely successful for activities involving the transfer of isolated particles in the 5-20 micron range (e.g. from microscope slide to epoxy bullet tip, beryllium SEM disk), their limited ranges of motion and lack of yaw, pitch, and roll degrees of freedom restrict their utility in other applications. For instance, curators removing particles from cosmic dust collectors by hand often employ scooping and rotating motions to successfully free trapped particles from the silicone oil coatings. Similar scooping and rotating motions are also employed when isolating a specific particle of interest from an aliquot of crushed meteorite. While cosmic dust curators have been remarkably successful with these kinds of particle manipulations using handheld tools, operator fatigue limits the number of particles that can be removed during a given extraction session. The challenges for curation of small particles will be exacerbated by mission requirements that samples be processed in N2 sample cabinets (i.e. gloveboxes). We have been investigating the use of compact robot arms to facilitate sample handling within gloveboxes. Six-axis robot arms potentially have applications beyond small particle manipulation. 
For instance, future sample return missions may involve biologically sensitive astromaterials that can be easily compromised by physical interaction with a curator; other potential future returned samples may require cryogenic curation. Robot arms may be combined with high-resolution cameras within a sample cabinet and controlled remotely by a curator. Sophisticated robot arm and hand combination systems can be programmed to mimic the movements of a curator wearing a data glove; successful implementation of such a system may ultimately allow a curator to virtually operate in a nitrogen, cryogenic, or biologically sensitive environment with dexterity comparable to that of a curator physically handling samples in a glove box.
Compact ultrahigh vacuum sample environments for x-ray nanobeam diffraction and imaging.
Evans, P G; Chahine, G; Grifone, R; Jacques, V L R; Spalenka, J W; Schülli, T U
2013-11-01
X-ray nanobeams present the opportunity to obtain structural insight in materials with small volumes or nanoscale heterogeneity. The effective spatial resolution of the information derived from nanobeam techniques depends on the stability and precision with which the relative position of the x-ray optics and sample can be controlled. Nanobeam techniques include diffraction, imaging, and coherent scattering, with applications throughout materials science and condensed matter physics. Sample positioning is a significant mechanical challenge for x-ray instrumentation providing vacuum or controlled gas environments at elevated temperatures. Such environments often have masses that are too large for nanopositioners capable of the required positional accuracy of the order of a small fraction of the x-ray spot size. Similarly, the need to place x-ray optics as close as 1 cm to the sample places a constraint on the overall size of the sample environment. We illustrate a solution to the mechanical challenge in which compact ion-pumped ultrahigh vacuum chambers with masses of 1-2 kg are integrated with nanopositioners. The overall size of the environment is sufficiently small to allow their use with zone-plate focusing optics. We describe the design of sample environments for elevated-temperature nanobeam diffraction experiments and demonstrate in situ diffraction, reflectivity, and scanning nanobeam imaging of the ripening of Au crystallites on Si substrates.
On the influence of crystal size and wavelength on native SAD phasing.
Liebschner, Dorothee; Yamada, Yusuke; Matsugaki, Naohiro; Senda, Miki; Senda, Toshiya
2016-06-01
Native SAD is an emerging phasing technique that uses the anomalous signal of native heavy atoms to obtain crystallographic phases. The method does not require specific sample preparation to add anomalous scatterers, as the light atoms contained in the native sample are used as marker atoms. The most abundant anomalous scatterer used for native SAD, which is present in almost all proteins, is sulfur. However, the absorption edge of sulfur is at low energy (2.472 keV = 5.016 Å), which makes it challenging to carry out native SAD phasing experiments as most synchrotron beamlines are optimized for shorter wavelength ranges where the anomalous signal of sulfur is weak; for longer wavelengths, which produce larger anomalous differences, the absorption of X-rays by the sample, solvent, loop and surrounding medium (e.g. air) increases tremendously. Therefore, a compromise has to be found between measuring strong anomalous signal and minimizing absorption. It was thus hypothesized that shorter wavelengths should be used for large crystals and longer wavelengths for small crystals, but no thorough experimental analyses have been reported to date. To study the influence of crystal size and wavelength, native SAD experiments were carried out at different wavelengths (1.9 and 2.7 Å with a helium cone; 3.0 and 3.3 Å with a helium chamber) using lysozyme and ferredoxin reductase crystals of various sizes. For the tested crystals, the results suggest that larger sample sizes do not have a detrimental effect on native SAD data and that long wavelengths give a clear advantage with small samples compared with short wavelengths. The resolution dependency of substructure determination was analyzed and showed that high-symmetry crystals with small unit cells require higher resolution for the successful placement of heavy atoms.
Khatkar, Mehar S; Nicholas, Frank W; Collins, Andrew R; Zenger, Kyall R; Cavanagh, Julie A L; Barris, Wes; Schnabel, Robert D; Taylor, Jeremy F; Raadsma, Herman W
2008-04-24
The extent of linkage disequilibrium (LD) within a population determines the number of markers that will be required for successful association mapping and marker-assisted selection. Most studies on LD in cattle reported to date are based on microsatellite markers or small numbers of single nucleotide polymorphisms (SNPs) covering one or only a few chromosomes. This is the first comprehensive study on the extent of LD in cattle by analyzing data on 1,546 Holstein-Friesian bulls genotyped for 15,036 SNP markers covering all regions of all autosomes. Furthermore, most studies in cattle have used relatively small sample sizes and, consequently, may have had biased estimates of measures commonly used to describe LD. We examine minimum sample sizes required to estimate LD without bias and loss in accuracy. Finally, relatively little information is available on comparative LD structures including other mammalian species such as human and mouse, and we compare LD structure in cattle with public-domain data from both human and mouse. We computed three LD estimates, D', Dvol and r2, for 1,566,890 syntenic SNP pairs and a sample of 365,400 non-syntenic pairs. Mean D' is 0.189 among syntenic SNPs, and 0.105 among non-syntenic SNPs; mean r2 is 0.024 among syntenic SNPs and 0.0032 among non-syntenic SNPs. All three measures of LD for syntenic pairs decline with distance; the decline is much steeper for r2 than for D' and Dvol. The values of D' and Dvol are quite similar. Significant LD in cattle extends to 40 kb (when estimated as r2) and 8.2 Mb (when estimated as D'). The mean values for LD at large physical distances are close to those for non-syntenic SNPs. Minor allelic frequency threshold affects the distribution and extent of LD. For unbiased and accurate estimates of LD across marker intervals spanning < 1 kb to > 50 Mb, minimum sample sizes of 400 (for D') and 75 (for r2) are required. The bias due to small sample sizes increases with the inter-marker interval.
LD in cattle is much less extensive than in a mouse population created from crossing inbred lines, and more extensive than in humans. For association mapping in Holstein-Friesian cattle, for a given design, at least one SNP is required for each 40 kb, giving a total requirement of at least 75,000 SNPs for a low power whole-genome scan (median r2 > 0.19) and up to 300,000 markers at 10 kb intervals for a high power genome scan (median r2 > 0.62). For estimation of LD by D' and Dvol with sufficient precision, a sample size of at least 400 is required, whereas for r2 a minimum sample of 75 is adequate.
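The two LD measures compared above can be computed directly from haplotype and allele frequencies. The sketch below is the textbook definition of D' and r2 for two biallelic loci, not the study's pipeline; the function name is invented, and degenerate allele frequencies (0 or 1) are not handled.

```python
def ld_stats(pAB, pA, pB):
    # D' and r^2 for two biallelic loci, given the frequency pAB of the
    # A-B haplotype and the allele frequencies pA and pB.
    # Assumes 0 < pA < 1 and 0 < pB < 1 (degenerate cases not handled).
    D = pAB - pA * pB
    if D >= 0:
        Dmax = min(pA * (1 - pB), (1 - pA) * pB)
    else:
        Dmax = min(pA * pB, (1 - pA) * (1 - pB))
    Dprime = abs(D) / Dmax
    r2 = D * D / (pA * (1 - pA) * pB * (1 - pB))
    return Dprime, r2
```

For example, `ld_stats(0.5, 0.5, 0.5)` returns `(1.0, 1.0)` (complete LD), while `ld_stats(0.25, 0.5, 0.5)` returns `(0.0, 0.0)` (linkage equilibrium). Because r2 is penalized by allele-frequency differences while D' is not, r2 typically decays faster with distance, consistent with the pattern reported above.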
Shah, R; Worner, S P; Chapman, R B
2012-10-01
Pesticide resistance monitoring includes resistance detection and subsequent documentation/measurement. Resistance detection would require at least one (≥1) resistant individual(s) to be present in a sample to initiate management strategies. Resistance documentation, on the other hand, would attempt to get an estimate of the entire population (≥90%) of the resistant individuals. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans to detect resistant individuals and to document their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans, while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, the sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1%) and 50 (10% and 20%); whereas, when resistant individuals were patchily distributed, using systematic sampling, the sample sizes required for such detection were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 would be required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at a frequency of 10% and 20%, respectively; whereas, when resistant individuals were patchily distributed, using systematic sampling, a sample size of 3000 and 1500, respectively, was necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
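For randomly dispersed resistant individuals, the detection sample size can be approximated with a simple binomial argument: the probability that all n sampled individuals are susceptible is (1 − f)^n, so one solves 1 − (1 − f)^n ≥ 0.95 for n. The closed form below is only an illustration of the detection problem; it will not reproduce the simulation-based figures above exactly, since those reflect the model's specific conditions.

```python
import math

def detection_sample_size(freq, prob=0.95):
    # Smallest n with 1 - (1 - freq)^n >= prob, i.e. probability >= prob
    # that a random sample of n individuals contains at least one
    # resistant individual when resistance is at frequency freq.
    return math.ceil(math.log(1.0 - prob) / math.log(1.0 - freq))
```

Under this idealized assumption, `detection_sample_size(0.01)` gives 299 and `detection_sample_size(0.1)` gives 29, showing the steep growth of required sample size as resistance frequency falls.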
The feasibility of recharge rate determinations using the steady- state centrifuge method
Nimmo, J.R.; Stonestrom, David A.; Akstin, K.C.
1994-01-01
The establishment of steady unsaturated flow in a centrifuge permits accurate measurement of small values of hydraulic conductivity (K). This method can provide a recharge determination if it is applied to an unsaturated core sample from a depth at which gravity alone drives the flow. A K value determined at the in situ water content indicates the long-term average recharge rate at a point. Tests of this approach have been made at two sites. For sandy core samples a better knowledge of the matric pressure profiles is required before a recharge rate can be determined. Fine-textured cores required new developments of apparatus and procedures, especially for making centrifuge measurements with minimal compaction of the samples.
Rock bit requires no flushing medium to maintain drilling speed
NASA Technical Reports Server (NTRS)
1965-01-01
Steel drill bit having terraces of teeth intersected by spiral grooves permits the boring of small holes through rock with low power. The cuttings are stored in a chamber behind the cutting head. Could be used as a sampling device.
Shore, Sabrina; Henderson, Jordana M; Lebedev, Alexandre; Salcedo, Michelle P; Zon, Gerald; McCaffrey, Anton P; Paul, Natasha; Hogrefe, Richard I
2016-01-01
For most sample types, the automation of RNA and DNA sample preparation workflows enables high throughput next-generation sequencing (NGS) library preparation. Greater adoption of small RNA (sRNA) sequencing has been hindered by high sample input requirements and inherent ligation side products formed during library preparation. These side products, known as adapter dimer, are very similar in size to the tagged library. Most sRNA library preparation strategies thus employ a gel purification step to isolate tagged library from adapter dimer contaminants. At very low sample inputs, adapter dimer side products dominate the reaction and limit the sensitivity of this technique. Here we address the need for improved specificity of sRNA library preparation workflows with a novel library preparation approach that uses modified adapters to suppress adapter dimer formation. This workflow allows for lower sample inputs and elimination of the gel purification step, which in turn allows for an automatable sRNA library preparation protocol.
Boyacı, Ezel; Bojko, Barbara; Reyes-Garcés, Nathaly; Poole, Justen J; Gómez-Ríos, Germán Augusto; Teixeira, Alexandre; Nicol, Beate; Pawliszyn, Janusz
2018-01-18
In vitro high-throughput non-depletive quantitation of chemicals in biofluids is of growing interest in many areas. Some of the challenges facing researchers include the limited volume of biofluids, rapid and high-throughput sampling requirements, and the lack of reliable methods. Coupled to the above, growing interest in the monitoring of kinetics and dynamics of miniaturized biosystems has spurred the demand for development of novel and revolutionary methodologies for analysis of biofluids. The applicability of solid-phase microextraction (SPME) is investigated as a potential technology to fulfill the aforementioned requirements. As analytes with sufficient diversity in their physicochemical features, nicotine, N,N-diethyl-meta-toluamide, and diclofenac were selected as test compounds for the study. The objective was to develop methodologies that would allow repeated non-depletive sampling from 96-well plates, using 100 µL of sample. Initially, thin-film SPME was investigated. Results revealed substantial depletion and consequent disruption in the system. Therefore, new ultra-thin coated fibers were developed. The applicability of this device to the described sampling scenario was tested by determining the protein binding of the analytes. Results showed good agreement with rapid equilibrium dialysis. The presented method allows high-throughput analysis using small volumes, enabling fast and reliable free and total concentration determinations without disruption of the system equilibrium.
Demitri, Nevine; Zoubir, Abdelhak M
2017-01-01
Glucometers present an important self-monitoring tool for diabetes patients and, therefore, must exhibit high accuracy as well as good usability features. Based on an invasive photometric measurement principle that drastically reduces the volume of the blood sample needed from the patient, we present a framework that is capable of dealing with small blood samples while maintaining the required accuracy. The framework consists of two major parts: 1) image segmentation; and 2) convergence detection. Step 1 is based on iterative mode-seeking methods to estimate the intensity value of the region of interest. We present several variations of these methods and give theoretical proofs of their convergence. Our approach is able to deal with changes in the number and position of clusters without any prior knowledge. Furthermore, we propose a method based on sparse approximation to decrease the computational load while maintaining accuracy. Step 2 is achieved by employing temporal tracking and prediction, thereby decreasing the measurement time and thus improving usability. Our framework is tested on several real datasets with different characteristics. We show that we are able to estimate the underlying glucose concentration from much smaller blood samples than is currently state of the art, with sufficient accuracy according to the most recent ISO standards, and we reduce measurement time significantly compared to state-of-the-art methods.
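The iterative mode-seeking idea in step 1 can be sketched with a one-dimensional mean-shift update, which climbs from an initial guess to the nearest density mode of the pixel intensities. This is a generic illustration with assumed names and a Gaussian kernel, not the paper's exact segmentation algorithm.

```python
import numpy as np

def mean_shift_mode(intensities, start, bandwidth, iters=100, tol=1e-6):
    # 1-D mean-shift: repeatedly replace the estimate with the
    # Gaussian-kernel weighted mean of the samples around it, so the
    # estimate converges to a local mode of the intensity distribution.
    x = float(start)
    for _ in range(iters):
        w = np.exp(-0.5 * ((intensities - x) / bandwidth) ** 2)
        x_new = float(np.sum(w * intensities) / np.sum(w))
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x
```

Starting the iteration near the region-of-interest cluster pulls the estimate to that cluster's mode even when other clusters (e.g. background) are present, which is why such methods need no prior knowledge of the number or position of clusters.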
Miyazaki, Tamako; Nakata, Katsushi; Nishimura, Takashi; Abe, Shintaro; Yamashita, Tetsuro; Miyazaki, Masao
2018-02-01
The small Indian mongoose (Herpestes auropunctatus) is an invasive species in Okinawa and Amami-Oshima, Japan. Major strategies for their eradication have been the use of baited traps, which suffer from decreasing efficiency with declining populations and the bycatch of native animals. To address these concerns, mongoose-specific lures are required. In this study, we aimed to identify species- and/or sex-specific compounds from anal sac secretions of small Indian mongooses. Volatile compounds emitted from male and female mongoose anal sac secretions were analyzed by thermal desorption-gas chromatography-mass spectrometry. In addition to several fatty acids, 2-phenylethanol was identified as a minor compound, which is uncommon in mammalian secretions but a dominant odorant in roses. Female samples emitted higher levels of 2-phenylethanol than male samples did. These findings indicate that 2-phenylethanol is a female-specific volatile compound of anal sac secretions in small Indian mongooses, and it may be useful as an ingredient of mongoose-specific scent lures.
Aladdin: Exploration and Sample Return from the Moons of Mars
NASA Technical Reports Server (NTRS)
Pieters, C.; Cheng, A.; Clark, B.; Murchie, S.; Mustard, J.; Zolensky, M.; Papike, J.
2000-01-01
Aladdin is a remote sensing and sample return mission focused on the two small moons of Mars, Phobos and Deimos. Understanding the moons of Mars will help us to understand the early history of Mars itself. Aladdin's primary objective is to acquire well documented, representative samples from both moons and return them to Earth for detailed analyses. Samples arrive at Earth within three years of launch. Aladdin addresses several of NASA's highest priority science objectives: the origin and evolution of the Martian system (one of two silicate planets with satellites) and the composition and nature of small bodies (the building blocks of the solar system). The Aladdin mission has been selected as a finalist in both the 1997 and 1999 Discovery competitions based on the high quality of science it would accomplish. The equivalent of Aladdin's Phase A development has been successfully completed, yielding a high degree of technical maturity. Aladdin uses an innovative flyby sample acquisition method, which has been validated experimentally and does not require soft landing or anchoring. An initial phasing orbit at Mars reduces mission propulsion requirements, enabling Aladdin to use proven, low-risk chemical propulsion with good mass margin. This phasing orbit is followed by a five month elliptical mission during which there are redundant opportunities for acquisition of samples and characterization of their geologic context using remote sensing. The Aladdin mission is a partnership between Brown University, the Johns Hopkins University Applied Physics Laboratory, Lockheed Martin Astronautics, and NASA Johnson Space Center.
The Observational Determination of the Age of the Universe as a Laboratory Exercise.
ERIC Educational Resources Information Center
Cadmus, Robert R., Jr.
1999-01-01
Describes a procedure that allows undergraduate students to determine the approximate age of the universe using their own data. The experiment requires a relatively small telescope with a modest spectrograph. Includes sample data and calculations. (Contains 11 references.) (Author/WRM)
DNA ISOLATION FROM SMALL TISSUE SAMPLES USING SALT AND SPERMINE
Common DNA isolation methods rely upon protein denaturation by organic solvents such as phenol and chloroform. These solvents pose some risk to the user and require special disposal procedures. We have previously reported a method for isolating DNA from peripheral blood lymphocytes...
Liu, Fang
2016-01-01
In both clinical development and post-marketing of a new therapy or a new treatment, incidence of an adverse event (AE) is always a concern. When sample sizes are small, large sample-based inferential approaches on an AE incidence proportion in a certain time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small sample studies and the rare AE case, (1) the confidence level that the incidence proportion of a particular AE p is over or below a threshold, (2) the lower or upper bounds on p with a certain level of confidence, and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement; the interpretation of the results is intuitive. This article also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions.
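The three quantities described above reduce, under a uniform Beta(1, 1) prior, to simple Beta-posterior calculations. The sketch below is a minimal illustration of that idea, not the authors' implementation; the uniform prior and the bisection search are assumptions for the example. For integer Beta parameters the regularized incomplete beta function is a finite binomial sum, so no statistics library is needed:

```python
from math import comb

def beta_cdf(t, a, b):
    """CDF of Beta(a, b) at t, for integer a, b >= 1 (finite binomial sum)."""
    n = a + b - 1
    return sum(comb(n, j) * t**j * (1 - t)**(n - j) for j in range(a, n + 1))

def prob_below(n_patients, n_events, threshold):
    """Posterior confidence that the AE proportion p is below `threshold`,
    assuming a uniform Beta(1, 1) prior, so the posterior is
    Beta(n_events + 1, n_patients - n_events + 1)."""
    return beta_cdf(threshold, n_events + 1, n_patients - n_events + 1)

def upper_bound(n_patients, n_events, confidence):
    """Upper bound u on p such that P(p < u | data) = confidence (bisection)."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if prob_below(n_patients, n_events, mid) < confidence:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

With 0 AEs observed in 20 patients, for instance, the posterior confidence that p < 0.10 comes out near 0.89, and the 95% upper bound on p is roughly 0.13.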
Lorenz, J.J.; McIvor, C.C.; Powell, G.V.N.; Frederick, P.C.
1997-01-01
We describe a 9 m2 drop net and removable walkways designed to quantify densities of small fishes in wetland habitats with low to moderate vegetation density. The method permits the collection of small, quantitative, discrete samples in ecologically sensitive areas by combining rapid net deployment from fixed sites with the carefully contained use of the fish toxicant rotenone. This method requires very little contact with the substrate, causes minimal alteration to the habitat being sampled, samples small fishes in an unbiased manner, and allows for differential sampling of microhabitats within a wetland. When used in dwarf red mangrove (Rhizophora mangle) habitat in southern Everglades National Park and adjacent areas (September 1990 to March 1993), we achieved high recovery efficiencies (78–90%) for five common species <110 mm in length. We captured 20,193 individuals of 26 species. The most abundant fishes were sheepshead minnow Cyprinodon variegatus, goldspotted killifish Floridichthys carpio, rainwater killifish Lucania parva, sailfin molly Poecilia latipinna, and the exotic Mayan cichlid Cichlasoma urophthalmus. The 9 m2 drop net and associated removable walkways are versatile and can be used in a variety of wetland types, including both interior and coastal wetlands with either herbaceous or woody vegetation.
NASA Astrophysics Data System (ADS)
Santos, Sergio; Barcons, Victor; Christenson, Hugo K.; Billingsley, Daniel J.; Bonass, William A.; Font, Josep; Thomson, Neil H.
2013-08-01
A way to operate fundamental mode amplitude modulation atomic force microscopy is introduced which optimizes stability and resolution for a given tip size and shows negligible tip wear over extended time periods (˜24 h). In small amplitude small set-point (SASS) imaging, the cantilever oscillates with sub-nanometer amplitudes in the proximity of the sample, without the requirement of using large drive forces, as the dynamics smoothly lead the tip to the surface through the water layer. SASS is demonstrated on single molecules of double-stranded DNA in ambient conditions where sharp silicon tips (R ˜ 2-5 nm) can resolve the right-handed double helix.
NASA Astrophysics Data System (ADS)
Raju, C.; Vidya, R.
2017-11-01
Chain Sampling Plan is widely used whenever a small sample attributes plan is required for situations involving destructive products coming out of a continuous production process [1, 2]. This paper presents a procedure for the construction and selection of a ChSP-1 plan by attributes inspection based on membership functions [3]. A procedure using a search technique is developed for obtaining the parameters of a single sampling plan for a given set of AQL and LQL values. A sample of tables providing ChSP-1 plans for various combinations of AQL and LQL values is presented [4].
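For context, the operating characteristic of the classical ChSP-1 plan (Dodge's chain sampling) has a closed form: a lot is accepted on zero defectives in a sample of n, or on exactly one defective provided the preceding i samples were defect-free. A minimal sketch of that textbook OC function (not the fuzzy-membership construction of the paper):

```python
def chsp1_accept_prob(n, i, p):
    """ChSP-1 operating characteristic: Pa(p) = P0 + P1 * P0**i, where
    P0 and P1 are the binomial probabilities of 0 and 1 defectives in n."""
    p0 = (1.0 - p) ** n
    p1 = n * p * (1.0 - p) ** (n - 1)
    return p0 + p1 * p0 ** i
```

Sweeping p over candidate (n, i) pairs and requiring a high Pa at the AQL and a low Pa at the LQL reproduces the kind of search procedure the paper describes.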
Estrella, Ruby P; Whitelock, John M; Roubin, Rebecca H; Packer, Nicolle H; Karlsson, Niclas G
2009-01-01
Structural characterization of oligosaccharides from proteoglycans and other glycoproteins is greatly enhanced through the use of mass spectrometry and gel electrophoresis. Sample preparation for these sensitive techniques often requires enzymatic treatments to produce oligosaccharide sequences for subsequent analysis. This chapter describes several small-scale methods for in-gel, on-blot, and in-solution enzymatic digestions in preparation for graphitized carbon liquid chromatography-mass spectrometry (LC-MS) analysis, with specific applications indicated for glycosaminoglycans (GAGs) and N-linked oligosaccharides. In addition, accompanying procedures for oligosaccharide reduction by sodium borohydride, sample desalting via carbon microcolumn, desialylation by sialidase enzyme treatment, and small-scale oligosaccharide species fractionation are included. Fluorophore-assisted carbohydrate electrophoresis (FACE) is another useful method to isolate derivatized oligosaccharides. Overall, the modularity of these techniques provides ease and flexibility for use in conjunction with mass spectrometric and electrophoretic tools for glycomic research studies.
NASA Astrophysics Data System (ADS)
Yuan, Chao; Chareyre, Bruno; Darve, Félix
2016-09-01
A pore-scale model is introduced for two-phase flow in dense packings of polydisperse spheres. The model is developed as a component of a more general hydromechanical coupling framework based on the discrete element method, which will be elaborated in future papers and will apply to various processes of interest in soil science, in geomechanics and in oil and gas production. Here the emphasis is on the generation of a network of pores mapping the void space between spherical grains, and the definition of local criteria governing the primary drainage process. The pore space is decomposed by Regular Triangulation, from which a set of pores connected by throats are identified. A local entry capillary pressure is evaluated for each throat, based on the balance of capillary pressure and surface tension at equilibrium. The model reflects the possible entrapment of disconnected patches of the receding wetting phase. It is validated by a comparison with drainage experiments. In the last part of the paper, a series of simulations are reported to illustrate size and boundary effects, key questions when studying small samples made of spherical particles, be it in simulations or experiments. Repeated tests on samples of different sizes yield water-content evolutions that are not only scattered but also strongly biased for small sample sizes. More than 20,000 spheres are needed to reduce the bias on saturation below 0.02. Additional statistics are generated by subsampling a large sample of 64,000 spheres. They suggest that the minimal sampling volume for evaluating saturation is one hundred times greater than the sampling volume needed for measuring porosity with the same accuracy. This requirement in terms of sample size induces a need for efficient computer codes. The method described herein has a low algorithmic complexity in order to satisfy this requirement. It will be well suited to further developments toward coupled flow-deformation problems in which evolution of the microstructure requires frequent updates of the pore network.
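The local entry criterion mentioned above is, in its simplest form, a Young–Laplace balance: the non-wetting phase invades a throat once the applied capillary pressure exceeds an entry value set by surface tension and the throat's effective radius. A minimal sketch of that criterion (the paper's actual criterion accounts for the pore-scale geometry of throats between spheres; the idealized circular-throat formula below is a simplifying assumption):

```python
from math import cos

def entry_capillary_pressure(gamma, r_throat, contact_angle_rad=0.0):
    """Young-Laplace entry pressure for an idealized circular throat:
    Pc_entry = 2 * gamma * cos(theta) / r."""
    return 2.0 * gamma * cos(contact_angle_rad) / r_throat

def drains(p_capillary, gamma, r_throat):
    """A throat is invaded by the non-wetting phase when the applied
    capillary pressure exceeds its entry pressure."""
    return p_capillary > entry_capillary_pressure(gamma, r_throat)
```

For an air-water interface (gamma ≈ 0.0728 N/m) and a 100 μm throat, the entry pressure is about 1.46 kPa; narrower throats drain later in the pressure ramp, which is what produces entrapped patches of the receding wetting phase.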
Cappione, Amedeo; Mabuchi, Masaharu; Briggs, David; Nadler, Timothy
2015-04-01
Protein immuno-detection encompasses a broad range of analytical methodologies, including western blotting, flow cytometry, and microscope-based applications. These assays, which detect, quantify, and/or localize expression of one or more proteins in complex biological samples, are reliant upon fluorescent or enzyme-tagged target-specific antibodies. While small molecule labeling kits are available with a range of detection moieties, the workflow is hampered by a requirement for multiple dialysis-based buffer exchange steps that are both time-consuming and subject to sample loss. In a previous study, we briefly described an alternative method for small-scale protein labeling with small molecule dyes whereby all phases of the conjugation workflow could be performed in a single centrifugal diafiltration device. Here, we expand on this foundational work, addressing functionality of the device at each step in the workflow (sample cleanup, labeling, unbound dye removal, and buffer exchange/concentration) and the implications for optimizing labeling efficiency. When compared to other common buffer exchange methodologies, centrifugal diafiltration offered superior performance as measured by four key parameters (process time, desalting capacity, protein recovery, and retention of functional integrity). Originally designed for resin-based affinity purification, the device also provides a platform for up-front antibody purification or albumin carrier removal. Most significantly, by exploiting the rapid kinetics of NHS-based labeling reactions, the process of continuous diafiltration minimizes reaction time and long exposure to excess dye, guaranteeing maximal target labeling while limiting the risks associated with over-labeling. Overall, the device offers a simplified workflow with reduced processing time and hands-on requirements, without sacrificing labeling efficiency, final yield, or conjugate performance. Copyright © 2015 Elsevier B.V. All rights reserved.
Toward MRI microimaging of single biological cells
NASA Astrophysics Data System (ADS)
Seeber, Derek Allan
There is a great advantage in signal to noise ratio (SNR) that can be obtained in nuclear magnetic resonance (NMR) on very small samples (having spatial dimensions ~100 μm or less) if one employs NMR "microcoils" that are of similarly small dimensions. These gains in SNR could enable magnetic resonance imaging (MRI) microscopy with spatial resolutions of ~1-2 μm, much better than currently available. We report the design and testing of an NMR microcoil receiver apparatus, employing solenoidal microcoils of dimensions of tens to hundreds of microns, using an applied field of 9 Tesla (proton frequency 383 MHz). For the smallest receiver coils we attain sensitivity sufficient to observe proton NMR with an SNR of one in a single scan applied to a ~10 μm³ (10 fl) water sample, containing 7 × 10^11 total proton spins. In addition to the NMR applications, microcoils have been applied to MRI, producing images with spatial resolutions as low as 2 μm × 3.5 μm × 14.8 μm on phantom images of rods and beads. This resolution can be further improved. MRI imaging of small sample volumes requires significant hardware modifications and improvements, all of which are discussed. Specifically, MRI microscopy requires very strong (>10 T/m), rapidly switchable triaxial magnetic field gradients. We report the design and construction of such a triaxial gradient system, producing gradients substantially greater than 15 T/m in all three directions, x, y, and z (as high as 50 T/m for the x direction). The gradients are powered by a custom designed power supply capable of providing currents in excess of 200 amps and switching times of less than 5 μs, corresponding to slew rates of greater than 10^7 T/m/s. The gradients are adequately uniform (within 5% over a volume of 600 μm³) and sufficient for microcoil MRI of small samples.
NASA Astrophysics Data System (ADS)
Kittell, David E.; Cummock, Nick R.; Son, Steven F.
2016-08-01
Small scale characterization experiments using only 1-5 g of a baseline ammonium nitrate plus fuel oil (ANFO) explosive are discussed and simulated using an ignition and growth reactive flow model. There exists a strong need for the small scale characterization of non-ideal explosives in order to adequately survey the wide parameter space in sample composition, density, and microstructure of these materials. However, it is largely unknown in the scientific community whether any useful or meaningful result may be obtained from detonation failure, and whether a minimum sample size or level of confinement exists for the experiments. In this work, it is shown that the parameters of an ignition and growth rate law may be calibrated using the small scale data, which is obtained from a 35 GHz microwave interferometer. Calibration is feasible when the samples are heavily confined and overdriven; this conclusion is supported with detailed simulation output, including pressure and reaction contours inside the ANFO samples. The resulting shock wave velocity is most likely a combined chemical-mechanical response, and simulations of these experiments require an accurate unreacted equation of state (EOS) in addition to the calibrated reaction rate. Other experiments are proposed to gain further insight into the detonation failure data, as well as to help discriminate between the role of the EOS and reaction rate in predicting the measured outcome.
Kathryn A. Schoenecker; Mary Kay Watry; Laura E. Ellison; Michael K. Schwartz; Gordon L. Luikart
2015-01-01
Conservation of species requires accurate population estimates. We used genetic markers from feces to determine bighorn sheep abundance for a herd that was hypothesized to be declining and in need of population status monitoring. We sampled from a small but accessible portion of the population's range where animals naturally congregate at a natural mineral lick to test...
NASA Technical Reports Server (NTRS)
Thomas, Jr., Jess Brooks (Inventor)
1999-01-01
The front end in GPS receivers has the functions of amplifying, down-converting, filtering and sampling the received signals. In the preferred embodiment, only two operations, A/D conversion and a sum, bring the signal from RF to filtered quadrature baseband samples. After amplification and filtering at RF, the L1 and L2 signals are each sampled at RF at a high selected subharmonic rate. The subharmonic sample rates are approximately 900 MHz for L1 and 982 MHz for L2. With the selected subharmonic sampling, the A/D conversion effectively down-converts the signal from RF to quadrature components at baseband. The resulting sample streams for L1 and L2 are each reduced to a lower rate with a digital filter, which becomes a straight sum in the simplest embodiment. The frequency subsystem can be very simple, only requiring the generation of a single reference frequency (e.g. 20.46 MHz minus a small offset) and the simple multiplication of this reference up to the subharmonic sample rates for L1 and L2. The small offset in the reference frequency serves the dual purpose of providing an advantageous offset in the down-converted carrier frequency and in the final baseband sample rate.
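The arithmetic behind the sampling-as-downconversion trick can be illustrated numerically. Choosing fs = 4·f/(2m+1) makes consecutive RF samples land 90° apart in carrier phase, so even samples carry ±I and odd samples ±Q. The sketch below is an illustrative model, not the patented design; m = 3 gives fs = 4·1575.42/7 ≈ 900.24 MHz for L1, matching the "approximately 900 MHz" rate quoted above (the sample-sign pattern assumed below holds when 2m+1 ≡ 3 mod 4, as with m = 3):

```python
import math

def subharmonic_rate(f_rf, m):
    """fs = 4*f_rf / (2m+1): the carrier advances (2m+1)*90 degrees per sample."""
    return 4.0 * f_rf / (2 * m + 1)

def recover_phase(f_rf, carrier_phase, m):
    """Take four consecutive samples of cos(2*pi*f*t + phase) at the
    subharmonic rate and read the carrier phase from the aliased I/Q pair.
    For 2m+1 = 3 (mod 4) the sample pattern is [+I, +Q, -I, -Q]."""
    fs = subharmonic_rate(f_rf, m)
    x = [math.cos(2 * math.pi * f_rf * (k / fs) + carrier_phase)
         for k in range(4)]
    i = (x[0] - x[2]) / 2.0   # in-phase component, cos(phase)
    q = (x[1] - x[3]) / 2.0   # quadrature component, sin(phase)
    return math.atan2(q, i)
```

The L2 rate in the text follows the same recipe with 2m+1 = 5: 4·1227.6/5 = 982.08 MHz, i.e. the quoted "approximately 982 MHz".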
Relationship of Complexity Factor Ratings With Operational Errors
2007-05-01
losing information about their interrelationships. Prior to the analysis, the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was examined to...test whether partial correlations among the variables were small. KMO values of .6 and above are required for a good solution. A KMO of .87 was
Phase-Enhanced 3D Snapshot ISAR Imaging and Interferometric SAR
2009-12-28
generalized technique requires the precession angle θp be relatively small [see Eq. (28)]. However, the noncoherent snapshot image equations remain...valid beyond this precession limit, and the unique sampling grid developed is still very useful for 3D imaging of the noncoherent snapshot equation
Bayes plus Brass: Estimating Total Fertility for Many Small Areas from Sparse Census Data
Schmertmann, Carl P.; Cavenaghi, Suzana M.; Assunção, Renato M.; Potter, Joseph E.
2013-01-01
Small-area fertility estimates are valuable for analysing demographic change, and important for local planning and population projection. In countries lacking complete vital registration, however, small-area estimates are possible only from sparse survey or census data that are potentially unreliable. Such estimation requires new methods for old problems: procedures must be automated if thousands of estimates are required, they must deal with extreme sampling variability in many areas, and they should also incorporate corrections for possible data errors. We present a two-step algorithm for estimating total fertility in such circumstances, and we illustrate by applying the method to 2000 Brazilian Census data for over five thousand municipalities. Our proposed algorithm first smoothes local age-specific rates using Empirical Bayes methods, and then applies a new variant of Brass’s P/F parity correction procedure that is robust under conditions of rapid fertility decline. PMID:24143946
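The first step of the algorithm, Empirical Bayes smoothing, can be sketched as gamma-Poisson shrinkage: fit a gamma prior to the ensemble of local rates by moment matching, then replace each raw rate with its posterior mean, a precision-weighted average of the local rate and the ensemble mean. This is a generic illustration under assumed moment-matching formulas, not the authors' exact estimator (and it omits the Brass P/F step entirely):

```python
def eb_smooth(events, exposures):
    """Gamma-Poisson Empirical Bayes shrinkage of area-level rates.
    A Gamma(a, b) prior is moment-matched across areas; the posterior mean
    (x + a) / (n + b) is a convex combination of the raw rate x/n and
    the prior mean a/b."""
    k = len(events)
    rates = [x / n for x, n in zip(events, exposures)]
    mu = sum(rates) / k
    var = sum((r - mu) ** 2 for r in rates) / k
    # subtract the average Poisson sampling variance mu/n from the observed
    # spread to estimate the true between-area variance (floored at ~0)
    noise = mu * sum(1.0 / n for n in exposures) / k
    tau2 = max(var - noise, 1e-12)
    b = mu / tau2
    a = mu * b
    return [(x + a) / (n + b) for x, n in zip(events, exposures)]
```

Areas with large exposure keep essentially their raw rate, while sparsely observed areas are pulled toward the ensemble mean, which is exactly the behaviour needed when estimating fertility for thousands of small municipalities with extreme sampling variability.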
The energy performance of thermochromic glazing
NASA Astrophysics Data System (ADS)
Diamantouros, Pavlos
This study investigated the energy performance of thermochromic glazing. It was done by simulating a model of a small building in a highly advanced computer program (EnergyPlus - U.S. DOE). The physical attributes of the thermochromic samples examined came from actual laboratory samples fabricated in UCL's Department of Chemistry (Prof I. P. Parkin). It was found that they can substantially reduce cooling loads while requiring the same heating loads as a high-end low-e double glazing. The reductions in annual cooling energy required were in the 20%-40% range depending on sample, location and building layout. A series of sensitivity analyses showed the importance of switching temperature and emissivity factor in the performance of the glazing. Finally, an ideal pane was designed to explore the limits this technology has to offer.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-11
...EPA is issuing a proposed rule to amend the diesel sulfur regulations to allow refiners, importers, distributors, and retailers of highway diesel fuel the option to use an alternative affirmative defense if the Agency finds highway diesel fuel samples above the specified sulfur standard at retail facilities. This rule also proposes to amend the gasoline benzene regulations to allow disqualified small refiners the same opportunity to generate gasoline benzene credits as that afforded to non-small refiners.
A 4-spot time-of-flight anemometer for small centrifugal compressor velocity measurements
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Skoch, Gary J.
1992-01-01
The application of laser anemometry techniques in turbomachinery facilities is a challenging dilemma requiring an anemometer system with special qualities. Here, we describe the use of a novel laser anemometry technique applied to a small 4.5 kg/s, 4:1 pressure ratio centrifugal compressor. Sample velocity profiles across the blade pitch are presented for a single location along the rotor. The results of the intra-blade passage velocity measurements will ultimately be used to verify CFD 3-D viscous code predictions.
Status of robotic mission studies for the Space Exploration Initiative - 1991
NASA Technical Reports Server (NTRS)
Bourke, Roger D.; Dias, William C.; Golombek, Matthew P.; Pivirotto, Donna L.; Sturms, Francis M.; Hubbard, G. S.
1991-01-01
Results of studies of robotic missions to the moon and Mars planned under the U.S. Space Exploration Initiative are summarized. First, an overall strategy for small robotic missions to accomplish the information gathering required by human missions is reviewed, and the principal robotic mission requirements are discussed. The discussion covers the following studies: the Lunar Observer, the Mars Environmental Survey mission, Mars Sample Return missions using microtechnology, and payloads.
Asymmetric masks for laboratory-based X-ray phase-contrast imaging with edge illumination.
Endrizzi, Marco; Astolfo, Alberto; Vittoria, Fabio A; Millard, Thomas P; Olivo, Alessandro
2016-05-05
We report on an asymmetric mask concept that enables X-ray phase-contrast imaging without requiring any movement in the system during data acquisition. The method is compatible with laboratory equipment, namely a commercial detector and a rotating anode tube. The only motion required is that of the object under investigation which is scanned through the imaging system. Two proof-of-principle optical elements were designed, fabricated and experimentally tested. Quantitative measurements on samples of known shape and composition were compared to theory with good agreement. The method is capable of measuring the attenuation, refraction and (ultra-small-angle) X-ray scattering, does not have coherence requirements and naturally adapts to all those situations in which the X-ray image is obtained by scanning a sample through the imaging system.
Investigation of the tone-burst tube for duct lining attenuation measurement
NASA Technical Reports Server (NTRS)
Soffel, A. R.; Morrow, P. F.
1972-01-01
The tone burst technique makes practical the laboratory evaluation of potential inlet and discharge duct treatments. Tone burst apparatus requires only simple machined parts and standard components. Small, simply made lining samples are quickly and easily installed in the system. Two small electromagnetic loudspeaker drivers produce peak sound pressure levels of over 166 dB in the 3-square-inch sample duct. Air pumps available in most laboratories can produce air flows of over ±Mach 0.3 in the sample duct. The technique uses short shaped pulses of sound propagated down a progressive wave tube containing the sample duct. The peak pressure level output of the treated duct is compared with the peak pressure level output of a substituted reference duct. The difference between the levels is the attenuation or insertion loss of the treated duct. Evaluations of resonant absorber linings by the tone burst technique check attenuation values predicted by empirical formulas based on full scale ducts.
Mucosal flora of the small intestine and the effect of preoperative antibiotics.
Elmes, M E; Howells, C H; Lowe, G H
1984-01-01
Samples of mucosa from the small intestines of 100 patients undergoing intestinal surgery were examined bacteriologically. Sixty-four patients had received chemotherapy, 12 for more than 24 h before operation. Most of the jejunal samples were sterile unless there was a carcinoma, previous surgery, or potential intestinal stasis. Ileal mucosa was more likely to contain intestinal organisms. Most of the strains isolated were sensitive in vitro to the antibiotics given in vivo, but short-term treatment may not have allowed sufficient time for the treatment to have become effective. The findings suggest that antibiotics are not needed for most operations on the duodenum or jejunum but may be required for operations on the ileum. PMID:6501588
76 FR 10008 - North Pacific Fishery Management Council; Public Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-23
... electronic monitoring system design for less than 60 foot vessels that are included under the observer restructuring action. Electronic monitoring may be a potential alternative to an observer for some small vessels that will be subject to sampling and monitoring requirements under the new observer restructuring...
Development of a robust field technique to quantify the air-void distribution in fresh concrete.
DOT National Transportation Integrated Search
2013-07-01
In order to make concrete frost durable it is common to provide a small and well distributed air void system. Current measuring techniques require weeks to complete on hardened and polished samples of concrete. This report presents the results of a n...
Applying information theory to small groups assessment: emotions and well-being at work.
García-Izquierdo, Antonio León; Moreno, Blanca; García-Izquierdo, Mariano
2010-05-01
This paper explores and analyzes the relations between emotions and well-being in a sample of aviation personnel, passenger crew (flight attendants). There is an increasing interest in studying the influence of emotions and their role as psychosocial factors in the work environment, as they can act as facilitators or buffers. Testing theoretical models with traditional parametric techniques requires a large sample size for efficient estimation of the coefficients that quantify the relations between variables. Since the available sample is small, the most common size in European enterprises, we used the maximum entropy principle to explore the emotions that are involved in the psychosocial risks. The analyses show that this method takes advantage of the limited information available and guarantees an optimal estimation, the results of which are coherent with theoretical models and numerous empirical studies on emotions and well-being.
Differential sampling for fast frequency acquisition via adaptive extended least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1987-01-01
This paper presents a differential signal model along with appropriate sampling techniques for least squares estimation of the frequency and frequency derivatives and possibly the phase and amplitude of a sinusoid received in the presence of noise. The proposed algorithm is recursive in measurements and thus the computational requirement increases only linearly with the number of measurements. The dimension of the state vector in the proposed algorithm does not depend upon the number of measurements and is quite small, typically around four. This is an advantage when compared to previous algorithms wherein the dimension of the state vector increases monotonically with the product of the frequency uncertainty and the observation period. Such a computational simplification may possibly result in some loss of optimality. However, by applying the sampling techniques of the paper such a possible loss in optimality can be made small.
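The fixed-dimension recursion described above is, at its core, a recursive least-squares (RLS) update: the state estimate and its covariance are refreshed once per measurement at constant cost, so total work grows only linearly with the number of samples. A minimal two-parameter sketch (plain RLS on a linear phase model, not the paper's full extended algorithm with frequency derivatives):

```python
def rls_update(theta, P, h, y):
    """One recursive least-squares step for a 2-state model y ~ h . theta.
    Cost per measurement is fixed by the state dimension, not by how many
    measurements have already been processed."""
    # gain K = P h / (1 + h^T P h)
    ph = [P[0][0] * h[0] + P[0][1] * h[1],
          P[1][0] * h[0] + P[1][1] * h[1]]
    denom = 1.0 + h[0] * ph[0] + h[1] * ph[1]
    k = [ph[0] / denom, ph[1] / denom]
    # innovation and state update
    err = y - (h[0] * theta[0] + h[1] * theta[1])
    theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
    # covariance update P <- P - K (h^T P)
    hp = [h[0] * P[0][0] + h[1] * P[1][0],
          h[0] * P[0][1] + h[1] * P[1][1]]
    P = [[P[0][0] - k[0] * hp[0], P[0][1] - k[0] * hp[1]],
         [P[1][0] - k[1] * hp[0], P[1][1] - k[1] * hp[1]]]
    return theta, P
```

Feeding in noiseless phase samples phase(t) = phi0 + 2*pi*f*t with regressor h = [1, t] recovers [phi0, 2*pi*f] after a handful of updates; the state never grows, regardless of the observation length.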
An opportunity cost approach to sample size calculation in cost-effectiveness analysis.
Gafni, A; Walter, S D; Birch, S; Sendi, P
2008-01-01
The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) as a method to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of the ICER-based approaches to sample size calculations can be challenged. Alternative methods for determining improvements in efficiency that do not depend upon ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements represent a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers. Copyright (c) 2007 John Wiley & Sons, Ltd.
On the analysis of time-of-flight spin-echo modulated dark-field imaging data
NASA Astrophysics Data System (ADS)
Sales, Morten; Plomp, Jeroen; Bouwman, Wim G.; Tremsin, Anton S.; Habicht, Klaus; Strobl, Markus
2017-06-01
Spin-Echo Modulated Small Angle Neutron Scattering with spatial resolution, i.e. quantitative Spin-Echo Dark Field Imaging, is an emerging technique coupling neutron imaging with spatially resolved quantitative small angle scattering information. However, the currently achieved relatively large modulation periods, of the order of millimeters, are superimposed on the images of the samples. So far this has required independent reduction and analysis of the image and scattering information encoded in the measured data, involving extensive curve-fitting routines. Besides requiring a priori decisions that potentially limit the extractable information content, this also hinders a straightforward judgment of data quality. In contrast, we propose a significantly simplified routine applied directly to the measured data, which not only allows an immediate first assessment of data quality and defers decisions on potentially information-limiting reduction steps to a later, better-informed stage, but also, as the results suggest, yields generally better analyses. In addition, the method makes it possible to drop the spatial-resolution detector requirement for non-spatially resolved Spin-Echo Modulated Small Angle Neutron Scattering.
Aloisio, Michelangelo; Bortot, Barbara; Gandin, Ilaria; Severini, Giovanni Maria; Athanasakis, Emmanouil
2017-02-01
Chimerism status evaluation of post-allogeneic hematopoietic stem cell transplantation samples is essential to predict post-transplant relapse. The most commonly used technique capable of detecting small increments of chimerism is quantitative real-time PCR. Although this method is already used in several laboratories, previously described protocols often lack sensitivity and the amount of the DNA required for each chimerism analysis is too high. In the present study, we compared a novel semi-nested allele-specific real-time PCR (sNAS-qPCR) protocol with our in-house standard allele-specific real-time PCR (gAS-qPCR) protocol. We selected two genetic markers and analyzed technical parameters (slope, y-intercept, R², and standard deviation) useful for determining the performance of the two protocols. The sNAS-qPCR protocol showed better sensitivity and precision. Moreover, the sNAS-qPCR protocol requires, as input, only 10 ng of DNA, which is at least 10-fold less than the gAS-qPCR protocols described in the literature. Finally, the proposed sNAS-qPCR protocol could prove very useful for performing chimerism analysis with a small amount of DNA, as in the case of blood cell subsets.
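The technical parameters compared in the study (slope, y-intercept, R²) come from an ordinary least-squares fit of Cq against log10 template input, and the slope maps to amplification efficiency. A minimal sketch, using a hypothetical dilution series rather than the study's data:

```python
def fit_standard_curve(log10_inputs, cq_values):
    """Ordinary least-squares fit of Cq = slope * log10(input) + intercept.
    Returns (slope, intercept, r_squared), the curve parameters compared above."""
    n = len(log10_inputs)
    mx = sum(log10_inputs) / n
    my = sum(cq_values) / n
    sxx = sum((x - mx) ** 2 for x in log10_inputs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_inputs, cq_values))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(log10_inputs, cq_values))
    ss_tot = sum((y - my) ** 2 for y in cq_values)
    return slope, intercept, 1.0 - ss_res / ss_tot

def efficiency(slope):
    """Amplification efficiency from the slope; a slope of about -3.32
    corresponds to 100% efficiency (the template doubles every cycle)."""
    return 10 ** (-1.0 / slope) - 1.0

# Hypothetical 10-fold dilution series (log10 DNA input vs measured Cq):
logs = [0.0, 1.0, 2.0, 3.0]
cqs = [33.1, 29.7, 26.4, 23.0]
slope, intercept, r2 = fit_standard_curve(logs, cqs)
print(round(slope, 2), round(r2, 3))  # -3.36 1.0
```

These are the standard-curve diagnostics by which qPCR chimerism protocols such as sNAS-qPCR and gAS-qPCR are typically benchmarked; the specific values here are illustrative only.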
Torres, Leticia; Liu, Yue; Guitreau, Amy; Yang, Huiping; Tiersch, Terrence R
2017-12-01
Quality control (QC) is essential for reproducible and efficient functioning of germplasm repositories. However, many biomedical fish models present significant QC challenges due to small body sizes (<5 cm) and miniscule sperm volumes (<5 μL). Using minimal volumes of sperm, we used zebrafish to evaluate common QC endpoints as surrogates for fertilization success along sequential steps of cryopreservation. First, concentrations of calibration bead suspensions were evaluated with a Makler® counting chamber by using different sample volumes and mixing methods. For sperm analysis, samples were initially diluted at a 1:30 ratio with Hanks' balanced salt solution (HBSS). Motility was evaluated by using different ratios of sperm and activation medium, and membrane integrity was analyzed with flow cytometry at different concentrations. Concentration and sperm motility could be confidently estimated by using volumes as small as 1 μL, whereas membrane integrity required a minimum of 2 μL (at 1 × 10⁶ cells/mL). Thus, <5 μL of sperm suspension (after dilution to 30-150 μL with HBSS) was required to evaluate sperm quality by using three endpoints. Sperm quality assessment using a combination of complementary endpoints enhances QC efforts during cryopreservation, increasing reliability and reproducibility, and reducing waste of time and resources.
Data acquisition channel apparatus
NASA Astrophysics Data System (ADS)
Higgins, C. H.; Skipper, J. D.
1985-10-01
Discussed is a hybrid integrated circuit data acquisition channel apparatus employing an operational amplifier fed by a low-current differential bipolar transistor preamplifier, having separate feedback-gain and signal-gain determining elements, and providing an amplified signal output to sample-and-hold and analog-to-digital converter circuits. The disclosed apparatus operates with low energy and small space requirements and is capable of operating without the sample-and-hold circuit where the nature of the applied input signal permits.
Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests
Duncanson, L.; Rourke, O.; Dubayah, R.
2015-01-01
Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate between 10,000 and more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from −4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation. PMID:26598233
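The sample-size sensitivity reported here can be illustrated with a generic simulation: fit a power-law allometry h = a·r^b (log-log least squares) to repeated synthetic data sets of different sizes and compare the scatter of the fitted exponent. This is a sketch of the general phenomenon, not a reproduction of the paper's LiDAR analysis; all parameter values are invented:

```python
import math
import random

random.seed(1)

def fit_allometry(radii, heights):
    """Log-log OLS fit of h = a * r**b; returns (a, b)."""
    xs = [math.log(r) for r in radii]
    ys = [math.log(h) for h in heights]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

def simulate(n, a_true=4.0, b_true=0.8, noise=0.3):
    """One synthetic height-vs-crown-radius data set with lognormal scatter."""
    radii = [random.uniform(0.5, 6.0) for _ in range(n)]
    heights = [a_true * r ** b_true * math.exp(random.gauss(0, noise))
               for r in radii]
    return fit_allometry(radii, heights)

# Fitted exponents scatter far more for small samples than for large ones.
b_small = [simulate(20)[1] for _ in range(200)]
b_large = [simulate(2000)[1] for _ in range(200)]
spread = lambda v: max(v) - min(v)
print(spread(b_small) > spread(b_large))  # True
```

With only 20 trees per fit, individual estimates of the exponent wander widely around the true value, which is the mechanism behind the biased site-level biomass figures quoted above.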
Selective Distance-Based K+ Quantification on Paper-Based Microfluidics.
Gerold, Chase T; Bakker, Eric; Henry, Charles S
2018-04-03
In this study, paper-based microfluidic devices (μPADs) capable of K⁺ quantification in aqueous samples, as well as in human serum, using both colorimetric and distance-based methods are described. A lipophilic phase containing potassium ionophore I (valinomycin) was utilized to achieve highly selective quantification of K⁺ in the presence of Na⁺, Li⁺, and Mg²⁺ ions. Successful addition of a suspended lipophilic phase to a wax-printed paper-based device is described and offers a solution to current approaches that rely on organic solvents, which damage wax barriers. The approach provides an avenue for future alkali/alkaline quantification utilizing μPADs. Colorimetric spot tests allowed for K⁺ quantification from 0.1-5.0 mM using only 3.00 μL of sample solution. Selective distance-based quantification required small sample volumes (6.00 μL) and gave responses sensitive enough to distinguish between 1.0 and 2.5 mM of sample K⁺. μPADs using distance-based methods were also capable of differentiating between 4.3 and 6.9 mM K⁺ in human serum samples. Distance-based methods required no digital analysis, electronic hardware, or pumps; any steps required for quantification could be carried out using the naked eye.
Fukuda, Masakazu; Watanabe, Atsushi; Hayasaka, Akira; Muraoka, Masaru; Hori, Yuji; Yamazaki, Tadao; Imaeda, Yoshimi; Koga, Akiko
2017-03-01
In this study, we investigated the concentration range in which self-association starts to form in humanized IgG monoclonal antibody (mAb) solutions. Furthermore, on the basis of the results, we developed a practical method of screening for low-viscosity antibody solutions by using small-angle X-ray scattering (SAXS) measurements utilizing small quantities of samples. With lower-viscosity mAb3, self-association was not detected in the range of 1-80 mg/mL. With higher-viscosity mAb1, on the other hand, self-association was detected in the range of 10-20 mg/mL and was clearly enhanced by a decrease in temperature. The viscosities of mAb solutions at 160, 180, and 200 mg/mL at 25°C quantitatively correlated very well with the particle size parameters obtained by SAXS measurements of mAb solutions at 15 mg/mL at 5°C. The quantity of mAb sample required for the SAXS measurements was only 0.15 mg, which is about one-hundredth of that required for actual viscosity measurements at a high concentration, and such quantities could be available even at an early stage of development. In conclusion, the SAXS analysis method proposed in this study is a valuable tool for the development of concentrated mAb therapeutics with high manufacturability and high usability for subcutaneous injection. Copyright © 2016 Elsevier B.V. All rights reserved.
Detection of an amphiphilic biosample in a paper microchannel based on length.
Chen, Yu-Tzu; Yang, Jing-Tang
2015-01-01
We developed a simple method to achieve semiquantitative detection of an amphiphilic biosample by measuring the length of flow on a paper-based microfluidic analytical device (μPAD). When an amphiphilic sample was dripped into a straight microchannel defined by a printed wax barrier (hydrophobic) on filter paper (hydrophilic), the length of flow was governed by the interaction between the sample, the filter-paper channel and the wax barrier. The flow length decreased with increasing concentration of an amphiphilic sample because of adsorption of the sample on the hydrophobic barrier. Measurement of the flow length thus enabled determination of the concentration of the amphiphilic sample. The samples tested included surfactants (Tween 20 and Triton X-100), oligonucleotides (DNA), bovine serum albumin (BSA), human albumin, nitrite, glucose and low-density lipoprotein (LDL). The results show that measurement of the flow length directly determined the concentration of an amphiphilic sample, whereas a non-amphiphilic sample was not amenable to this method. The proposed method features the advantages of low cost, simplicity, convenience, directness, rapidity (<5 min) and a requirement of only a small volume (5 μL) of sample, with prospective applications in developing areas and in testing at the point of care (POCT) at sites near patients.
Invited Review Small is beautiful: The analysis of nanogram-sized astromaterials
NASA Astrophysics Data System (ADS)
Zolensky, M. E.; Pieters, C.; Clark, B.; Papike, J. J.
2000-01-01
The capability of modern methods to characterize ultra-small samples is well established from analysis of interplanetary dust particles (IDPs), interstellar grains recovered from meteorites, and other materials requiring ultra-sensitive analytical capabilities. Powerful analytical techniques are available that require, under favorable circumstances, single particles of only a few nanograms for entire suites of fairly comprehensive characterizations. A returned sample of >1,000 particles with a total mass of just one microgram permits comprehensive quantitative geochemical measurements that are impractical to carry out in situ by flight instruments. The main goal of this paper is to describe the state-of-the-art in microanalysis of astromaterials. Given that we can analyze fantastically small quantities of asteroids, comets, etc., we have to ask ourselves how representative microscopic samples are of bodies that measure a few to many km across. With the Galileo flybys of Gaspra and Ida, it is now recognized that even very small airless bodies have indeed developed a particulate regolith. Acquiring a sample of the bulk regolith, a simple sampling strategy, provides two critical pieces of information about the body. Regolith samples are excellent bulk samples since they normally contain all the key components of the local environment, albeit in particulate form. Furthermore, since this fine fraction dominates remote measurements, regolith samples also provide information about surface alteration processes and are a key link to remote sensing of other bodies. Studies indicate that a statistically significant number of nanogram-sized particles should be able to characterize the regolith of a primitive asteroid, although the presence of larger components within even primitive meteorites (e.g. Murchison), such as chondrules, CAIs and large crystal fragments, points out the limitations of using data obtained from nanogram-sized samples to characterize entire primitive asteroids. However, most important asteroidal geological processes have left their mark on the matrix, since this is the finest-grained portion and therefore most sensitive to chemical and physical changes. Thus, the following information can be learned from this fine grain-size fraction alone: (1) mineral paragenesis; (2) regolith processes; (3) bulk composition; (4) conditions of thermal and aqueous alteration (if any); (5) relationships to planets, comets, and meteorites (via isotopic analyses, including oxygen); (6) abundance of water and hydrated material; (7) abundance of organics; (8) history of volatile mobility; (9) presence and origin of presolar and/or interstellar material. Most of this information can even be obtained from dust samples from bodies for which nanogram-sized samples are not truly representative. Future advances in sensitivity and accuracy of laboratory analytical techniques can be expected to enhance the science value of nano- to microgram-sized samples even further. This highlights a key advantage of sample returns - that the most advanced analysis techniques can always be applied in the laboratory, and that well-preserved samples are available for future investigations.
Satellite orbit and data sampling requirements
NASA Technical Reports Server (NTRS)
Rossow, William
1993-01-01
Climate forcings and feedbacks vary over a wide range of time and space scales. The operation of non-linear feedbacks can couple variations at widely separated time and space scales and cause climatological phenomena to be intermittent. Consequently, monitoring of global, decadal changes in climate requires global observations that cover the whole range of space-time scales and are continuous over several decades. The sampling of smaller space-time scales must have sufficient statistical accuracy to measure the small changes in the forcings and feedbacks anticipated in the next few decades, while continuity of measurements is crucial for unambiguous interpretation of climate change. Shorter records of monthly and regional (500-1000 km) measurements with similar accuracies can also provide valuable information about climate processes, when 'natural experiments' such as large volcanic eruptions or El Niños occur. In this section existing satellite datasets and climate model simulations are used to test the satellite orbits and sampling required to achieve accurate measurements of changes in forcings and feedbacks at monthly frequency and 1000 km (regional) scale.
Mine, Madisa; Nkoane, Tapologo; Sebetso, Gaseene; Sakyi, Bright; Makhaola, Kgomotso; Gaolathe, Tendani
2013-12-01
The sample requirement of 1 mL for the Roche COBAS AmpliPrep/COBAS TaqMan HIV-1 test, version 2.0 (CAP CTM HIV v2.0) limits its utility in measuring plasma HIV-1 RNA levels for small-volume samples from children infected with HIV-1. Viral load monitoring is the standard of care for HIV-1-infected patients on antiretroviral therapy in Botswana. The study aimed to validate the dilution of small-volume samples with phosphate-buffered saline (1× PBS) when quantifying HIV-1 RNA in patient plasma. HIV RNA concentrations were determined in undiluted and diluted pairs of samples comprising panels of quality assessment standards (n=52) as well as patient samples (n=325). There was strong correlation (R² of 0.98 and 0.95) within the dynamic range of the CAP CTM HIV v2.0 test between undiluted and diluted samples from quality assessment standards and patients, respectively. Bland-Altman analysis of the difference between viral load measurements of diluted and undiluted pairs of quality assessment standards and patient samples showed that the 95% limits of agreement were between -0.40 log10 and 0.49 log10. This difference was within the 0.5 log10 that is generally considered normal assay variation of plasma RNA levels. Dilution of samples with 1× PBS produced viral load measurements comparable to those from undiluted samples. Copyright © 2013 Elsevier B.V. All rights reserved.
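The 95% limits of agreement quoted above follow the standard Bland-Altman construction: mean difference ± 1.96 × SD of the paired differences. A minimal sketch with hypothetical paired log10 viral loads (not the study's data):

```python
import math

def bland_altman_limits(a, b):
    """95% limits of agreement for paired measurements (here, log10 viral
    loads of diluted vs undiluted aliquots): mean diff +/- 1.96 * SD of diffs."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean - 1.96 * sd, mean + 1.96 * sd

# Hypothetical paired log10 measurements:
undiluted = [3.1, 4.2, 5.0, 2.8, 4.6, 3.9]
diluted = [3.0, 4.3, 4.9, 2.9, 4.5, 4.0]
lo, hi = bland_altman_limits(diluted, undiluted)
print(round(lo, 2), round(hi, 2))  # -0.21 0.21
```

The validation criterion in the abstract is simply that both limits fall inside ±0.5 log10, the conventional bound on normal assay variation.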
Duff, Anthony P.; Durand, Dominique; Gabel, Frank; Hendrickson, Wayne A.; Hura, Greg L.; Jacques, David A.; Kirby, Nigel M.; Kwan, Ann H.; Pérez, Javier; Pollack, Lois; Ryan, Timothy M.; Sali, Andrej; Schneidman-Duhovny, Dina; Vachette, Patrice; Westbrook, John
2017-01-01
In 2012, preliminary guidelines were published addressing sample quality, data acquisition and reduction, presentation of scattering data and validation, and modelling for biomolecular small-angle scattering (SAS) experiments. Biomolecular SAS has since continued to grow and authors have increasingly adopted the preliminary guidelines. In parallel, integrative/hybrid determination of biomolecular structures is a rapidly growing field that is expanding the scope of structural biology. For SAS to contribute maximally to this field, it is essential to ensure open access to the information required for evaluation of the quality of SAS samples and data, as well as the validity of SAS-based structural models. To this end, the preliminary guidelines for data presentation in a publication are reviewed and updated, and the deposition of data and associated models in a public archive is recommended. These guidelines and recommendations have been prepared in consultation with the members of the International Union of Crystallography (IUCr) Small-Angle Scattering and Journals Commissions, the Worldwide Protein Data Bank (wwPDB) Small-Angle Scattering Validation Task Force and additional experts in the field. PMID:28876235
Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.
Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra
2016-11-20
The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
Vogel, Michael W; Giorni, Andrea; Vegh, Viktor; Pellicer-Guridi, Ruben; Reutens, David C
2016-01-01
We studied the feasibility of generating the variable magnetic fields required for ultra-low field nuclear magnetic resonance relaxometry with dynamically adjustable permanent magnets. Our motivation was to substitute traditional electromagnets by distributed permanent magnets, increasing system portability. The finite element method (COMSOL®) was employed for the numerical study of a small permanent magnet array to calculate achievable magnetic field strength, homogeneity, switching time and magnetic forces. A manually operated prototype was simulated and constructed to validate the numerical approach and to verify the generated magnetic field. A concentric small permanent magnet array can be used to generate strong sample pre-polarisation and variable measurement fields for ultra-low field relaxometry via simple prescribed magnet rotations. Using the array, it is possible to achieve a pre-polarisation field strength above 100 mT and variable measurement fields ranging from 20 to 50 μT with 200 ppm absolute field homogeneity within a field-of-view of 5 × 5 × 5 cm. A dynamic small permanent magnet array can generate multiple highly homogeneous magnetic fields required in ultra-low field nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI) instruments. This design can significantly reduce the volume and energy requirements of traditional systems based on electromagnets, improving portability considerably.
NASA Astrophysics Data System (ADS)
Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.
2017-12-01
The Hoek-Brown empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimates of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimates. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we argue that the minimum number of required samples depends on rock type and should correspond to an acceptable level of uncertainty in the estimates. A comparison of the results from our analysis with actual rock strength data also shows that the probability of obtaining reliable strength parameter estimates from small samples may be very low. We further discuss the impact of this on the ongoing implementation of reliability-based design protocols and conclude with suggestions for improvement.
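The Monte Carlo exercise described above can be sketched generically. For intact rock the criterion is σ1 = σ3 + σc·sqrt(m·σ3/σc + 1), and a common textbook device is to linearise it as y = (σ1 − σ3)² = (m·σc)·σ3 + σc², so that ordinary least squares recovers both parameters. The following is a sketch under invented parameter values, not the paper's procedure (which may use a different fitting scheme):

```python
import math
import random

def fit_hoek_brown(sig3, sig1):
    """Fit intact-rock Hoek-Brown parameters (m, sigma_c) via the standard
    linearisation y = (sigma1 - sigma3)^2 = (m*sigc)*sigma3 + sigc^2."""
    ys = [(s1 - s3) ** 2 for s3, s1 in zip(sig3, sig1)]
    n = len(sig3)
    mx, my = sum(sig3) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(sig3, ys)) / \
        sum((x - mx) ** 2 for x in sig3)
    intercept = my - slope * mx
    sigc = math.sqrt(max(intercept, 1e-9))  # clamp guards degenerate tiny fits
    return slope / sigc, sigc

random.seed(0)

def simulate_fit(n, m_true=10.0, sigc_true=100.0, noise=5.0):
    """One synthetic triaxial programme of n tests with Gaussian strength noise."""
    s3 = [random.uniform(0.0, 30.0) for _ in range(n)]
    s1 = [x + sigc_true * math.sqrt(m_true * x / sigc_true + 1)
          + random.gauss(0, noise) for x in s3]
    return fit_hoek_brown(s3, s1)

# Estimates of m from small programmes (n = 5, typical of triaxial testing)
# scatter far more than from large ones (n = 100):
m_small = [simulate_fit(5)[0] for _ in range(300)]
m_large = [simulate_fit(100)[0] for _ in range(300)]
spread = lambda v: max(v) - min(v)
print(spread(m_small) > spread(m_large))  # True
```

The widening scatter at n = 5 is exactly the small-sample uncertainty the paper quantifies, and it propagates directly into reliability-based design inputs.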
Small-angle solution scattering using the mixed-mode pixel array detector.
Koerner, Lucas J; Gillilan, Richard E; Green, Katherine S; Wang, Suntao; Gruner, Sol M
2011-03-01
Solution small-angle X-ray scattering (SAXS) measurements were obtained using a 128 × 128 pixel X-ray mixed-mode pixel array detector (MMPAD) with an 860 µs readout time. The MMPAD offers advantages for SAXS experiments: a pixel full-well of >2 × 10⁷ 10 keV X-rays, a maximum flux rate of 10⁸ X-rays pixel⁻¹ s⁻¹, and a sub-pixel point-spread function. Data from the MMPAD were quantitatively compared with data from a charge-coupled device (CCD) fiber-optically coupled to a phosphor screen. MMPAD solution SAXS data from lysozyme solutions were of equal or better quality than data captured by the CCD. The read-noise (normalized by pixel area) of the MMPAD was less than that of the CCD by an average factor of 3.0. Short sample-to-detector distances were required owing to the small MMPAD area (19.2 mm × 19.2 mm), and were revealed to be advantageous with respect to detector read-noise. As predicted by the Shannon sampling theory and confirmed by the acquisition of lysozyme solution SAXS curves, the MMPAD at short distances is capable of sufficiently sampling a solution SAXS curve for protein shape analysis. The readout speed of the MMPAD was demonstrated by continuously monitoring lysozyme sample evolution as radiation damage accumulated. These experiments prove that a small suitably configured MMPAD is appropriate for time-resolved solution scattering measurements.
NASA Technical Reports Server (NTRS)
Morgera, S. D.; Cooper, D. B.
1976-01-01
The experimental observation that a surprisingly small sample size vis-à-vis dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates, obtained by a recursive stochastic algorithm, of the inverse of the filter input data covariance matrix. The SIR performance versus sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input data stochastic process.
Touch and Go Surface Sampler (TGSS)
NASA Technical Reports Server (NTRS)
Gorevan, S. P.; Rafeek, S.
2001-01-01
The Touch and Go Surface Sampler (TGSS) is a new class of planetary and small body sample acquisition tool that can be used for the surface exploration of Europa, Titan and comets. TGSS in its basic configuration consists of a high-speed sampling head attached to the end of a flexible shaft. The sampling head consists of counter-rotating cutters that rotate at speeds of 3,000 to 15,000 RPM. The attractive feature of this touch-and-go type sampler is that there is no requirement for a lander-type spacecraft. Additional information is contained in the original extended abstract.
How many stakes are required to measure the mass balance of a glacier?
Fountain, A.G.; Vecchia, A.
1999-01-01
Glacier mass balance is estimated for South Cascade Glacier and Maclure Glacier using a one-dimensional regression of mass balance with altitude as an alternative to the traditional approach of contouring mass balance values. One attractive feature of regression is that it can be applied to sparse data sets where contouring is not possible and can provide an objective error of the resulting estimate. Regression methods yielded mass balance values equivalent to contouring methods. The effect of the number of mass balance measurements on the final value for the glacier showed that sample sizes as small as five stakes provided reasonable estimates, although the error estimates were greater than for larger sample sizes. Different spatial patterns of measurement locations showed no appreciable influence on the final value as long as different surface altitudes were intermittently sampled over the altitude range of the glacier. Two different regression equations were examined, a quadratic, and a piecewise linear spline, and comparison of results showed little sensitivity to the type of equation. These results point to the dominant effect of the gradient of mass balance with altitude of alpine glaciers compared to transverse variations. The number of mass balance measurements required to determine the glacier balance appears to be scale invariant for small glaciers and five to ten stakes are sufficient.
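One of the two regression forms examined above, the quadratic profile b(z) = c0 + c1·z + c2·z², combined with an area-weighted integration over altitude bands, can be sketched as follows. The stake values and hypsometry below are hypothetical, not the South Cascade or Maclure data:

```python
def polyfit2(z, b):
    """Least-squares quadratic fit b(z) = c0 + c1*z + c2*z^2 via the 3x3
    normal equations, solved by Gaussian elimination with partial pivoting."""
    m = len(z)
    X = [[1.0, zi, zi * zi] for zi in z]
    A = [[sum(X[k][i] * X[k][j] for k in range(m)) for j in range(3)]
         for i in range(3)]
    rhs = [sum(X[k][i] * b[k] for k in range(m)) for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    coeff = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coeff[i] = (rhs[i] - sum(A[i][j] * coeff[j]
                                 for j in range(i + 1, 3))) / A[i][i]
    return coeff

def glacier_balance(coeffs, band_altitudes, band_areas):
    """Area-weighted glacier-wide balance from the fitted altitude profile."""
    c0, c1, c2 = coeffs
    total = sum(band_areas)
    return sum(a * (c0 + c1 * z + c2 * z * z)
               for z, a in zip(band_altitudes, band_areas)) / total

# Hypothetical example: five stakes spanning the altitude range (km, m w.e.):
stakes_z = [1.6, 1.75, 1.9, 2.05, 2.2]
stakes_b = [-1.7, -1.1, -0.5, 0.1, 0.7]
coeffs = polyfit2(stakes_z, stakes_b)
print(round(glacier_balance(coeffs, stakes_z, [1, 2, 3, 2, 1]), 2))  # -0.5
```

This also illustrates the paper's point that the fit is driven by the altitude gradient: as long as the five stakes span the glacier's altitude range, their exact horizontal placement does not enter the calculation at all.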
A Database Design and Development Case: Home Theater Video
ERIC Educational Resources Information Center
Ballenger, Robert; Pratt, Renee
2012-01-01
This case consists of a business scenario of a small video rental store, Home Theater Video, which provides background information, a description of the functional business requirements, and sample data. The case provides sufficient information to design and develop a moderately complex database to assist Home Theater Video in solving their…
The Cocoa Shop: A Database Management Case
ERIC Educational Resources Information Center
Pratt, Renée M. E.; Smatt, Cindi T.
2015-01-01
This is an example of a real-world applicable case study, which includes background information on a small local business (i.e., TCS), description of functional business requirements, and sample data. Students are asked to design and develop a database to improve the management of the company's customers, products, and purchases by emphasizing…
Optical fibers for FTTH application
NASA Astrophysics Data System (ADS)
Guzowski, Bartlomiej; Tosik, Grzegorz; Lisik, Zbigniew; Bedyk, Michal; Kubiak, Andrzej
2013-07-01
In this paper the specifics of FTTH (Fiber To The Home) networks in terms of requirements for optical fibers are presented. Optical fiber samples used in FTTH applications, acquired from the worldwide leading manufacturers, were subjected to small-diameter mandrel wrap tests. The detailed procedures of the performed tests and the measurement results are presented.
36 CFR 228.4 - Plan of operations-notice of intent-requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... occasionally removing small mineral samples or specimens, gold panning, metal detecting, non-motorized hand... identify the type of operations proposed and how they would be conducted, the type and standard of existing... § 228.12, the period during which the proposed activity will take place, and measures to be taken to...
Using large volume samplers for the monitoring of particle bound micro pollutants in rivers
NASA Astrophysics Data System (ADS)
Kittlaus, Steffen; Fuchs, Stephan
2015-04-01
The requirements of the WFD, as well as substance emission modelling at the river basin scale, call for stable monitoring data for micro pollutants. The monitoring concepts applied by local authorities, as well as by many scientists, use single-sample techniques. Samples from water bodies are usually taken in volumes of about one litre, either at predetermined time steps or triggered by discharge thresholds. For predominantly particle-bound micro pollutants, the small sample size of about one litre yields a very small amount of suspended particles. Measuring micro pollutant concentrations in these samples is demanding and results in high uncertainty in the measured concentrations, if the concentration is above the detection limit in the first place. In many monitoring programs most of the measured values were below the detection limit, which leads to high uncertainty when river loads are calculated from these data sets. The authors propose a different approach to obtain stable concentration values for particle-bound micro pollutants from river monitoring: a mixed sample of about 1000 L is pumped into a tank with a dirty-water pump. The sampling is usually done discharge-dependently, using a gauge signal as input for the control unit. After the discharge event is over, or the tank is full, the suspended solids are allowed to settle in the tank for 2 days. After this time a clear separation of water and solids can be seen. A sample (1 L) from the water phase and the total mass of the settled solids (about 10 L) are taken to the laboratory for analysis. While the micro pollutants can hardly be detected in the water phase, the signal from the sediment is far above the detection limit, and thus certain and very stable. From the pollutant concentration in the solid phase and the total tank volume, the initial pollutant concentration in the sample can be calculated. If the concentration in the water phase is detectable, it can be used to correct the total load.
This relatively low-cost approach (lower analysis costs because of the small number of samples) makes it possible to quantify the pollutant load, to derive dissolved-solid partition coefficients, and to quantify the pollutant load in different particle size classes.
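The back-calculation described above is a simple mass balance over the tank: the load in the settled solids (plus any detectable water-phase load) divided by the sampled volume. A minimal sketch with hypothetical numbers:

```python
def event_mean_concentration(tank_volume_l, solids_mass_kg, c_solid_ug_per_kg,
                             c_water_ug_per_l=0.0):
    """Event mean pollutant concentration (ug/L) of the sampled water, from the
    settled-solids analysis, optionally corrected by a detectable water-phase
    concentration as described above."""
    load_ug = (solids_mass_kg * c_solid_ug_per_kg
               + tank_volume_l * c_water_ug_per_l)
    return load_ug / tank_volume_l

# Hypothetical event: 1000 L sampled, 0.8 kg settled solids at 250 ug/kg,
# water phase below detection (treated as zero):
print(event_mean_concentration(1000, 0.8, 250))  # 0.2 ug/L
```

Because the solids from the full ~1000 L are concentrated into one analysed mass, the laboratory signal sits well above the detection limit even when the equivalent per-litre concentration, as here, would not.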
Stability and bias of classification rates in biological applications of discriminant analysis
Williams, B.K.; Titus, K.; Hines, J.E.
1990-01-01
We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few of them mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases.
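The optimism bias in apparent (resubstitution) classification rates can be reproduced in a few lines. A minimal sketch, using a nearest-centroid rule on simulated two-group normal data as a stand-in for the full discriminant analysis in the paper; group separation and dimensionality are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def resub_rate(n_per_group, p, sep=1.0, reps=200):
    """Mean resubstitution (apparent) correct-classification rate for a
    nearest-centroid rule on two p-variate normal groups whose means
    differ by `sep` in the first coordinate."""
    rates = []
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, size=(n_per_group, p))
        b = rng.normal(0.0, 1.0, size=(n_per_group, p))
        b[:, 0] += sep
        ma, mb = a.mean(axis=0), b.mean(axis=0)
        correct = 0
        for x, lab in [(a, 0), (b, 1)]:
            da = ((x - ma) ** 2).sum(axis=1)   # distance to centroid a
            db = ((x - mb) ** 2).sum(axis=1)   # distance to centroid b
            pred = (db < da).astype(int)
            correct += (pred == lab).sum()
        rates.append(correct / (2 * n_per_group))
    return float(np.mean(rates))

# Apparent rates are optimistically biased when groups are small:
small = resub_rate(n_per_group=5, p=4)
large = resub_rate(n_per_group=200, p=4)
print(small, large)  # the small-sample apparent rate is inflated
```

With heavy group overlap (small `sep`) the inflation grows, matching the paper's finding that bias is worst for small groups with high overlap.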
Neural networks for the generation of sea bed models using airborne lidar bathymetry data
NASA Astrophysics Data System (ADS)
Kogut, Tomasz; Niemeyer, Joachim; Bujakiewicz, Aleksandra
2016-06-01
Various sectors of the economy such as transport and renewable energy have shown great interest in sea bed models. The required measurements are usually carried out by ship-based echo sounding, but this method is quite expensive. A relatively new alternative is data obtained by airborne lidar bathymetry. This study investigates the accuracy of these data, which were obtained in the context of the project `Investigation on the use of airborne laser bathymetry in hydrographic surveying'. A comparison to multi-beam echo sounding data shows only small differences in the depth values of the data sets. The IHO requirements of the total horizontal and vertical uncertainty for laser data are met. The second goal of this paper is to compare three spatial interpolation methods, namely Inverse Distance Weighting (IDW), Delaunay Triangulation (TIN), and supervised Artificial Neural Networks (ANN), for the generation of sea bed models. The focus of our investigation is on the amount of required sampling points. This is analyzed by manually reducing the data sets. We found that the three techniques have a similar performance almost independently of the amount of sampling data in our test area. However, ANNs are more stable when using a very small subset of points.
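Of the three interpolators compared, IDW is the simplest to state. A minimal sketch, with hypothetical soundings; the weight exponent of 2 is a common default, not a value from the study:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: the depth at each query point is a
    weighted mean of the known soundings, weights = 1 / distance**power."""
    xy_known = np.asarray(xy_known, float)
    z_known = np.asarray(z_known, float)
    out = []
    for q in np.atleast_2d(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                 # query coincides with a sounding
            out.append(z_known[d.argmin()])
            continue
        w = 1.0 / d ** power
        out.append(float(np.sum(w * z_known) / np.sum(w)))
    return np.array(out)

# Hypothetical soundings (x, y) with depths in metres:
pts = [(0, 0), (10, 0), (0, 10), (10, 10)]
depths = [5.0, 7.0, 5.0, 7.0]
print(idw(pts, depths, [(5, 5)]))  # symmetric point -> mean depth [6.]
```

TIN and ANN interpolation replace the weighting rule with barycentric interpolation on a triangulation and a trained regression network, respectively; the point-reduction experiment in the paper amounts to thinning `pts` and re-interpolating.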
Validating a biometric authentication system: sample size requirements.
Dass, Sarat C; Zhu, Yongfang; Jain, Anil K
2006-12-01
Authentication systems based on biometric features (e.g., fingerprint impressions, iris scans, human face images, etc.) are increasingly gaining widespread use and popularity. Often, vendors and owners of these commercial biometric systems claim impressive performance that is estimated based on some proprietary data. In such situations, there is a need to independently validate the claimed performance levels. System performance is typically evaluated by collecting biometric templates from n different subjects, and for convenience, acquiring multiple instances of the biometric for each of the n subjects. Very little work has been done in 1) constructing confidence regions based on the ROC curve for validating the claimed performance levels and 2) determining the required number of biometric samples needed to establish confidence regions of prespecified width for the ROC curve. To simplify the analyses that address these two problems, several previous studies have assumed that multiple acquisitions of the biometric entity are statistically independent. This assumption is too restrictive and is generally not valid. We have developed a validation technique based on multivariate copula models for correlated biometric acquisitions. Based on the same model, we also determine the minimum number of samples required to achieve confidence bands of desired width for the ROC curve. We illustrate the estimation of the confidence bands as well as the required number of biometric samples using a fingerprint matching system that is applied on samples collected from a small population.
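A rough sketch of the confidence-band idea, using a naive bootstrap over match scores rather than the paper's copula model (which the authors introduce precisely because this independence assumption is too restrictive); the score distributions are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def tpr_at_fpr(genuine, impostor, fpr_grid):
    """TPR of score thresholds chosen to hit each target FPR."""
    thresholds = np.quantile(impostor, 1.0 - fpr_grid)
    return np.array([(genuine >= t).mean() for t in thresholds])

def bootstrap_roc_band(genuine, impostor, fpr_grid, n_boot=500, alpha=0.05):
    """Percentile confidence band for the ROC curve. Resampling scores
    independently ignores within-subject correlation -- this is the
    naive baseline the paper improves on."""
    curves = np.empty((n_boot, len(fpr_grid)))
    for b in range(n_boot):
        g = rng.choice(genuine, size=len(genuine), replace=True)
        i = rng.choice(impostor, size=len(impostor), replace=True)
        curves[b] = tpr_at_fpr(g, i, fpr_grid)
    lo = np.quantile(curves, alpha / 2, axis=0)
    hi = np.quantile(curves, 1 - alpha / 2, axis=0)
    return lo, hi

# Hypothetical matcher scores: genuine pairs score higher on average.
genuine = rng.normal(2.0, 1.0, 300)
impostor = rng.normal(0.0, 1.0, 3000)
grid = np.array([0.001, 0.01, 0.1])
lo, hi = bootstrap_roc_band(genuine, impostor, grid)
print(lo, hi)
```

The sample-size question in the abstract then becomes: how large must `genuine`/`impostor` be before `hi - lo` shrinks below a prespecified width at the operating FPRs of interest.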
NASA Technical Reports Server (NTRS)
Snead, C. J.; McCubbin, F. M.; Nakamura-Messenger, K.; Righter, K.
2018-01-01
The Astromaterials Acquisition and Curation office at NASA Johnson Space Center has established an Advanced Curation program that is tasked with developing procedures, technologies, and data sets necessary for the curation of future astromaterials collections as envisioned by NASA exploration goals. One particular objective of the Advanced Curation program is the development of new methods for the collection, storage, handling and characterization of small (less than 100 micrometer) particles. Astromaterials Curation currently maintains four small particle collections: Cosmic Dust that has been collected in Earth's stratosphere by ER2 and WB-57 aircraft, Comet 81P/Wild 2 dust returned by NASA's Stardust spacecraft, interstellar dust that was returned by Stardust, and asteroid Itokawa particles that were returned by the JAXA's Hayabusa spacecraft. NASA Curation is currently preparing for the anticipated return of two new astromaterials collections - asteroid Ryugu regolith to be collected by Hayabusa2 spacecraft in 2021 (samples will be provided by JAXA as part of an international agreement), and asteroid Bennu regolith to be collected by the OSIRIS-REx spacecraft and returned in 2023. A substantial portion of these returned samples are expected to consist of small particle components, and mission requirements necessitate the development of new processing tools and methods in order to maximize the scientific yield from these valuable acquisitions. Here we describe initial progress towards the development of applicable sample handling methods for the successful curation of future small particle collections.
Yamashita, Hitoyoshi; Morita, Masamune; Sugiura, Haruka; Fujiwara, Kei; Onoe, Hiroaki; Takinoue, Masahiro
2015-04-01
We report an easy-to-use generation method of biologically compatible monodisperse water-in-oil microdroplets using a glass-capillary-based microfluidic device in a tabletop mini-centrifuge. This device does not require complicated microfabrication; furthermore, only a small sample volume is required in experiments. Therefore, we believe that this method will assist biochemical and cell-biological experiments. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Air sampling with solid phase microextraction
NASA Astrophysics Data System (ADS)
Martos, Perry Anthony
There is an increasing need for simple yet accurate air sampling methods. The acceptance of new air sampling methods requires compatibility with conventional chromatographic equipment, and the new methods have to be environmentally friendly and simple to use, yet with equal, or better, detection limits, accuracy and precision than standard methods. Solid phase microextraction (SPME) satisfies the conditions for new air sampling methods. Analyte detection limits, accuracy and precision of analysis with SPME are typically better than with any conventional air sampling methods. Yet, air sampling with SPME requires no pumps or solvents, is re-usable and extremely simple to use, is completely compatible with current chromatographic equipment, and requires a small capital investment. The first SPME fiber coating used in this study was poly(dimethylsiloxane) (PDMS), a hydrophobic liquid film, used to sample a large range of airborne hydrocarbons such as benzene and octane. Quantification without an external calibration procedure is possible with this coating, because its physical and chemical properties are well understood and quite similar to those of the siloxane stationary phase used in capillary columns. The log of the analyte distribution coefficient for PDMS is linearly related to chromatographic retention indices and to the inverse of temperature. Therefore, the actual chromatogram from the analysis of the PDMS air sampler will yield the calibration parameters which are used to quantify unknown airborne analyte concentrations (ppbv to ppmv range). The second fiber coating used in this study was PDMS/divinylbenzene (PDMS/DVB), onto which O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine (PFBHA) was adsorbed for the on-fiber derivatization of gaseous formaldehyde (ppbv range), with and without external calibration. The oxime formed from the reaction can be detected with conventional gas chromatographic detectors.
Typical grab sampling times were as small as 5 seconds. With 300 seconds of sampling, the formaldehyde detection limit was 2.1 ppbv, better than that of any other 5-minute sampling device for formaldehyde. The first-order rate constant for product formation was used to quantify formaldehyde concentrations without a calibration curve. This spot sampler was used to sample the headspace of hair gel, particle board, plant material and coffee grounds for formaldehyde, and other carbonyl compounds, with extremely promising results. The SPME sampling devices were also used for time-weighted average sampling (30 minutes to 16 hours). Finally, the four new SPME air sampling methods were field tested with side-by-side comparisons to standard air sampling methods, demonstrating the considerable utility of SPME as an air sampler.
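The calibration-free quantification rests on two relations stated in the abstract: log K is linear in the retention index, and the airborne concentration follows from the mass extracted at equilibrium, C_air = n / (K * V_fiber). A minimal sketch; the slope, intercept, and fiber coating volume below are hypothetical illustration values, not the study's calibration:

```python
def distribution_coefficient(retention_index, a, b):
    """log10(K) for a PDMS coating is approximately linear in the
    analyte's retention index: log10 K = a*RI + b.  The parameters a
    and b here are hypothetical calibration values."""
    return 10 ** (a * retention_index + b)

def air_concentration_ng_per_l(mass_on_fiber_ng, retention_index,
                               fiber_volume_l, a=0.0042, b=-0.88):
    """C_air = n / (K * V_fiber): mass extracted at equilibrium divided
    by the fiber's capacity for the analyte."""
    k = distribution_coefficient(retention_index, a, b)
    return mass_on_fiber_ng / (k * fiber_volume_l)

# Hypothetical measurement: 12 ng extracted, retention index 800
# (octane), and an assumed PDMS coating volume of 6.6e-7 L.
print(air_concentration_ng_per_l(12.0, 800, 6.6e-7))
```

The temperature dependence mentioned in the abstract (log K linear in 1/T) would enter as a correction to `a` and `b` when sampling away from the calibration temperature.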
Automated blood-sample handling in the clinical laboratory.
Godolphin, W; Bodtker, K; Uyeno, D; Goh, L O
1990-09-01
The only significant advances in blood-taking in 25 years have been the disposable needle and evacuated blood-drawing tube. With the exception of a few isolated barcode experiments, most sample-tracking is performed through handwritten or computer-printed labels. Attempts to reduce the hazards of centrifugation have resulted in air-tight lids or chambers, the use of which is time-consuming and cumbersome. Most commonly used clinical analyzers require serum or plasma, distributed into specialized containers, unique to that analyzer. Aliquots for different tests are prepared by handpouring or pipetting. Moderate to large clinical laboratories perform so many different tests that even multi-analyzers performing multiple analyses on a single sample may account for only a portion of all tests ordered for a patient. Thus several aliquots of each specimen are usually required. We have developed a proprietary serial centrifuge and blood-collection tube suitable for incorporation into an automated or robotic sample-handling system. The system we propose is (a) safe--avoids or prevents biological danger to the many "handlers" of blood; (b) small--minimizes the amount of sample taken and space required to adapt to the needs of satellite and mobile testing, and direct interfacing with analyzers; (c) serial--permits each sample to be treated according to its own "merits," optimizes throughput, and facilitates flexible automation; and (d) smart--ensures quality results through monitoring and intelligent control of patient identification, sample characteristics, and separation process.
O'Malley, Keelia; Luckett, Brian G; Dunaway, Lauren Futrell; Bodor, J Nicholas; Rose, Donald
2015-01-01
Changes to the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) occurred in 2009 when supplemental foods offered through the programme were updated to align with current dietary recommendations. The present study reports on a new index developed to monitor the retail environment's adoption of these new food supply requirements in New Orleans. A 100-point WIC Availability Index (WIC-AI) was derived from new minimum state stocking requirements for WIC vendors. A sample of supermarkets, medium and small food stores was assessed in 2009 before changes were implemented and in 2010 after revisions had gone into effect. WIC-AI scores were utilized to compare differences in meeting requirements by store type, WIC vendor status and year of measurement. Supermarkets, medium and small WIC and non-WIC food stores in New Orleans, Louisiana, USA. At baseline supermarkets had the highest median WIC-AI score (93·3) followed by medium (69·8) and small food stores (48·0). Small WIC stores had a higher median WIC-AI score at baseline than small non-WIC stores (66·9 v. 38·0). Both medium and small WIC stores significantly increased their median WIC-AI scores between 2009 and 2010 (P<0·01). The increased median WIC-AI score in small food stores was largely attributed to increased availability of cereals and grains, juices and fruit, and infant fruit and vegetables. The WIC-AI is a simple tool useful in summarizing complex food store environment data and may be adapted for use in other states or a national level to inform food policy decisions and direction.
Good performance of an immunoassay based method for nevirapine measurements in human breast milk.
Salado-Rasmussen, Kirsten; Theilgaard, Zahra Persson; Chiduo, Mercy; Pedersen, Court; Gerstoft, Jan; Katzenstein, Terese Lea
2011-07-01
Understanding the distribution of antiretrovirals in breastfeeding HIV-positive mothers is essential, both for prevention of mother-to-child HIV transmission and for research on the development of drug resistance. The ARK nevirapine (NVP)-test is an immunoassay method for nevirapine measurements, developed and validated for plasma use. In this study, the ARK NVP-test was evaluated for measurement of nevirapine concentrations in breast milk. High performance liquid chromatography (HPLC) is the method currently used to determine nevirapine in breast milk. This method, however, requires complicated extraction techniques. The ARK method employs an immunoassay technology and requires a small sample volume (40 μL) and no pre-treatment of the samples. Commercial enzyme and antibody were used, and calibration standards and quality controls were prepared from pooled breast milk from HIV-uninfected women. Clinical samples from HIV-infected women receiving a single dose of nevirapine were analyzed. Precision and accuracy were evaluated with two concentrations of quality control materials analyzed in three replicates on four different days, and were <4% and between 96.5% and 104.6%, respectively. Clinical samples were analyzed and CVs ranged from 0.0% to 11.1%. The median nevirapine concentration in breast milk 1 week post-partum was 0.29 μg/mL (range 0.11-0.90 μg/mL) in women treated with a single dose of nevirapine. The ease of use and small sample volume make the ARK assay an attractive alternative to HPLC analyses for determinations of nevirapine concentrations in breast milk.
Wright, John J; Salvadori, Enrico; Bridges, Hannah R; Hirst, Judy; Roessler, Maxie M
2016-09-01
EPR-based potentiometric titrations are a well-established method for determining the reduction potentials of cofactors in large and complex proteins with at least one EPR-active state. However, such titrations require large amounts of protein. Here, we report a new method that requires an order of magnitude less protein than previously described methods, and that provides EPR samples suitable for measurements at both X- and Q-band microwave frequencies. We demonstrate our method by determining the reduction potential of the terminal [4Fe-4S] cluster (N2) in the intramolecular electron-transfer relay in mammalian respiratory complex I. The value determined by our method, Em7 = -158 mV, is precise, reproducible, and consistent with previously reported values. Our small-volume potentiometric titration method will facilitate detailed investigations of EPR-active centres in non-abundant and refractory proteins that can only be prepared in small quantities. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Effect of sampling rate and record length on the determination of stability and control derivatives
NASA Technical Reports Server (NTRS)
Brenner, M. J.; Iliff, K. W.; Whitman, R. K.
1978-01-01
Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there were considerable reductions in sampling rate and/or record length. Small amplitude pulse maneuvers showed greater degradation of the derivative estimates than large amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a method of lessening the total computation time required without greatly degrading the quality of the estimates.
A simple microviscometric approach based on Brownian motion tracking.
Hnyluchová, Zuzana; Bjalončíková, Petra; Karas, Pavel; Mravec, Filip; Halasová, Tereza; Pekař, Miloslav; Kubala, Lukáš; Víteček, Jan
2015-02-01
Viscosity-an integral property of a liquid-is traditionally determined by mechanical instruments. The most pronounced disadvantage of such an approach is the requirement of a large sample volume, which poses a serious obstacle, particularly in biology and biophysics when working with limited samples. Scaling down the required volume by means of microviscometry based on tracking the Brownian motion of particles can provide a reasonable alternative. In this paper, we report a simple microviscometric approach which can be conducted with common laboratory equipment. The core of this approach consists in a freely available standalone script to process particle trajectory data based on a Newtonian model. In our study, this setup allowed the sample to be scaled down to 10 μl. The utility of the approach was demonstrated using model solutions of glycerine, hyaluronate, and mouse blood plasma. Therefore, this microviscometric approach based on a newly developed freely available script can be suggested for determination of the viscosity of small biological samples (e.g., body fluids).
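The physics behind such particle-tracking microviscometry is the Stokes-Einstein relation: for Brownian tracers, the mean squared displacement gives the diffusion coefficient, which in turn gives the viscosity. A minimal sketch under a Newtonian-fluid assumption, with hypothetical tracking numbers (the referenced script's actual interface is not reproduced here):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def viscosity_from_msd(msd_m2, lag_s, radius_m, temp_k=298.15, dims=2):
    """Estimate viscosity from the mean squared displacement of tracked
    tracer particles.  For Brownian motion MSD = 2*dims*D*t, and the
    Stokes-Einstein relation gives D = k_B*T / (6*pi*eta*r)."""
    diffusion = msd_m2 / (2 * dims * lag_s)          # m^2/s
    return K_B * temp_k / (6 * math.pi * diffusion * radius_m)

# Hypothetical: 0.5 um radius beads tracked in 2D video microscopy at
# 25 C; an MSD of ~1.75e-12 m^2 at a 1 s lag corresponds to a
# water-like diffusion coefficient.
eta = viscosity_from_msd(1.75e-12, 1.0, 0.5e-6)
print(eta)  # ~1e-3 Pa*s, close to the viscosity of water
```

In practice many trajectories are averaged into the MSD, and the linearity of MSD versus lag time is what justifies the Newtonian model the authors' script assumes.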
NASA Technical Reports Server (NTRS)
Sand, F.; Christie, R.
1975-01-01
Extending the crop survey application of remote sensing from small experimental regions to state and national levels requires that a sample of agricultural fields be chosen for remote sensing of crop acreage, and that a statistical estimate be formulated with measurable characteristics. The critical requirements for the success of the application are reviewed in this report. The problem of sampling in the presence of cloud cover is discussed. Integration of remotely sensed information about crops into current agricultural crop forecasting systems is treated on the basis of the USDA multiple frame survey concepts, with an assumed addition of a new frame derived from remote sensing. Evolution of a crop forecasting system which utilizes LANDSAT and future remote sensing systems is projected for the 1975-1990 time frame.
Uchida, Hiroshi; Yoshinaga, Tokuji; Mori, Hirotoshi; Otsuka, Makoto
2010-11-01
This study aimed to apply a currently available chemometric near-infrared spectroscopy technique to the characterization of the polymorphic properties of drug candidates. The technique requires only small quantities of samples and is therefore applicable to drugs in the early stages of development. The combination of near-infrared spectroscopy and a patented 96-well plate divided into 32 individual, humidity-controlled, three-well compartments was used in the characterization of a hygroscopic drug, imidafenacin, which has two polymorphs and one pseudo-polymorph. Characterization was also conducted with powder X-ray diffraction and thermal analysis. The results were compared with those from routinely used conventional analyses. Both the microanalysis and conventional analysis successfully characterized the substance (transformation and relative stability among the two polymorphs and a pseudo-polymorph) depending on the storage conditions. Near-infrared spectroscopic analyses utilizing a humidity-controlled 96-well plate required only small amounts of the sample for characterization under the various conditions of relative humidity. Near-infrared microanalysis can be applied to polymorphic studies of small quantities of a drug candidate. The results also suggest that the method will predict the behaviors of a hygroscopic candidate in solid pharmaceutical preparations at the early stages of drug development. © 2010 The Authors. JPP © 2010 Royal Pharmaceutical Society of Great Britain.
Ultra-small-angle neutron scattering with azimuthal asymmetry
Gu, X.; Mildner, D. F. R.
2016-05-16
Small-angle neutron scattering (SANS) measurements from thin sections of rock samples such as shales demand as great a scattering vector range as possible because the pores cover a wide range of sizes. The limitation of the scattering vector range for pinhole SANS requires slit-smeared ultra-SANS (USANS) measurements that need to be converted to pinhole geometry. The desmearing algorithm is only successful for azimuthally symmetric data. Scattering from samples cut parallel to the plane of bedding is symmetric, exhibiting circular contours on a two-dimensional detector. Samples cut perpendicular to the bedding show elliptically dependent contours with the long axis corresponding to the normal to the bedding plane. A method is given for converting such asymmetric data collected on a double-crystal diffractometer for concatenation with the usual pinhole-geometry SANS data. Furthermore, the aspect ratio from the SANS data is used to modify the slit-smeared USANS data to produce quasi-symmetric contours. Rotation of the sample about the incident beam may result in symmetric data but cannot extract the same information as obtained from pinhole geometry.
Krleza, Jasna Lenicek; Dorotic, Adrijana; Grzunov, Ana; Maradin, Miljenka
2015-01-01
Capillary blood sampling is a medical procedure aimed at assisting in patient diagnosis, management and treatment, and is increasingly used worldwide, in part because of the increasing availability of point-of-care testing. It is also frequently used to obtain small blood volumes for laboratory testing because it minimizes pain. The capillary blood sampling procedure can influence the quality of the sample as well as the accuracy of test results, highlighting the need for immediate, widespread standardization. A recent nationwide survey of policies and practices related to capillary blood sampling in medical laboratories in Croatia has shown that capillary sampling procedures are not standardized and that only a small proportion of Croatian laboratories comply with guidelines from the Clinical Laboratory Standards Institute (CLSI) or the World Health Organization (WHO). The aim of this document is to provide recommendations for capillary blood sampling. This document has been produced by the Working Group for Capillary Blood Sampling within the Croatian Society of Medical Biochemistry and Laboratory Medicine. Our recommendations are based on existing available standards and recommendations (WHO Best Practices in Phlebotomy, CLSI GP42-A6 and CLSI C46-A2), which have been modified based on local logistical, cultural, legal and regulatory requirements. We hope that these recommendations will be a useful contribution to the standardization of capillary blood sampling in Croatia. PMID:26524965
NASA Astrophysics Data System (ADS)
Roether, Wolfgang; Vogt, Martin; Vogel, Sandra; Sültenfuß, Jürgen
2013-06-01
We present a new method to obtain samples for the measurement of helium isotopes and neon in water, to replace the classical sampling procedure using clamped-off Cu tubing containers that we have been using so far. The new method eliminates the gas extraction step prior to admission to the mass spectrometer that the classical method requires. Water is drawn into evacuated glass ampoules with subsequent flame sealing. Approximately 50% headspace is left, from which admission into the mass spectrometer occurs without further treatment. Extensive testing has shown that, with due care and with small corrections applied, the samples represent the gas concentrations in the water within ±0.07% (95% confidence level; ±0.05% with special handling). Fast evacuation is achieved by pumping on a small charge of water placed in the ampoule. The new method was successfully tested at sea in comparison with Cu-tubing sampling. We found that the ampoule samples were superior in data precision and that a lower percentage of samples were lost prior to measurement. Further measurements revealed agreement between the two methods in helium, 3He and neon within ±0.1%. The new method facilitates dealing with large sample sets and minimizes the delay between sampling and measurement. The method is also applicable to gases other than helium and neon.
Liu, Chunbo; Pan, Feng; Li, Yun
2016-07-29
Glutamate is of great importance in the food and pharmaceutical industries. There is still a lack of effective statistical approaches for fault diagnosis in the fermentation process of glutamate. To date, the statistical approach based on a generalized additive model (GAM) and bootstrap has not been used for fault diagnosis in fermentation processes, much less the fermentation process of glutamate with small sample sets. A combined approach of GAM and bootstrap was developed for online fault diagnosis in the fermentation process of glutamate with small sample sets. GAM was first used to model the relationship between glutamate production and different fermentation parameters using online data from four normal fermentation experiments of glutamate. The fitted GAM with fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate captured 99.6% of the variance of glutamate production during the fermentation process. Bootstrap was then used to quantify the uncertainty of the estimated production of glutamate from the fitted GAM using a 95% confidence interval. The proposed approach was then used for online fault diagnosis in abnormal fermentation processes of glutamate, and a fault was defined as the estimated production of glutamate falling outside the 95% confidence interval. The online fault diagnosis based on the proposed approach identified not only the start of the fault in the fermentation process, but also the end of the fault when the fermentation conditions were back to normal. The proposed approach used only a small sample set from normal fermentation experiments to establish the model, and then required only online recorded data on fermentation parameters for fault diagnosis in the fermentation process of glutamate. The proposed approach based on GAM and bootstrap provides a new and effective way for fault diagnosis in the fermentation process of glutamate with small sample sets.
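The diagnosis logic above (fit a model on normal runs, bootstrap a 95% band, flag points outside it) can be sketched compactly. A simple polynomial stands in here for the paper's GAM, and the fermentation curve and fault episode are entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_poly(t, y, deg=3):
    return np.polynomial.Polynomial.fit(t, y, deg)

def bootstrap_band(t, y, t_new, deg=3, n_boot=500, alpha=0.05):
    """Percentile band for the fitted curve, by resampling residuals and
    refitting.  A polynomial replaces the paper's GAM in this sketch."""
    model = fit_poly(t, y, deg)
    resid = y - model(t)
    preds = np.empty((n_boot, len(t_new)))
    for b in range(n_boot):
        y_b = model(t) + rng.choice(resid, size=len(y), replace=True)
        preds[b] = fit_poly(t, y_b, deg)(t_new)
    return (np.quantile(preds, alpha / 2, axis=0),
            np.quantile(preds, 1 - alpha / 2, axis=0))

# Normal runs: production rises smoothly with time (hypothetical curve).
t = np.linspace(0, 40, 80)
y = 50 / (1 + np.exp(-(t - 20) / 4)) + rng.normal(0, 1.0, t.size)
lo, hi = bootstrap_band(t, y, t)

# Online diagnosis: flag time points where observed production leaves
# the band; here a fault episode is injected between indices 50 and 60.
observed = y.copy()
observed[50:60] -= 15
fault = (observed < lo) | (observed > hi)
print(fault[50:60].any())  # True: the fault episode is detected
```

As in the paper, the flags turn off again once the observations re-enter the band, so both the start and the end of a fault episode are identified.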
Mixing problems in using indicators for measuring regional blood flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushioda, E.; Nuwayhid, B.; Tabsh, K.
A basic requirement for using indicators for measuring blood flow is adequate mixing of the indicator with blood prior to sampling the site. This requirement has been met by depositing the indicator in the heart and sampling from an artery. Recently, authors have injected microspheres into veins and sampled from venous sites. The present studies were designed to investigate the mixing problems in sheep and rabbits by means of Cardio-Green and labeled microspheres. The indicators were injected at different points in the circulatory system, and blood was sampled at different levels of the venous and arterial systems. Results show the following: (a) When an indicator of small molecular size (Cardio-Green) is allowed to pass through the heart chambers, adequate mixing is achieved, yielding accurate and reproducible results. (b) When any indicator (Cardio-Green or microspheres) is injected into veins, and sampling is done at any point in the venous system, mixing is inadequate, yielding flow results which are inconsistent and erratic. (c) For an indicator of large molecular size (microspheres), injecting into the left side of the heart and sampling from arterial sites yield accurate and reproducible results regardless of whether blood is sampled continuously or intermittently.
Evaluating methods for monitoring populations of Mexican spotted owls: A case study
Joseph L. Ganey; Gary C. White; David C. Bowden; Alan B. Franklin
2004-01-01
Monitoring population status of rare or elusive species presents special challenges. Understanding population trends requires separating signal (true and important changes in abundance) from noise (normal temporal and sampling variation; e.g., Block et al. 2001). This is particularly difficult when small numbers or elusive habits make it difficult to obtain precise...
Long-Term Soil Chemistry Changes in Aggrading Forest Ecosystems
Jennifer D. Knoepp; Wayne T. Swank
1994-01-01
Assessing potential long-term forest productivity requires identification of the processes regulating chemical changes in forest soils. We resampled the litter layer and upper two mineral soil horizons, A and AB/BA, in two aggrading southern Appalachian watersheds 20 yr after an earlier sampling. Soils from a mixed-hardwood watershed exhibited a small but significant...
77 FR 33742 - Announcement of Requirements and Registration for “My Air, My Health Challenge”
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-07
... health-pollution link. Solvers must be able to justify their chosen combination with research citations... piloting stage, but must show that the sensor will be ready to use in functional tests--at least at a small... locations between sample start and end 5. Raw measurement data (quantitative or semi-quantitative) as well...
Utilization and cost of log production from animal logging operations
Suraj P. Shrestha; Bobby L. Lanford; Robert B. Rummer; Mark Dubois
2006-01-01
Forest harvesting with animals is a labor-intensive operation. Machines are expensive to use on smaller woodlots, which require frequent moves if mechanically logged, so small logging systems using animals may be more cost-effective. In this study, work sampling was used for five animal logging operations in Alabama to measure productive and non-productive time...
Estimating forest attribute parameters for small areas using nearest neighbors techniques
Ronald E. McRoberts
2012-01-01
Nearest neighbors techniques have become extremely popular, particularly for use with forest inventory data. With these techniques, a population unit prediction is calculated as a linear combination of observations for a selected number of population units in a sample that are most similar, or nearest, in a space of ancillary variables to the population unit requiring...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-21
... samples that are identical in all material respects to the product. The Commission also emphasizes that... Small Carpets and Rugs (FF 2-70). A true copy, in English, of the accreditation and scope documents... status must submit to the Commission copies, in English, of their training documents showing how...
de Jonge, Martin D.; Ryan, Christopher G.; Jacobsen, Chris J.
2014-01-01
X-ray nanoprobes require coherent illumination to achieve optic-limited resolution, and so will benefit directly from diffraction-limited storage rings. Here, the example of high-resolution X-ray fluorescence tomography is focused on as one of the most voracious demanders of coherent photons, since the detected signal is only a small fraction of the incident flux. Alternative schemes are considered for beam delivery, sample scanning and detectors. One must consider as well the steps before and after the X-ray experiment: sample preparation and examination conditions, and analysis complexity due to minimum dose requirements and self-absorption. By understanding the requirements and opportunities for nanoscale fluorescence tomography, one gains insight into the R&D challenges in optics and instrumentation needed to fully exploit the source advances that diffraction-limited storage rings offer. PMID:25177992
Dry particle generation with a 3-D printed fluidized bed generator
Roesch, Michael; Roesch, Carolin; Cziczo, Daniel J.
2017-06-02
We describe the design and testing of PRIZE (PRinted fluidIZed bed gEnerator), a compact fluidized bed aerosol generator manufactured using stereolithography (SLA) printing. Dispersing small quantities of powdered materials – due to either rarity or expense – is challenging due to a lack of small, low-cost dry aerosol generators. With this as motivation, we designed and built a generator that uses a mineral dust or other dry powder sample mixed with bronze beads that sit atop a porous screen. A particle-free airflow is introduced, dispersing the sample as airborne particles. The total particle number concentrations and size distributions were measured during different stages of the assembly process to show that the SLA 3-D printed generator did not generate particles until the mineral dust sample was introduced. Furthermore, time-series measurements with Arizona Test Dust (ATD) showed stable total particle number concentrations of 10–150 cm-3, depending on the sample mass, from the sub- to super-micrometer size range. Additional tests with collected soil dust samples are also presented. PRIZE is simple to assemble, easy to clean, inexpensive and deployable for laboratory and field studies that require dry particle generation.
Detecting insect pollinator declines on regional and global scales
LeBuhn, Gretchen; Droege, Sam; Connor, Edward F.; Gemmill-Herren, Barbara; Potts, Simon G.; Minckley, Robert L.; Griswold, Terry; Jean, Robert; Kula, Emanuel; Roubik, David W.; Cane, Jim; Wright, Karen W.; Frankie, Gordon; Parker, Frank
2013-01-01
Recently there has been considerable concern about declines in bee communities in agricultural and natural habitats. The value of pollination to agriculture, provided primarily by bees, is >$200 billion/year worldwide, and in natural ecosystems it is thought to be even greater. However, no monitoring program exists to accurately detect declines in abundance of insect pollinators; thus, it is difficult to quantify the status of bee communities or estimate the extent of declines. We used data from 11 multiyear studies of bee communities to devise a program to monitor pollinators at regional, national, or international scales. In these studies, 7 different methods for sampling bees were used and bees were sampled on 3 different continents. We estimated that a monitoring program with 200-250 sampling locations each sampled twice over 5 years would provide sufficient power to detect small (2-5%) annual declines in the number of species and in total abundance and would cost U.S.$2,000,000. To detect declines as small as 1% annually over the same period would require >300 sampling locations. Given the role of pollinators in food security and ecosystem function, we recommend establishment of integrated regional and international monitoring programs to detect changes in pollinator communities.
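The kind of power analysis behind these estimates can be sketched with a small Monte Carlo simulation. The noise level (`cv`), the lognormal error model, and the one-sided z-test below are illustrative assumptions, not parameters taken from the study:

```python
import numpy as np

def detect_power(n_sites=200, years=5, annual_decline=0.03,
                 cv=0.4, n_sims=2000, seed=1):
    """Monte Carlo power to detect an exponential decline in abundance.

    Each site is surveyed at year 0 and again at year `years`; abundance
    carries lognormal sampling noise with coefficient of variation ~cv.
    Detection uses a one-sided z-test (alpha = 0.05) on the mean change
    in log-abundance across sites.
    """
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1 + cv**2))        # lognormal sigma from the CV
    true_drop = years * np.log(1 - annual_decline)
    detected = 0
    for _ in range(n_sims):
        # per-site change in log-abundance: true decline + noise from 2 visits
        d = true_drop + rng.normal(0, sigma * np.sqrt(2), n_sites)
        z = d.mean() / (d.std(ddof=1) / np.sqrt(n_sites))
        if z < -1.645:                        # one-sided critical value
            detected += 1
    return detected / n_sims

power_200 = detect_power()                    # 200 sites, 3% annual decline
```

Under these toy assumptions, power rises steeply with the number of sites and with the steepness of the decline, which is consistent with the abstract's finding that detecting a 1% annual decline demands many more locations than a 2-5% one.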
Determination of 99Tc in fresh water using TRU resin by ICP-MS.
Guérin, Nicolas; Riopel, Remi; Kramer-Tremblay, Sheila; de Silva, Nimal; Cornett, Jack; Dai, Xiongxin
2017-10-02
Technetium-99 (99Tc) determination at trace level by inductively coupled plasma mass spectrometry (ICP-MS) is challenging because there is no readily available appropriate Tc isotopic tracer. A new method using Re as a recovery tracer to determine 99Tc in fresh water samples, which does not require any evaporation step, was developed. Tc(VII) and Re(VII) were pre-concentrated on a small anion exchange resin (AER) cartridge from one litre of water sample. They were then efficiently eluted from the AER using a potassium permanganate (KMnO4) solution. After the reduction of KMnO4 in 2 M sulfuric acid solution, the sample was passed through a small TRU resin cartridge. Tc(VII) and Re(VII) retained on the TRU resin were eluted using near-boiling water, which can be directly used for the ICP-MS measurement. The results for method optimisation, validation and application were reported. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Adaptive classifier for steel strip surface defects
NASA Astrophysics Data System (ADS)
Jiang, Mingming; Li, Guangyao; Xie, Li; Xiao, Mang; Yi, Li
2017-01-01
Surface defect detection systems have been receiving increased attention for their precision, speed and low cost. One of the biggest challenges is reacting to accuracy deterioration over time caused by aging equipment and changed processes. These variables make only a tiny change to the real-world model but have a big impact on the classification result. In this paper, we propose a new adaptive classifier with a Bayes kernel (BYEC) which updates the model with small samples to make it adaptive to accuracy deterioration. Firstly, abundant features were introduced to cover extensive information about the defects. Secondly, we constructed a series of SVMs on random subspaces of the features. Then, a Bayes classifier was trained as an evolutionary kernel to fuse the results from the base SVMs. Finally, we proposed a method to update the Bayes evolutionary kernel. The proposed algorithm is experimentally compared with different algorithms; the results demonstrate that the proposed method can be updated with small samples and fits the changed model well. Robustness, a low requirement for samples, and adaptivity are demonstrated in the experiments.
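The ensemble-plus-fusion structure can be sketched in a few lines. For self-containment this sketch substitutes least-squares base learners for the paper's SVMs and uses synthetic data; only the architecture (random feature subspaces, a Gaussian naive-Bayes fusion layer that can be refit on a small fresh sample) follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_base(X, y, idx, lam=1e-2):
    """Ridge least-squares base learner on a random feature subspace
    (a stand-in for the SVMs of the paper, kept numpy-only)."""
    Xs = X[:, idx]
    w = np.linalg.solve(Xs.T @ Xs + lam * np.eye(len(idx)), Xs.T @ (2*y - 1))
    return idx, w

def scores(bases, X):
    """Stack the real-valued outputs of all base learners."""
    return np.column_stack([X[:, idx] @ w for idx, w in bases])

class BayesFuser:
    """Gaussian naive-Bayes layer over base-learner scores; refitting it
    on a small fresh sample adapts the ensemble without retraining bases."""
    def fit(self, S, y):
        self.stats = {c: (S[y == c].mean(0), S[y == c].var(0) + 1e-6)
                      for c in (0, 1)}
        return self
    def predict(self, S):
        ll = []
        for c in (0, 1):
            m, v = self.stats[c]
            ll.append(-0.5 * (((S - m) ** 2) / v + np.log(v)).sum(1))
        return (ll[1] > ll[0]).astype(int)

# toy defect data: two classes in 20 features, class 1 shifts 5 of them
X = rng.normal(size=(300, 20))
y = (rng.random(300) < 0.5).astype(int)
X[y == 1, :5] += 1.5

bases = [fit_base(X, y, rng.choice(20, 8, replace=False)) for _ in range(10)]
fuser = BayesFuser().fit(scores(bases, X), y)
acc = (fuser.predict(scores(bases, X)) == y).mean()
```

Because only the lightweight Bayes layer needs refitting when the process drifts, a small newly labeled sample suffices to adapt the classifier without retraining the base learners, which mirrors the update strategy the abstract describes.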
NASA Technical Reports Server (NTRS)
Shiller, Alan M.
2003-01-01
It is well-established that sampling and sample processing can easily introduce contamination into dissolved trace element samples if precautions are not taken. However, work in remote locations sometimes precludes bringing bulky clean lab equipment into the field and likewise may make timely transport of samples to the lab for processing impossible. Straightforward syringe filtration methods are described here for collecting small quantities (15 mL) of 0.45- and 0.02-microm filtered river water in an uncontaminated manner. These filtration methods take advantage of recent advances in analytical capabilities that require only small amounts of water for analysis of a suite of dissolved trace elements. Filter clogging and solute rejection artifacts appear to be minimal, although some adsorption of metals and organics does affect the first approximately 10 mL of water passing through the filters. Overall the methods are clean, easy to use, and provide reproducible representations of the dissolved and colloidal fractions of trace elements in river waters. Furthermore, sample processing materials can be prepared well in advance in a clean lab and transported cleanly and compactly to the field. Application of these methods is illustrated with data from remote locations in the Rocky Mountains and along the Yukon River. Evidence from field flow fractionation suggests that the 0.02-microm filters may provide a practical cutoff to distinguish metals associated with small inorganic and organic complexes from those associated with silicate and oxide colloids.
A universal TaqMan-based RT-PCR protocol for cost-efficient detection of small noncoding RNA.
Jung, Ulrike; Jiang, Xiaoou; Kaufmann, Stefan H E; Patzel, Volker
2013-12-01
Several methods for the detection of RNA have been developed over time. For small RNA detection, a stem-loop reverse primer-based protocol relying on TaqMan RT-PCR has been described. This protocol requires an individual specific TaqMan probe for each target RNA and, hence, is highly cost-intensive for experiments with small sample sizes or large numbers of different samples. We describe a universal TaqMan-based probe protocol which can be used to detect any target sequence and demonstrate its applicability for the detection of endogenous as well as artificial eukaryotic and bacterial small RNAs. While the specific and the universal probe-based protocols showed the same sensitivity, the absolute sensitivity of detection was found to be more than 100-fold lower for both than previously reported. In subsequent experiments, we found previously unknown limitations intrinsic to the method affecting its feasibility in determination of mature template RISC incorporation as well as in multiplexing. Both protocols were equally specific in discriminating between correct and incorrect small RNA targets or between mature miRNA and its unprocessed RNA precursor, indicating that the stem-loop RT primer, not the TaqMan probe, confers target specificity. The presented universal TaqMan-based RT-PCR protocol represents a cost-efficient method for the detection of small RNAs.
da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C
2009-05-30
Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
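The sampling burden for such rare events follows directly from binomial counting statistics; the small helpers below (hypothetical names, not code from the paper) show why on the order of a thousand disector sites are needed when the pathway of interest contributes only ~0.2% of synapses:

```python
import math

def rel_error(p, n):
    """Relative standard error of a proportion p estimated from n
    independent sampling sites (binomial counting statistics)."""
    return math.sqrt(p * (1 - p) / n) / p

def sites_for_error(p, target):
    """Number of sites needed to push the relative standard error below
    `target`, from n >= (1 - p) / (p * target**2)."""
    return math.ceil((1 - p) / (p * target**2))

# a pathway contributing ~0.2% of all synapses, sampled at ~1000 sites
err_1000 = rel_error(0.002, 1000)     # ~0.71: still only a rough estimate
n_half = sites_for_error(0.002, 0.5)  # sites needed for 50% relative error
```

With 1000 sites one expects only about two labeled synapses, hence a relative error near 70%; halving the error quadruples the required number of sites, which is why a rare-event strategy borrowed from particle physics is needed to keep the counting time reasonable.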
High-speed AFM and the reduction of tip-sample forces
NASA Astrophysics Data System (ADS)
Miles, Mervyn; Sharma, Ravi; Picco, Loren
High-speed DC-mode AFM has been shown to be routinely capable of imaging at video rate and, if required, at over 1000 frames per second. At sufficiently high tip-sample velocities in ambient conditions, the tip lifts off the sample surface in a superlubricity process which reduces the level of shear forces imposed on the sample by the tip and therefore reduces the potential damage and distortion of the sample being imaged. High-frequency mechanical oscillations, both lateral and vertical, have been reported to reduce the tip-sample frictional forces. We have investigated the effect of combining linear high-speed scanning with these small-amplitude high-frequency oscillations with the aim of further reducing the force interaction in high-speed imaging. Examples of this new version of high-speed AFM imaging will be presented for biological samples.
Vegh, Viktor; Reutens, David C.
2016-01-01
Object: We studied the feasibility of generating the variable magnetic fields required for ultra-low field nuclear magnetic resonance relaxometry with dynamically adjustable permanent magnets. Our motivation was to substitute traditional electromagnets by distributed permanent magnets, increasing system portability. Materials and Methods: The finite element method (COMSOL®) was employed for the numerical study of a small permanent magnet array to calculate achievable magnetic field strength, homogeneity, switching time and magnetic forces. A manually operated prototype was simulated and constructed to validate the numerical approach and to verify the generated magnetic field. Results: A concentric small permanent magnet array can be used to generate strong sample pre-polarisation and variable measurement fields for ultra-low field relaxometry via simple prescribed magnet rotations. Using the array, it is possible to achieve a pre-polarisation field strength above 100 mT and variable measurement fields ranging from 20–50 μT with 200 ppm absolute field homogeneity within a field-of-view of 5 x 5 x 5 cubic centimetres. Conclusions: A dynamic small permanent magnet array can generate multiple highly homogeneous magnetic fields required in ultra-low field nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI) instruments. This design can significantly reduce the volume and energy requirements of traditional systems based on electromagnets, improving portability considerably. PMID:27271886
Recording 2-D Nutation NQR Spectra by Random Sampling Method
Sinyavsky, Nikolaj; Jadzyn, Maciej; Ostafin, Michal; Nogaj, Boleslaw
2010-01-01
The method of random sampling was introduced for the first time in the nutation nuclear quadrupole resonance (NQR) spectroscopy where the nutation spectra show characteristic singularities in the form of shoulders. The analytic formulae for complex two-dimensional (2-D) nutation NQR spectra (I = 3/2) were obtained and the condition for resolving the spectral singularities for small values of an asymmetry parameter η was determined. Our results show that the method of random sampling of a nutation interferogram allows significant reduction of time required to perform a 2-D nutation experiment and does not worsen the spectral resolution. PMID:20949121
Quantification of Protozoa and Viruses from Small Water Volumes
Bonilla, J. Alfredo; Bonilla, Tonya D.; Abdelzaher, Amir M.; Scott, Troy M.; Lukasik, Jerzy; Solo-Gabriele, Helena M.; Palmer, Carol J.
2015-01-01
Large sample volumes are traditionally required for the analysis of waterborne pathogens. The need for large volumes greatly limits the number of samples that can be processed. The goals of this study were to compare extraction and detection procedures for quantifying protozoan parasites and viruses from small volumes of marine water. The intent was to evaluate a logistically simpler method of sample collection and processing that would facilitate direct pathogen measures as part of routine monitoring programs. Samples were collected simultaneously using a bilayer device with protozoa capture by size (top filter) and viruses capture by charge (bottom filter). Protozoan detection technologies utilized for recovery of Cryptosporidium spp. and Giardia spp. were qPCR and the more traditional immunomagnetic separation—IFA-microscopy, while virus (poliovirus) detection was based upon qPCR versus plaque assay. Filters were eluted using reagents consistent with the downstream detection technologies. Results showed higher mean recoveries using traditional detection methods over qPCR for Cryptosporidium (91% vs. 45%) and poliovirus (67% vs. 55%) whereas for Giardia the qPCR-based methods were characterized by higher mean recoveries (41% vs. 28%). Overall mean recoveries are considered high for all detection technologies. Results suggest that simultaneous filtration may be suitable for isolating different classes of pathogens from small marine water volumes. More research is needed to evaluate the suitability of this method for detecting pathogens at low ambient concentration levels. PMID:26114244
In vitro platelet quality in storage containers used for pediatric transfusions.
Weiss, Sandra; Scammell, Kenneth; Levin, Elena; Culibrk, Brankica; Zolfaghari, Sima; Gyöngyössy-Issa, Maria I C; Acker, Jason P
2012-08-01
The in vitro quality of small-volume platelet (PLT) aliquots for pediatric transfusions was assessed to determine the best practice approach. Small volumes (50 mL) of single apheresis PLT components (APCs), collected on either CaridianBCT Trima or Haemonetics MCS+ instruments, were aliquoted on Days 2, 3, 4, and 5 postcollection into Fenwal PL1240 or 4R2014 bags or 60-mL polypropylene syringes. Samples were tested for in vitro quality at their recommended expiry times (4 hr for 4R2014 bags and syringes or Day 5 for PL1240 bags). Assays included pH, CD62P expression, and metabolic measures. CD62P expression increased throughout storage in all containers. Among the small-volume containers, pH, pCO(2) , lactate, and bicarbonate varied considerably. Regardless of the day of aliquoting, pCO(2) was significantly higher and pO(2) was significantly lower in gas-impermeable syringes than other containers. No bacterial growth was detected in any sample. The quality of APCs aliquoted into small-volume containers meets regulatory requirements and is generally equivalent to that of full-volume APCs at expiry. © 2012 American Association of Blood Banks.
Materials identification using a small-scale pixellated x-ray diffraction system
NASA Astrophysics Data System (ADS)
O'Flynn, D.; Crews, C.; Drakos, I.; Christodoulou, C.; Wilson, M. D.; Veale, M. C.; Seller, P.; Speller, R. D.
2016-05-01
A transmission x-ray diffraction system has been developed using a pixellated, energy-resolving detector (HEXITEC) and a small-scale, mains operated x-ray source (Amptek Mini-X). HEXITEC enables diffraction to be measured without the requirement of incident spectrum filtration, or collimation of the scatter from the sample, preserving a large proportion of the useful signal compared with other diffraction techniques. Due to this efficiency, sufficient molecular information for material identification can be obtained within 5 s despite the relatively low x-ray source power. Diffraction data are presented from caffeine, hexamine, paracetamol, plastic explosives and narcotics. The capability to determine molecular information from aspirin tablets inside their packaging is demonstrated. Material selectivity and the potential for a sample classification model is shown with principal component analysis, through which each different material can be clearly resolved.
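A minimal sketch of the principal component analysis step on synthetic diffraction spectra (the peak positions and noise level are invented stand-ins, not real caffeine or paracetamol data) shows how different materials separate in component space:

```python
import numpy as np

rng = np.random.default_rng(0)
E = np.linspace(10, 60, 200)          # energy axis in keV (arbitrary grid)

def spectrum(peaks, noise=0.02):
    """Synthetic energy-dispersive spectrum: Gaussian peaks at
    material-specific positions plus measurement noise."""
    s = sum(np.exp(-0.5 * ((E - p) / 0.8) ** 2) for p in peaks)
    return s + rng.normal(0, noise, E.size)

materials = {"A": [18, 27, 41], "B": [15, 33, 44]}   # hypothetical peak sets
X = np.array([spectrum(pk) for pk in materials.values() for _ in range(20)])
labels = np.repeat(list(materials), 20)

# PCA via SVD of the mean-centred data matrix
Xc = X - X.mean(axis=0)
U, sv, Vt = np.linalg.svd(Xc, full_matrices=False)
pc = Xc @ Vt[:2].T                    # scores on the first two components

# the two materials should separate cleanly along the first component
gap = abs(pc[labels == "A", 0].mean() - pc[labels == "B", 0].mean())
```

With real measured spectra the same projection yields one cluster per material, and a classification model can then be built on the leading components, which is the kind of material selectivity the abstract reports.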
Small Body Exploration Technologies as Precursors for Interstellar Robotics
NASA Astrophysics Data System (ADS)
Noble, R. J.; Sykes, M. V.
The scientific activities undertaken to explore our Solar System will be very similar to those required someday at other stars. The systematic exploration of primitive small bodies throughout our Solar System requires new technologies for autonomous robotic spacecraft. These diverse celestial bodies contain clues to the early stages of the Solar System's evolution, as well as information about the origin and transport of water-rich and organic material, the essential building blocks for life. They will be among the first objects studied at distant star systems. The technologies developed to address small body and outer planet exploration will form much of the technical basis for designing interstellar robotic explorers. The Small Bodies Assessment Group, which reports to NASA, initiated a Technology Forum in 2011 that brought together scientists and technologists to discuss the needs and opportunities for small body robotic exploration in the Solar System. Presentations and discussions occurred in the areas of mission and spacecraft design, electric power, propulsion, avionics, communications, autonomous navigation, remote sensing and surface instruments, sampling, intelligent event recognition, and command and sequencing software. In this paper, the major technology themes from the Technology Forum are reviewed, and suggestions are made for developments that will have the largest impact on realizing autonomous robotic vehicles capable of exploring other star systems.
NASA Astrophysics Data System (ADS)
Salerno, Antonio; de la Fuente, Isabel; Hsu, Zack; Tai, Alan; Chang, Hammer; McNamara, Elliott; Cramer, Hugo; Li, Daoping
2018-03-01
In next generation Logic devices, overlay control requirements shrink to sub-2.5nm on-product overlay. Historically, on-product overlay has been defined by the overlay capability of after-develop in-scribe targets. However, due to their design and dimensions, the after-development metrology targets are not completely representative of the final overlay of the device. In addition, they are confined to the scribe-lane area, which limits the sampling possibilities. To address these two issues, metrology on structures matching the device structure, which can be sampled with high density across the device, is required. Conventional after-etch CDSEM techniques on logic devices present difficulties in discerning the layers of interest and potentially destructive charging effects, and they are limited by long measurement times [1][2][3]. Altogether, these factors limit the achievable sampling densities and make CDSEM less attractive for control applications. Optical metrology can overcome most of these limitations. Such measurement, however, does require repetitive structures. This requirement is not fulfilled by logic devices, as the features vary in pitch and CD over the exposure field. The solution is to use small targets, with a maximum pad size of 5x5 um2, which can easily be placed in the logic cell area. These targets share the process and architecture of the device features of interest, but with a modified design that replicates as closely as possible the device layout, allowing for in-device metrology for both CD and overlay. This solution enables measuring closer to the actual product feature location and, not being limited to scribe-lanes, it opens the possibility of higher-density sampling schemes across the field. In summary, these targets become the facilitator of in-device metrology (IDM), enabling measurement of both in-device overlay and the CD parameters of interest, and can deliver accurate, high-throughput, dense, after-etch measurements for Logic.
Overlay improvements derived from a densely sampled overlay map measured with 5x5 um2 In-Device Metrology (IDM) targets were investigated on a customer Logic application. In this work we present the main design aspects of the 5x5 um2 IDM targets, as well as results on the improved overlay performance.
NASA Astrophysics Data System (ADS)
Li, Y.; Capatina, D.; D'Amico, K.; Eng, P.; Hawreliak, J.; Graber, T.; Rickerson, D.; Klug, J.; Rigg, P. A.; Gupta, Y. M.
2017-06-01
Coupling laser-driven compression experiments to the x-ray beam at the Dynamic Compression Sector (DCS) at the Advanced Photon Source (APS) of Argonne National Laboratory requires state-of-the-art x-ray focusing, pulse isolation, and diagnostics capabilities. The 100J UV pulsed laser system can be fired once every 20 minutes, so precise alignment and focusing of the x-rays on each new sample must be fast and reproducible. Multiple Kirkpatrick-Baez (KB) mirrors are used to achieve a focal spot size as small as 50 μm at the target, while the strategic placement of scintillating screens, cameras, and detectors allows for fast diagnosis of the beam shape, intensity, and alignment of the sample to the x-ray beam. In addition, a series of x-ray choppers and shutters are used to ensure that the sample is exposed to only a single x-ray pulse (~80 ps) during the dynamic compression event, and these require highly precise synchronization. Details of the technical requirements, layout, and performance of these instruments will be presented. Work supported by DOE/NNSA.
How to justify small-refinery info/control system modernization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haskins, D.E.
1993-05-01
Information and control systems modernization can be justified by successful implementation of advanced process control (APC) in nearly all refineries, even the small ones. However, small refineries require special solutions to meet the challenges of limited resources in both finance and manpower. Based on a number of case studies, a typical small refinery as it operates today is described. A sample information and control system modernization plan is described, and the typical costs and benefits show how the project cost can be justified. Business objectives of an HPI plant are to satisfy customers by providing specific products, to satisfy the owners by maximizing profits, and to satisfy the public by being safe and environmentally correct. Managers have always tried to meet these objectives with functions for the total plant.
The nail and hair in forensic science.
Daniel, C Ralph; Piraccini, Bianca Maria; Tosti, Antonella
2004-02-01
Drugs, chemicals, and biological substances accumulate and are stored in hair and nails, where they can be detected and measured. Advantages of analyzing hair and nail samples also include their easy and non-invasive collection, the small sample size required for analysis, and their easy storage at room temperature. We report three examples of heavy metal poisoning diagnosed on the basis of hair or nail symptoms. Drugs and toxins that can be detected in hair and nails are reviewed, and the application of hair/nail analysis in general and in forensic medicine is discussed.
Gray, John E.; Lee, G.K.; O'Leary, R. M.; Theodorakos, P.M.
1999-01-01
In the summer of 1991, we conducted a reconnaissance geochemical survey around the Fortyseven Creek Au-As-Sb-W prospect that is located in the southwestern part of the Sleetmute quadrangle. At that time, this project was a small part of a more comprehensive Alaska Mineral Resource Assessment Program (AMRAP) study of the Sleetmute quadrangle. AMRAP studies were conducted by the U.S. Geological Survey (USGS) to fulfill requirements of the Alaska National Interests Lands Conservation Act (Public Law 96-487, 1980) to survey certain federal lands to determine their mineral potential. Although AMRAP is no longer in operation, this study represents a small topical study that was conducted during the Sleetmute quadrangle AMRAP study. The objective of the Fortyseven Creek work was to characterize the geochemistry of samples collected downstream from the Fortyseven Creek prospect, as well as mineralized and altered rock samples collected from the prospect. In this report, we describe the samples collected in 1991, the methods used for the analysis of the samples, and the geochemical data for these samples. The data in this report are also available in digital form on computer diskette in Gray and others (1999). An interpretation of these data appears in Gray and others (1998).
NASA Astrophysics Data System (ADS)
OBrien, R. E.; Ridley, K. J.; Canagaratna, M. R.; Croteau, P.; Budisulistiorini, S. H.; Cui, T.; Green, H. S.; Surratt, J. D.; Jayne, J. T.; Kroll, J. H.
2016-12-01
A thorough understanding of the sources, evolution, and budgets of atmospheric organic aerosol requires widespread measurements of the amount and chemical composition of atmospheric organic carbon in the condensed phase (within particles and water droplets). Collecting such datasets requires substantial spatial and temporal (long term) coverage, which can be challenging when relying on online measurements by state-of-the-art research-grade instrumentation (such as those used in atmospheric chemistry field studies). Instead, samples are routinely collected using relatively low-cost techniques, such as aerosol filters, for offline analysis of their chemical composition. However, measurements made by online and offline instruments can be fundamentally different, leading to disparities between data from field studies and those from more routine monitoring. To better connect these two approaches, and take advantage of the benefits of each, we have developed a method to introduce collected samples into online aerosol instruments using nebulization. Because nebulizers typically require tens to hundreds of milliliters of solution, limiting this technique to large samples, we developed a new, ultrasonic micro-nebulizer that requires only small volumes (tens of microliters) of sample for chemical analysis. The nebulized (resuspended) sample is then sent into a high-resolution Aerosol Mass Spectrometer (AMS), a widely-used instrument that provides key information on the chemical composition of aerosol particulate matter (elemental ratios, carbon oxidation state, etc.), measurements that are not typically made for collected atmospheric samples. Here, we compare AMS data collected using standard on-line techniques with our offline analysis, demonstrating the utility of this new technique to aerosol filter samples. We then apply this approach to organic aerosol filter samples collected in remote regions, as well as rainwater samples from across the US. 
These data provide information on sample composition and on changes in key chemical characteristics across locations and seasons.
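One AMS-derived quantity mentioned above, the mean carbon oxidation state, is commonly approximated from the elemental ratios as OSc ≈ 2·O/C − H/C. A minimal sketch of that arithmetic; the example ratio values are illustrative, not measurements from this work:

```python
def carbon_oxidation_state(o_to_c, h_to_c):
    """Approximate mean carbon oxidation state of organic aerosol
    from AMS-derived elemental ratios: OSc ~ 2*O/C - H/C."""
    return 2 * o_to_c - h_to_c

# Illustrative example: moderately aged organic aerosol
# with O/C = 0.7 and H/C = 1.4.
print(carbon_oxidation_state(0.7, 1.4))  # → 0.0
```

More oxidized (aged) aerosol drives OSc upward, which is why the quantity is useful for tracking chemical evolution across locations and seasons.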
Hollow fiber-optic Raman probes for small experimental animals
NASA Astrophysics Data System (ADS)
Katagiri, Takashi; Hattori, Yusuke; Suzuki, Toshiaki; Matsuura, Yuji; Sato, Hidetoshi
2007-02-01
Two types of hollow fiber-optic probes are developed to measure the in vivo Raman spectra of small animals. One is a miniaturized probe end-sealed with a micro-ball lens; the spectra it measures reflect information from the sample's subsurface. This probe is used for measurements of the esophagus and the stomach via an endoscope. The other is a confocal Raman probe consisting of a single fiber and a lens system, integrated into a handheld microscope. A simple and small multimodal probe is realized because the hollow optical fiber requires no optical filters. The performance of each probe is examined, and the effectiveness of these probes for in vivo Raman spectroscopy is demonstrated in animal tests.
Extrapolation of rotating sound fields.
Carley, Michael
2018-03-01
A method is presented for the computation of the acoustic field around a tonal circular source, such as a rotor or propeller, based on an exact formulation which is valid in the near and far fields. The only input data required are the pressure field sampled on a cylindrical surface surrounding the source, with no requirement for acoustic velocity or pressure gradient information. The formulation is approximated with exponentially small errors and appears to require input data at a theoretically minimal number of points. The approach is tested numerically, with and without added noise, and demonstrates excellent performance, especially when compared to extrapolation using a far-field assumption.
Evaluation of counting methods for oceanic radium-228
NASA Astrophysics Data System (ADS)
Orr, James C.
1988-07-01
Measurement of open ocean 228Ra is difficult, typically requiring at least 200 L of seawater. The burden of collecting and processing these large-volume samples severely limits the widespread use of this promising tracer. To use smaller-volume samples, a more sensitive means of analysis is required. To seek out new and improved counting methods, conventional 228Ra counting methods have been compared with some promising techniques currently used for other radionuclides. Of the conventional methods, α spectrometry possesses the highest efficiency (3-9%) and lowest background (0.0015 cpm), but it suffers from the need for complex chemical processing after sampling and the need to allow about 1 year for adequate ingrowth of the 228Th granddaughter. The other two conventional counting methods measure the short-lived 228Ac daughter while it remains supported by 228Ra, thereby avoiding the complex sample processing and the long delay before counting. The first of these, high-resolution γ spectrometry, offers the simplest processing and an efficiency (4.8%) comparable to α spectrometry; yet its high background (0.16 cpm) and substantial equipment cost (~30,000) limit its widespread use. The second no-wait method, β-γ coincidence spectrometry, also offers comparable efficiency (5.3%), but it possesses both lower background (0.0054 cpm) and lower initial cost (~12,000). Three new (i.e., untried for 228Ra) techniques all seem to promise about a fivefold increase in efficiency over conventional methods. By employing liquid scintillation methods, both α spectrometry and β-γ coincidence spectrometry can improve their counter efficiency while retaining low background. The third new 228Ra counting method could be adapted from a technique which measures 224Ra by 220Rn emanation.
After allowing for ingrowth and then counting the 224Ra great-granddaughter, 228Ra could be back-calculated, thereby yielding a method with high efficiency in which no sample processing is required. The efficiency and background of each of the three new methods have been estimated and are compared with those of the three methods currently employed to measure oceanic 228Ra. From efficiency and background, the relative figure of merit and the detection limit have been determined for each of the six counters. These data suggest that the new counting methods have the potential to measure most 228Ra samples with just 30 L of seawater, to better than 5% precision. Not only would this reduce the time, effort, and expense involved in sample collection, but 228Ra could then be measured on many small-volume samples (20-30 L) previously collected with only 226Ra in mind. By measuring 228Ra quantitatively on such small-volume samples, three analyses (large-volume 228Ra, large-volume 226Ra, and small-volume 226Ra) could be reduced to one, thereby dramatically improving analytical precision.
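The relative figure of merit used to compare counters is conventionally taken as efficiency squared divided by background. A minimal sketch of that comparison, assuming this convention and using the efficiencies and backgrounds quoted above (the α-spectrometry efficiency is taken at the midpoint of the quoted 3-9% range):

```python
# Counter comparison for oceanic 228Ra, assuming the common convention
# FOM = efficiency^2 / background. Values are from the abstract:
# (detection efficiency as a fraction, background in counts per minute).
counters = {
    "alpha spectrometry":     (0.060, 0.0015),  # midpoint of 3-9%
    "gamma spectrometry":     (0.048, 0.16),
    "beta-gamma coincidence": (0.053, 0.0054),
}

def figure_of_merit(efficiency, background_cpm):
    """Higher is better: more signal per unit of background noise."""
    return efficiency ** 2 / background_cpm

for name, (eff, bg) in sorted(counters.items(),
                              key=lambda kv: -figure_of_merit(*kv[1])):
    print(f"{name:24s} FOM = {figure_of_merit(eff, bg):.4f}")
```

Under this convention, α spectrometry's very low background dominates despite its modest efficiency, while high-resolution γ spectrometry ranks last, consistent with the abstract's conclusions.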
Sampling requirements for forage quality characterization of rectangular hay bales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheaffer, C.C.; Martin, N.P.; Jewett, J.G.
2000-02-01
Commercial lots of alfalfa (Medicago sativa L.) hay are often bought and sold on the basis of forage quality. Proper sampling is essential to obtain accurate forage quality results for pricing of alfalfa hay, but information about sampling is limited to small, 20- to 40-kg rectangular bales. Their objectives were to determine the within-bale variation in 400-kg rectangular bales and to determine the number and distribution of core samples required to represent the crude protein (CP), acid detergent fiber (ADF), neutral detergent fiber (NDF), and dry matter (DM) concentration in commercial lots of alfalfa hay. Four bales were selected from each of three hay lots and core sampled nine times per side for a total of 54 cores per bale. There was no consistent pattern of forage quality variation within bales. Averaged across lots, any portion of a bale was highly correlated with bale grand means for CP, ADF, NDF, and DM. Three lots of hay were probed six times per bale, one core per bale side from 55, 14, and 14 bales per lot. For determination of CP, ADF, NDF, and DM concentration, total core numbers required to achieve an acceptable standard error (SE) were minimized by sampling once per bale. Bootstrap analysis of data from the most variable hay lot suggested that forage quality of any lot of 400-kg alfalfa hay bales should be adequately represented by 12 bales sampled once per bale.
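The core-count recommendation above rests on the standard-error relationship SE = s/√n: given the between-core standard deviation, the number of cores needed for a target SE follows directly. A minimal sketch of that calculation; the standard deviation and target values are illustrative, not numbers from the study:

```python
import math

def cores_required(sd, target_se):
    """Smallest n such that SE = sd / sqrt(n) <= target_se."""
    return math.ceil((sd / target_se) ** 2)

# Illustrative only: a between-core standard deviation of 1.2
# percentage points of crude protein, and a target SE of 0.35 points.
print(cores_required(1.2, 0.35))
```

Because n grows with the square of sd/SE, spreading cores across many bales (one core per bale, as the study recommends) reduces the effective sd faster than taking repeated cores in the same bale.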
William J. Zielinski; Fredrick V. Schlexer; T. Luke George; Kristine L. Pilgrim; Michael K. Schwartz
2013-01-01
The Point Arena mountain beaver (Aplodontia rufa nigra) is federally listed as an endangered subspecies that is restricted to a small geographic range in coastal Mendocino County, California. Management of this imperiled taxon requires accurate information on its demography and vital rates. We developed noninvasive survey methods, using hair snares to sample DNA and to...
NASA Technical Reports Server (NTRS)
Panzarella, Charles
2004-01-01
As humans prepare for the exploration of our solar system, there is a growing need for miniaturized medical and environmental diagnostic devices for use on spacecraft, especially during long-duration space missions where size and power requirements are critical. In recent years, the biochip (or Lab-on-a-Chip) has emerged as a technology that might be able to satisfy this need. In generic terms, a biochip is a miniaturized microfluidic device analogous to the electronic microchip that ushered in the digital age. It consists of tiny microfluidic channels, pumps, and valves that transport small amounts of sample fluids to biosensors that can perform a variety of tests on those fluids in near real time. It has the obvious advantages of being small and lightweight, requiring smaller volumes of sample fluids and reagents, and being more sensitive and efficient than larger devices currently in use. Some of the desired space-based applications would be to provide smaller, more robust devices for analyzing blood, saliva, and urine and for testing water and food supplies for the presence of harmful contaminants and microorganisms. Our group has undertaken the goal of adapting, as well as improving upon, current biochip technology for use in long-duration microgravity environments.
Accurate high-speed liquid handling of very small biological samples.
Schober, A; Günther, R; Schwienhorst, A; Döring, M; Lindemann, B F
1993-08-01
Molecular biology techniques require the accurate pipetting of buffers and solutions with volumes in the microliter range. Traditionally, hand-held pipetting devices are used to fulfill these requirements, but many laboratories have also introduced robotic workstations for the handling of liquids. Piston-operated pumps are commonly used in manually as well as automatically operated pipettors. These devices cannot meet the demands for extremely accurate pipetting of very small volumes at the high speed that would be necessary for certain applications (e.g., in sequencing projects with high throughput). In this paper we describe a technique for the accurate microdispensation of biochemically relevant solutions and suspensions with the aid of a piezoelectric transducer. It is suitable for liquids with viscosities between 0.5 and 500 millipascal seconds (mPa·s). The obtainable drop sizes range from 5 picoliters to a few nanoliters, with up to 10,000 drops per second. Liquids can be dispensed in single or accumulated drops to handle a wide volume range. The system proved to be highly suitable for the handling of biological samples. It did not show any detectable negative impact on the biological function of dissolved or suspended molecules or particles.
NASA Technical Reports Server (NTRS)
Peterson, Thomas M.
2001-01-01
The next series of planetary exploration missions requires a method of extracting rock and soil core samples. Therefore a prototype ultrasonic core driller (UTCD) was developed to meet the constraints of Small Bodies Exploration and Mars Sample Return Missions. The constraints in the design are size, weight, power, and axial loading. The ultrasonic transducer requires a relatively low axial load, which is one of the reasons this technology was chosen. The ultrasonic generator breadboard section can be contained within the 5x5x3 limits and weighs less than two pounds. Based on the results attained, the objectives for the first phase were achieved. A number of transducer probes were made and tested. One version only drills, and the other will actually provide a small core from a rock. Because of a more efficient transducer/probe, it will run at very low power (less than 5 Watts) and still drill/core. The prototype generator was built to allow for variation of all the performance-affecting elements of the transducer/probe/end effector, i.e., pulse, duty cycle, frequency, etc. The heart of the circuitry is what will be converted to a surface-mount board for the next phase, after all the parameters have been optimized and the microprocessor feedback can be installed.
Real-Time Microfluidic Blood-Counting System for PET and SPECT Preclinical Pharmacokinetic Studies.
Convert, Laurence; Lebel, Réjean; Gascon, Suzanne; Fontaine, Réjean; Pratte, Jean-François; Charette, Paul; Aimez, Vincent; Lecomte, Roger
2016-09-01
Small-animal nuclear imaging modalities have become essential tools in the development process of new drugs, diagnostic procedures, and therapies. Quantification of metabolic or physiologic parameters is based on pharmacokinetic modeling of radiotracer biodistribution, which requires the blood input function in addition to tissue images. Such measurements are challenging in small animals because of their small blood volume. In this work, we propose a microfluidic counting system to monitor rodent blood radioactivity in real time, with high efficiency and small detection volume (∼1 μL). A microfluidic channel is built directly above unpackaged p-i-n photodiodes to detect β-particles with maximum efficiency. The device is embedded in a compact system comprising dedicated electronics, shielding, and pumping unit controlled by custom firmware to enable measurements next to small-animal scanners. Data corrections required to use the input function in pharmacokinetic models were established using calibrated solutions of the most common PET and SPECT radiotracers. Sensitivity, dead time, propagation delay, dispersion, background sensitivity, and the effect of sample temperature were characterized. The system was tested for pharmacokinetic studies in mice by quantifying myocardial perfusion and oxygen consumption with (11)C-acetate (PET) and by measuring the arterial input function using (99m)TcO4 (-) (SPECT). Sensitivity for PET isotopes reached 20%-47%, a 2- to 10-fold improvement relative to conventional catheter-based geometries. Furthermore, the system detected (99m)Tc-based SPECT tracers with an efficiency of 4%, an outcome not possible through a catheter. Correction for dead time was found to be unnecessary for small-animal experiments, whereas propagation delay and dispersion within the microfluidic channel were accurately corrected. Background activity and sample temperature were shown to have no influence on measurements. 
Finally, the system was successfully used in animal studies. A fully operational microfluidic blood-counting system for preclinical pharmacokinetic studies was developed. Microfluidics enabled reliable and high-efficiency measurement of the blood concentration of most common PET and SPECT radiotracers with high temporal resolution in small blood volume. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
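Turning raw counts from a blood counter like this into an activity concentration for an input function requires, at minimum, efficiency, volume, and decay corrections. The sketch below is a simplified illustration of that arithmetic under stated assumptions (single isotope, negligible dead time, decay-correction back to injection time); it is not the authors' actual calibration chain:

```python
import math

def blood_activity_kbq_per_ul(counts, live_time_s, efficiency,
                              volume_ul, half_life_s, t_since_injection_s):
    """Efficiency- and decay-corrected blood activity concentration (kBq/uL).

    counts          : events recorded by the detector
    efficiency      : detection efficiency (fraction, e.g. 0.20)
    volume_ul       : detection volume in microliters (~1 uL here)
    half_life_s     : isotope half-life in seconds
    t_since_injection_s : elapsed time, for decay correction to t=0
    """
    rate_bq = counts / live_time_s / efficiency            # disintegrations/s
    decay = math.exp(math.log(2) * t_since_injection_s / half_life_s)
    return rate_bq * decay / volume_ul / 1000.0            # kBq per uL at t=0

# Illustrative: 11C (half-life ~20.4 min), 20% efficiency, 1 uL volume,
# 5000 counts in 1 s, measured 10 min after injection.
print(blood_activity_kbq_per_ul(5000, 1.0, 0.20, 1.0, 20.4 * 60, 600))
```

The abstract notes that dead-time correction proved unnecessary at small-animal activity levels, which is why it is omitted from this sketch.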
Coagulation measurement from whole blood using vibrating optical fiber in a disposable cartridge
NASA Astrophysics Data System (ADS)
Yaraş, Yusuf Samet; Gündüz, Ali Bars; Saǧlam, Gökhan; Ölçer, Selim; Civitçi, Fehmi; Baris, İbrahim; Yaralioǧlu, Göksenin; Urey, Hakan
2017-11-01
In clinics, blood coagulation time measurements are performed using mechanical measurements with blood plasma. Such measurements are challenging to do in a lab-on-a-chip (LoC) system using a small volume of whole blood. Existing LoC systems use indirect measurement principles employing optical or electrochemical methods. We developed an LoC system using mechanical measurements with a small volume of whole blood without requiring sample preparation. The measurement is performed in a microfluidic channel where two fibers are placed inline with a small gap in between. The first fiber operates near its mechanical resonance using remote magnetic actuation and immersed in the sample. The second fiber is a pick-up fiber acting as an optical sensor. The microfluidic channel is engineered innovatively such that the blood does not block the gap between the vibrating fiber and the pick-up fiber, resulting in high signal-to-noise ratio optical output. The control plasma test results matched well with the plasma manufacturer's datasheet. Activated-partial-thromboplastin-time tests were successfully performed also with human whole blood samples, and the method is proven to be effective. Simplicity of the cartridge design and cost of readily available materials enable a low-cost point-of-care device for blood coagulation measurements.
Evaluation of AUC(0-4) predictive methods for cyclosporine in kidney transplant patients.
Aoyama, Takahiko; Matsumoto, Yoshiaki; Shimizu, Makiko; Fukuoka, Masamichi; Kimura, Toshimi; Kokubun, Hideya; Yoshida, Kazunari; Yago, Kazuo
2005-05-01
Cyclosporine (CyA) is the most commonly used immunosuppressive agent in patients who undergo kidney transplantation. Dosage adjustment of CyA is usually based on trough levels. Recently, trough levels have been replaced by the area under the concentration-time curve during the first 4 h after CyA administration (AUC(0-4)). The aim of this study was to compare the predictive values obtained using three different methods of AUC(0-4) monitoring. AUC(0-4) was calculated from 0 to 4 h in early and stable renal transplant patients using the trapezoidal rule. The predicted AUC(0-4) was calculated using three different methods: the multiple regression equation reported by Uchida et al.; Bayesian estimation with modified population pharmacokinetic parameters reported by Yoshida et al.; and modified population pharmacokinetic parameters reported by Cremers et al. The predicted AUC(0-4) was assessed on the basis of predictive bias, precision, and correlation coefficient. The predicted AUC(0-4) values obtained using the three methods from measurement of three blood samples showed small differences in predictive bias, precision, and correlation coefficient. When AUC(0-4) was predicted from a single blood sample from stable renal transplant patients, the performance of the regression equation reported by Uchida depended on sampling time. On the other hand, Bayesian estimation with modified pharmacokinetic parameters reported by Yoshida from a single blood sample, which is not dependent on sampling time, showed only a small difference in the correlation coefficient. Prediction of AUC(0-4) using a regression equation required accurate sampling times. In this study, prediction of AUC(0-4) using Bayesian estimation did not require accurate sampling times for AUC(0-4) monitoring of CyA. Thus Bayesian estimation should be clinically useful in the dosage adjustment of CyA.
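The reference AUC(0-4) in the study was computed with the trapezoidal rule from measured blood concentrations. A minimal sketch of that rule; the concentration values are hypothetical, not study data:

```python
def trapezoidal_auc(times_h, concs):
    """Area under the concentration-time curve by the trapezoidal rule:
    sum over intervals of (t2 - t1) * (c1 + c2) / 2."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for (t1, c1), (t2, c2) in zip(zip(times_h, concs),
                                             zip(times_h[1:], concs[1:])))

# Hypothetical whole-blood CyA concentrations (ng/mL) at 0, 1, 2, 3, 4 h.
times = [0, 1, 2, 3, 4]
concs = [250, 1100, 900, 600, 450]
print(trapezoidal_auc(times, concs))  # ng*h/mL
```

The limited-sampling methods compared in the abstract aim to predict this full-profile quantity from one to three blood draws instead of five.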
Bedside arterial blood gas monitoring system using fluorescent optical sensors
NASA Astrophysics Data System (ADS)
Bartnik, Daniel J.; Rymut, Russell A.
1995-05-01
We describe a bedside arterial blood gas (ABG) monitoring system which uses fluorescent optical sensors in the measurement of blood pH, PCO2, and PO2. The Point-of-Care Arterial Blood Gas Monitoring System consists of the SensiCath™ optical sensor unit manufactured by Optical Sensors Incorporated and the Tram™ Critical Care Monitoring System with ABG Module manufactured by Marquette Electronics Incorporated. Current blood gas measurement techniques require a blood sample to be removed from the patient and transported to an electrochemical analyzer for analysis. The ABG system does not require removal of blood from the patient or transport of the sample. The sensor is added to the patient's existing arterial line. ABG measurements are made by drawing a small blood sample from the arterial line in sufficient quantity to ensure an undiluted sample at the sensor. Measurements of pH, PCO2, and PO2 are made within 60 seconds. The blood is then returned to the patient, the line flushed, and results appear on the bedside monitor. The ABG system offers several advantages over traditional electrochemical analyzers. Since the arterial line remains closed during the blood sampling procedure, the patient's risk of infection is reduced and the caregiver's exposure to blood is eliminated. The single-use, disposable sensor can measure 100 blood samples over 72 hours after a single two-point calibration. Quality Assurance checks are also available and give the caregiver the ability to assess system performance even after the sensor is attached to the patient. The ABG module integrates with an existing bedside monitoring system. This allows ABG results to appear on the same display as ECG, respiration, blood pressure, cardiac output, SpO2, and other clinical information. The small module takes up little space in the crowded intensive care unit. Performance studies compare the ABG system with an electrochemical blood gas analyzer.
Study results demonstrated accurate and precise blood gas measurement of 100 samples and 72 hour performance without need for re-calibration.
Laser Diffraction Techniques Replace Sieving for Lunar Soil Particle Size Distribution Data
NASA Technical Reports Server (NTRS)
Cooper, Bonnie L.; Gonzalez, C. P.; McKay, D. S.; Fruland, R. L.
2012-01-01
Sieving was used extensively until 1999 to determine the particle size distribution of lunar samples. This method is time-consuming and requires more than a gram of material to obtain a result in which one may have confidence. This is demonstrated by the difference in geometric mean and median for samples measured by [1], in which a 14-gram sample produced a geometric mean of approx. 52 micrometers, whereas two other samples of 1.5 grams gave means of approx. 63 and approx. 69 micrometers. Sample allocations for sieving are typically much smaller than a gram, and many of the sample allocations received by our lab are 0.5 to 0.25 grams in mass. Basu [2] has described how the finest fraction of the soil is easily lost in the sieving process, and this effect is compounded when sample sizes are small.
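The geometric mean quoted above is the log-space average of particle sizes, typically mass-weighted over size bins. A minimal sketch computing it from binned data; the bin midpoints and mass fractions are illustrative, not lunar-soil measurements:

```python
import math

def geometric_mean_size(bin_midpoints_um, mass_fractions):
    """Mass-weighted geometric mean particle size (micrometers):
    exp of the weighted mean of log(diameter)."""
    total = sum(mass_fractions)
    log_mean = sum(f * math.log(d)
                   for d, f in zip(bin_midpoints_um, mass_fractions)) / total
    return math.exp(log_mean)

# Illustrative sieve/laser bins (midpoint diameters, um) and mass fractions.
mids = [10, 30, 75, 150, 300]
fracs = [0.15, 0.30, 0.30, 0.15, 0.10]
print(round(geometric_mean_size(mids, fracs), 1))
```

Because the mean is computed in log space, losing the finest bin (as happens in sieving) biases the result upward noticeably, which is consistent with the discrepancies described in the abstract.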
Sharifiyazdi, Hassan; Nazifi, Saeed; Shirzad Aski, Hesamaddin; Shayegh, Hossein
2014-09-01
Hemoplasma is the trivial name for a group of erythrocyte-parasitizing bacteria of the genus Mycoplasma. This study is the first report of hemoplasma infection in the Small Indian mongoose (Herpestes javanicus) based on molecular analysis of 16S rDNA. Whole blood samples were collected by sterile methods from 14 live-captured mongooses in the south of Iran. A Candidatus Mycoplasma turicensis (CMt)-like hemoplasma was detected in blood samples from one animal tested. BLAST search and phylogenetic analysis of the partial 16S rDNA sequence (933 bp) of the hemoplasma from the Small Indian mongoose (KJ530704) revealed only 96-97% identity to the previously described CMt, followed by 95% and 91% similarity with Mycoplasma coccoides and Mycoplasma haemomuris, respectively. Accordingly, the Iranian mongoose CMt isolate showed high intra-specific genetic variation compared to all previously reported CMt strains in GenBank. Further molecular studies using multiple phylogenetic markers are required to characterize the exact species of mongoose-derived hemoplasma. Copyright © 2014 Elsevier Ltd. All rights reserved.
Catch of channel catfish with tandem-set hoop nets and gill nets in lentic systems of Nebraska
Richters, Lindsey K.; Pope, Kevin L.
2011-01-01
Twenty-six Nebraska water bodies representing two ecosystem types (small standing waters and large standing waters) were surveyed during 2008 and 2009 with tandem-set hoop nets and experimental gill nets to determine if similar trends existed in catch rates and size structures of channel catfish Ictalurus punctatus captured with these gears. Gear efficiency was assessed as the number of sets (nets) that would be required to capture 100 channel catfish given observed catch per unit effort (CPUE). Efficiency of gill nets was not correlated with efficiency of hoop nets for capturing channel catfish. Small sample sizes prohibited estimation of proportional size distributions in most surveys; in the four surveys for which sample size was sufficient to quantify length-frequency distributions of captured channel catfish, distributions differed between gears. The CPUE of channel catfish did not differ between small and large water bodies for either gear. While catch rates of hoop nets were lower than rates recorded in previous studies, this gear was more efficient than gill nets at capturing channel catfish. However, comparisons of size structure between gears may be problematic.
TNO/Centaurs grouping tested with asteroid data sets
NASA Astrophysics Data System (ADS)
Fulchignoni, M.; Birlan, M.; Barucci, M. A.
2001-11-01
Recently, we discussed the possible subdivision into a few groups of a sample of 22 TNOs and Centaurs for which BVRIJ photometry was available (Barucci et al., 2001, A&A, 371, 1150). We obtained these results using the multivariate statistics adopted to define the current asteroid taxonomy, namely Principal Components Analysis and the G-mode method (Tholen & Barucci, 1989, in ASTEROIDS II). How do these methods perform on a statistical sample as small as the TNO/Centaur one? Theoretically, the number of degrees of freedom of the sample is adequate: it is 88 in our case, and it must be larger than 50 to meet the requirements of the G-mode. Does a random sampling of a small number of members of a large population contain enough information to reveal some structure in the population? We extracted several samples of 22 asteroids from a database of 86 objects of known taxonomic type for which BVRIJ photometry is available from ECAS (Zellner et al., 1985, ICARUS 61, 355), SMASS II (S.W. Bus, 1999, PhD Thesis, MIT), and the Bell et al. atlas of asteroid infrared spectra. The objects constituting the first sample were selected to give a good representation of the major asteroid taxonomic classes (at least three samples per class): C, S, D, A, and G. Both methods were able to distinguish all these groups, confirming the validity of the adopted methods. The S class is hard to single out as a consequence of the choice of the I and J variables, which implies a lack of information on the absorption band at 1 micron. The other samples were obtained by random selection of objects. Not all the major groups were well represented (fewer than three samples per group), but the general trend of the asteroid taxonomy was always recovered. We conclude that the quoted grouping of TNOs/Centaurs is representative of some physico-chemical structure of the outer solar system small-body population.
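The Principal Components Analysis step can be illustrated on a color-index matrix. The sketch below uses synthetic stand-in data, not the actual TNO/asteroid photometry, and assumes four color indices per object (consistent with 88 degrees of freedom for 22 objects):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in: 22 objects x 4 color indices (e.g. B-V, V-R, R-I, I-J),
# scaled so the first index carries most of the variance.
colors = rng.normal(size=(22, 4)) @ np.diag([1.0, 0.6, 0.3, 0.1])

# PCA: center the data, then eigendecompose the covariance matrix.
centered = colors - colors.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]            # largest variance first
scores = centered @ eigvecs[:, order]        # principal component scores
print(eigvals[order] / eigvals.sum())        # fraction of variance per component
```

Grouping is then done in the space of the first few component scores, where objects with similar spectral gradients cluster together.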
Identification of carbohydrate anomers using ion mobility-mass spectrometry.
Hofmann, J; Hahm, H S; Seeberger, P H; Pagel, K
2015-10-08
Carbohydrates are ubiquitous biological polymers that are important in a broad range of biological processes. However, owing to their branched structures and the presence of stereogenic centres at each glycosidic linkage between monomers, carbohydrates are harder to characterize than are peptides and oligonucleotides. Methods such as nuclear magnetic resonance spectroscopy can be used to characterize glycosidic linkages, but this technique requires milligram amounts of material and cannot detect small amounts of coexisting isomers. Mass spectrometry, on the other hand, can provide information on carbohydrate composition and connectivity for even small amounts of sample, but it cannot be used to distinguish between stereoisomers. Here, we demonstrate that ion mobility-mass spectrometry--a method that separates molecules according to their mass, charge, size, and shape--can unambiguously identify carbohydrate linkage-isomers and stereoisomers. We analysed six synthetic carbohydrate isomers that differ in composition, connectivity, or configuration. Our data show that coexisting carbohydrate isomers can be identified, and relative concentrations of the minor isomer as low as 0.1 per cent can be detected. In addition, the analysis is rapid, and requires no derivatization and only small amounts of sample. These results indicate that ion mobility-mass spectrometry is an effective tool for the analysis of complex carbohydrates. This method could have an impact on the field of carbohydrate synthesis similar to that of the advent of high-performance liquid chromatography on the field of peptide assembly in the late 1970s.
Costa, Sofia R; Kerry, Brian R; Bardgett, Richard D; Davies, Keith G
2006-12-01
The Pasteuria group of endospore-forming bacteria has been studied as a biocontrol agent of plant-parasitic nematodes. Techniques have been developed for its detection and quantification in soil samples, and these mainly focus on observations of endospore attachment to nematodes. Characterization of Pasteuria populations has recently been performed with DNA-based techniques, which usually require the extraction of large numbers of spores. We describe a simple immunological method for the quantification and characterization of Pasteuria populations. Bayesian statistics were used to determine an extraction efficiency of 43% and a threshold of detection of 210 endospores g(-1) sand. This provided a robust means of estimating numbers of endospores in small-volume samples from a natural system. Based on visual assessment of endospore fluorescence, a quantitative method was developed to characterize endospore populations, which were shown to vary according to their host.
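The Bayesian estimation behind the quoted extraction efficiency is not specified in the abstract; a minimal conjugate sketch, assuming hypothetical spike-recovery counts and a uniform Beta(1, 1) prior, might look like:

```python
from statistics import NormalDist

# Hypothetical spike-recovery data: endospores added to sand, then recovered.
spiked = 1000      # endospores added (assumed, for illustration)
recovered = 430    # endospores counted after extraction (assumed)

# Beta(1, 1) prior on extraction efficiency with a binomial likelihood
# gives a Beta(recovered + 1, spiked - recovered + 1) posterior.
a, b = recovered + 1, spiked - recovered + 1
post_mean = a / (a + b)

# Normal approximation to the posterior for a quick 95% interval.
post_sd = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5
z = NormalDist().inv_cdf(0.975)
lo, hi = post_mean - z * post_sd, post_mean + z * post_sd
print(f"efficiency {post_mean:.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
```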
Sampling effects on the identification of roadkill hotspots: Implications for survey design.
Santos, Sara M; Marques, J Tiago; Lourenço, André; Medinas, Denis; Barbosa, A Márcia; Beja, Pedro; Mira, António
2015-10-01
Although locating wildlife roadkill hotspots is essential to mitigate road impacts, the influence of study design on hotspot identification remains uncertain. We evaluated how sampling frequency affects the accuracy of hotspot identification, using a dataset of vertebrate roadkills (n = 4427) recorded over a year of daily surveys along 37 km of roads. "True" hotspots were identified using this baseline dataset, as the 500-m segments where the number of road-killed vertebrates exceeded the upper 95% confidence limit of the mean, assuming a Poisson distribution of road-kills per segment. "Estimated" hotspots were identified likewise, using datasets representing progressively lower sampling frequencies, which were produced by extracting data from the baseline dataset at appropriate time intervals (1-30 days). Overall, 24.3% of segments were "true" hotspots, concentrating 40.4% of roadkills. For different groups, "true" hotspots accounted from 6.8% (bats) to 29.7% (small birds) of road segments, concentrating from <40% (frogs and toads, snakes) to >60% (lizards, lagomorphs, carnivores) of roadkills. Spatial congruence between "true" and "estimated" hotspots declined rapidly with increasing time interval between surveys, due primarily to increasing false negatives (i.e., missing "true" hotspots). There were also false positives (i.e., wrong "estimated" hotspots), particularly at low sampling frequencies. Spatial accuracy decay with increasing time interval between surveys was higher for smaller-bodied (amphibians, reptiles, small birds, small mammals) than for larger-bodied species (birds of prey, hedgehogs, lagomorphs, carnivores). Results suggest that widely used surveys at weekly or longer intervals may produce poor estimates of roadkill hotspots, particularly for small-bodied species. Surveying daily or at two-day intervals may be required to achieve high accuracy in hotspot identification for multiple species. Copyright © 2015 Elsevier Ltd. All rights reserved.
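One plausible reading of the hotspot rule above, applied to synthetic segment counts (the paper's exact construction of the upper 95% confidence limit may differ):

```python
import math

def poisson_upper_limit(lam, conf=0.95):
    """Smallest count k whose Poisson(lam) CDF reaches `conf`."""
    k, pmf = 0, math.exp(-lam)
    cdf = pmf
    while cdf < conf:
        k += 1
        pmf *= lam / k
        cdf += pmf
    return k

# Hypothetical roadkill counts per 500-m segment (synthetic data).
counts = [0, 1, 2, 0, 7, 1, 0, 3, 9, 1, 0, 2]
lam = sum(counts) / len(counts)
threshold = poisson_upper_limit(lam)

# Segments whose counts exceed the upper 95% Poisson limit are hotspots.
hotspots = [i for i, c in enumerate(counts) if c > threshold]
print(threshold, hotspots)  # -> 5 [4, 8]
```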
Yin, Guoyu; Hou, Lijun; Liu, Min; Liu, Zhanfei; Gardner, Wayne S
2014-08-19
Nitrogen (N) pollution in aquatic ecosystems has attracted much attention over the past decades, but the dynamics of this bioreactive element are difficult to measure in aquatic oxygen-transition environments. Nitrogen-transformation experiments often require measurement of (15)N-ammonium ((15)NH4(+)) ratios in small-volume (15)N-enriched samples. Published methods to determine N isotope ratios of dissolved ammonium require large samples and/or costly equipment and effort. We present a novel ("OX/MIMS") method to determine N isotope ratios for (15)NH4(+) in experimental waters previously enriched with (15)N compounds. Dissolved reduced (15)N (dominated by (15)NH4(+)) is oxidized with hypobromite iodine to nitrogen gas ((29)N2 and/or (30)N2) and analyzed by membrane inlet mass spectrometry (MIMS) to quantify (15)NH4(+) concentrations. The N isotope ratios, obtained by comparing the (15)NH4(+) to total ammonium (via autoanalyzer) concentrations, are compared to the ratios of prepared standards. The OX/MIMS method requires only small sample volumes of water (ca. 12 mL) or sediment slurries and is rapid, convenient, accurate, and precise (R(2) = 0.9994, p < 0.0001) over a range of salinities and (15)N/(14)N ratios. It can provide data needed to quantify rates of ammonium regeneration, potential ammonium uptake, and dissimilatory nitrate reduction to ammonium (DNRA). Isotope ratio results agreed closely (R = 0.998, P = 0.001) with those determined independently by isotope ratio mass spectrometry for DNRA measurements or by ammonium isotope retention time shift liquid chromatography for water-column N-cycling experiments. Application of OX/MIMS should simplify experimental approaches and improve understanding of N-cycling rates and fate in a variety of freshwater and marine environments.
McPhail, S M; O'Hara, M; Gane, E; Tonks, P; Bullock-Saxton, J; Kuys, S S
2016-06-01
The Nintendo Wii Fit integrates virtual gaming with body movement, and may be suitable as an adjunct to conventional physiotherapy following lower limb fractures. This study examined the feasibility and safety of using the Wii Fit as an adjunct to outpatient physiotherapy following lower limb fractures, and reports sample size considerations for an appropriately powered randomised trial. Ambulatory patients receiving physiotherapy following a lower limb fracture participated in this study (n=18). All participants received usual care (individual physiotherapy). The first nine participants also used the Wii Fit under the supervision of their treating clinician as an adjunct to usual care. Adverse events, fracture malunion or exacerbation of symptoms were recorded. Pain, balance and patient-reported function were assessed at baseline and discharge from physiotherapy. No adverse events were attributed to either the usual care physiotherapy or Wii Fit intervention for any patient. Overall, 15 (83%) participants completed both assessments and interventions as scheduled. For 80% power in a clinical trial, the number of complete datasets required in each group to detect a small, medium or large effect of the Wii Fit at a post-intervention assessment was calculated at 175, 63 and 25, respectively. The Nintendo Wii Fit was safe and feasible as an adjunct to ambulatory physiotherapy in this sample. When considering a likely small effect size and the 17% dropout rate observed in this study, 211 participants would be required in each clinical trial group. A larger effect size or multiple repeated measures design would require fewer participants. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
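The per-group numbers quoted above can be reproduced with the standard normal-approximation sample-size formula, assuming Cohen's d values of 0.3, 0.5, and 0.8 for small, medium, and large effects and a two-sided alpha of 0.05 (the d values are an inference, not stated in the abstract):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison with standardised effect size d (Cohen's d)."""
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

# Under the assumed effect sizes, this reproduces the 175, 63 and 25
# complete datasets per group quoted above.
print([n_per_group(d) for d in (0.3, 0.5, 0.8)])  # -> [175, 63, 25]
```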
Cortez, Juliana; Pasquini, Celio
2013-02-05
The ring-oven technique, originally applied to classical qualitative analysis from the 1950s to the 1970s, is revisited for use in a simple yet highly efficient and green procedure for analyte preconcentration prior to determination by currently available microanalytical techniques. The proposed preconcentration technique is based on the dropwise delivery of a small volume of sample to a filter paper substrate, assisted by a flow-injection-like system. The filter paper is maintained in a small circular heated oven (the ring oven). Drops of the sample solution diffuse by capillarity from the center to a circular area of the paper substrate. After the total sample volume has been delivered, a ring with a sharp (ca. 350 μm) circular contour, about 2.0 cm in diameter, is formed on the paper and contains most of the analytes originally present in the sample volume. Preconcentration coefficients of the analyte can reach 250-fold (on a m/m basis) for a sample volume as small as 600 μL. The proposed system and procedure have been evaluated for concentrating Na, Fe, and Cu in fuel ethanol, followed by simultaneous direct determination of these species in the ring contour employing the microanalytical technique of laser-induced breakdown spectroscopy (LIBS). Detection limits of 0.7, 0.4, and 0.3 μg mL(-1) and mean recoveries of (109 ± 13)%, (92 ± 18)%, and (98 ± 12)% for Na, Fe, and Cu, respectively, were obtained in fuel ethanol. It is possible to anticipate the application of the technique, coupled to modern microanalytical and multianalyte techniques, to several analytical problems requiring analyte preconcentration and/or sample stabilization.
Devices for SRF material characterization
Goudket, Philippe; Xiao, B.; Junginger, T.
2016-10-07
The surface resistance Rs of superconducting materials can be obtained by measuring the quality factor of an elliptical cavity excited in a transverse magnetic (TM010) mode. The value obtained, however, must be taken as an average over the whole surface. A more convenient way to obtain Rs, especially for materials that are not yet technologically ready for cavity production, is to measure small samples instead. These can be easily manufactured at low cost, duplicated, and placed in film deposition and surface analytical tools. A commonly used design for a device to measure Rs consists of a cylindrical cavity excited in a transverse electric (TE110) mode, with the sample under test serving as one replaceable endplate. Such a cavity has two drawbacks: for reasonably small samples the resonant frequency will be larger than the frequencies of interest for SRF applications, and it requires a reference sample of known Rs. In this article we review several devices that have been designed to overcome these limitations, reaching sub-nΩ resolution in some cases. Some of these devices also cover a parameter space in frequency and temperature that is inaccessible to standard cavity tests, making them ideal tools for testing theoretical surface-resistance models.
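The quality-factor route mentioned above reduces to the relation Rs = G/Q0, where G is the geometry factor of the excited mode; a minimal sketch with assumed values:

```python
# Surface resistance from a cavity quality-factor measurement: Rs = G / Q0,
# where G is the geometry factor of the mode (a few hundred ohms for a
# typical TM010 elliptical cavity; 270 ohm is assumed here for illustration).
G = 270.0      # geometry factor, ohm (assumed)
Q0 = 2.7e10    # measured unloaded quality factor (assumed)

Rs = G / Q0    # average surface resistance over the cavity wall, ohm
print(f"Rs = {Rs * 1e9:.1f} nOhm")  # -> Rs = 10.0 nOhm
```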
Microextraction techniques combined with capillary electrophoresis in bioanalysis.
Kohler, Isabelle; Schappler, Julie; Rudaz, Serge
2013-01-01
Over the past two decades, many environmentally sustainable sample-preparation techniques have been proposed with the objective of reducing the use of toxic organic solvents or substituting these with environmentally friendly alternatives. Microextraction techniques (MEs), in which only a small amount of organic solvent is used, have several advantages, including reduced sample volume, analysis time, and operating costs. Thus, MEs are well suited to bioanalysis, where sample preparation is mandatory because of the complexity of samples that are available only in small quantities (mL or even μL). Capillary electrophoresis (CE) is a powerful and efficient separation technique in which no organic solvents are required for analysis. The combination of CE with MEs is regarded as a very attractive environmentally sustainable analytical tool, and numerous applications have been reported over the last few decades for the bioanalysis of low-molecular-weight compounds and for peptide analysis. In this paper we review the use of MEs combined with CE in bioanalysis. The review is divided into two sections: liquid- and solid-based MEs. A brief practical and theoretical description of each ME is given, and the techniques are illustrated by relevant applications.
A prototype splitter apparatus for dividing large catches of small fish
Stapanian, Martin A.; Edwards, William H.
2012-01-01
Due to financial and time constraints, it is often necessary in fisheries studies to divide large samples of fish and estimate total catch from the subsample. The subsampling procedure may involve potential human biases or may be difficult to perform in rough conditions. We present a prototype gravity-fed splitter apparatus for dividing large samples of small fish (30–100 mm TL). The apparatus features a tapered hopper with a sliding and removable shutter. The apparatus provides a comparatively stable platform for objectively obtaining subsamples, and it can be modified to accommodate different sizes of fish and different sample volumes. The apparatus is easy to build, inexpensive, and convenient to use in the field. To illustrate the performance of the apparatus, we divided three samples (total N = 2,000 fish) composed of four fish species. Our results indicated no significant bias in estimating either the number or proportion of each species from the subsample. Use of this apparatus or a similar apparatus can help to standardize subsampling procedures in large surveys of fish. The apparatus could be used for other applications that require dividing a large amount of material into one or more smaller subsamples.
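Estimating the total catch from a subsample is straightforward scaling; a hedged sketch with hypothetical species counts and an assumed 1/8 split fraction:

```python
import math

# Hypothetical split: the apparatus delivers a 1/8 subsample (fraction
# assumed for illustration), which is counted by species.
fraction = 1 / 8
subsample = {"species A": 110, "species B": 70, "species C": 40}

n_sub = sum(subsample.values())
total_est = round(n_sub / fraction)   # estimated total catch

# Estimated species proportions with simple binomial standard errors.
for species, k in subsample.items():
    p = k / n_sub
    se = math.sqrt(p * (1 - p) / n_sub)
    print(f"{species}: p = {p:.3f} +/- {se:.3f}, est. count = {round(p * total_est)}")
print("estimated total catch:", total_est)
```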
Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.
Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael
2014-10-01
Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication of the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. Here we consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data from FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the convergence behaviour of transition probabilities and steady states differs widely from the real values if one uses the standard deterministic approach with noisy measurements. This supports our argument that, for the analysis of FACS data, one should consider the observed state as a random variable. The second problem we address concerns the consequences of estimating the probability of a cell being in a particular state from measurements of a small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.
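The maximum-likelihood estimate of Markov transition probabilities from observed state sequences is the row-normalised transition-count matrix; the sketch below uses synthetic cell-state sequences, and the abstract's point is precisely that this naive estimator degrades when the observed states are noisy:

```python
import numpy as np

# Synthetic stand-ins for FACS-derived cell-state sequences (3 states).
sequences = [
    [0, 0, 1, 1, 2, 2, 2],
    [0, 1, 1, 2, 2, 2, 2],
    [0, 0, 0, 1, 2, 2, 2],
]
n_states = 3

# Count observed transitions between consecutive states.
counts = np.zeros((n_states, n_states))
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        counts[a, b] += 1

# ML estimate: normalise each row of the count matrix.
P_ml = counts / counts.sum(axis=1, keepdims=True)
print(P_ml)  # each row sums to 1
```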
NASA Astrophysics Data System (ADS)
Xie, Xing; Bahnemann, Janina; Wang, Siwen; Yang, Yang; Hoffmann, Michael R.
2016-02-01
Detection and quantification of pathogens in water is critical for the protection of human health and for drinking water safety and security. When the pathogen concentrations are low, large sample volumes (several liters) are needed to achieve reliable quantitative results. However, most microbial identification methods utilize relatively small sample volumes. As a consequence, a concentration step is often required to detect pathogens in natural waters. Herein, we introduce a novel water sample concentration method based on superabsorbent polymer (SAP) beads. When SAP beads swell with water, small molecules can be sorbed within the beads, but larger particles are excluded and, thus, concentrated in the residual non-sorbed water. To illustrate this approach, millimeter-sized poly(acrylamide-co-itaconic acid) (P(AM-co-IA)) beads are synthesized and successfully applied to concentrate water samples containing two model microorganisms: Escherichia coli and bacteriophage MS2. Experimental results indicate that the size of the water channel within water swollen P(AM-co-IA) hydrogel beads is on the order of several nanometers. The millimeter size coupled with a negative surface charge of the beads are shown to be critical in order to achieve high levels of concentration. This new concentration procedure is very fast, effective, scalable, and low-cost with no need for complex instrumentation.
High-Precision Isotope Ratio Measurements of Sub-Picogram Actinide Samples
NASA Astrophysics Data System (ADS)
Pollington, A. D.; Kinman, W.
2016-12-01
One of the most exciting trends in analytical geochemistry over the past decade is the push towards smaller and smaller sample sizes while simultaneously achieving high precision isotope ratio measurements. This trend has been driven by advances in clean chemistry protocols, and by significant breakthroughs in mass spectrometer ionization efficiency and detector quality (stability and noise for low signals). In this presentation I will focus on new techniques currently being developed at Los Alamos National Laboratory for the characterization of ultra-small samples (pg, fg, ag), with particular focus on actinide measurements by MC-ICP-MS. Analyses of U, Pu, Th and Am are routinely carried out in our facility using multi-ion counting techniques. I will describe some of the challenges associated with using exclusively ion counting methods (e.g., stability, detector cross calibration, etc.), and how we work to mitigate them. While the focus of much of the work currently being carried out is in the broad field of nuclear forensics and safeguards, the techniques that are being developed are directly applicable to many geologic questions that require analyses of small samples of U and Th, for example. In addition to the description of the technique development, I will present case studies demonstrating the precision and accuracy of the method as applied to real-world samples.
Schillaci, Michael A; Schillaci, Mario E
2009-02-01
The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
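Under a normality assumption, the probability that the sample mean lies within k population standard deviations of the true mean has a closed form; the sketch below illustrates the idea (the paper's own method may differ in detail):

```python
from math import sqrt
from statistics import NormalDist

def prob_within(k, n):
    """Probability that the mean of n i.i.d. normal observations falls
    within k population standard deviations of the true mean:
    P(|xbar - mu| <= k*sigma) = 2*Phi(k*sqrt(n)) - 1."""
    return 2 * NormalDist().cdf(k * sqrt(n)) - 1

# For a very small sample (n = 5), the chance that the sample mean lies
# within half a standard deviation of the true mean:
print(round(prob_within(0.5, 5), 3))
```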
Mauro, Francisco; Monleon, Vicente J; Temesgen, Hailemariam; Ford, Kevin R
2017-01-01
Forest inventories require estimates and measures of uncertainty for subpopulations such as management units. These units often hold small sample sizes, so they should be regarded as small areas. When auxiliary information is available, different small area estimation methods have been proposed to obtain reliable estimates for small areas. Unit level empirical best linear unbiased predictors (EBLUPs), based on plot or grid unit level models, have been studied more thoroughly than area level EBLUPs, where the modelling occurs at the management unit scale. Area level EBLUPs do not require precise plot positioning and allow the use of variable radius plots, thus reducing fieldwork costs. However, their performance has not been examined thoroughly. We compared unit level and area level EBLUPs, using LiDAR auxiliary information collected for inventorying a 98,104-ha coastal coniferous forest. Unit level models were consistently more accurate than area level EBLUPs, and area level EBLUPs were consistently more accurate than field estimates, except for large management units that held a large sample. For stand density, volume, basal area, quadratic mean diameter, mean height and Lorey's height, root mean squared errors (RMSEs) of estimates obtained using area level EBLUPs were, on average, 1.43, 2.83, 2.09, 1.40, 1.32 and 1.64 times larger than those based on unit level estimates, respectively. Similarly, direct field estimates had RMSEs that were, on average, 1.37, 1.45, 1.17, 1.17, 1.26, and 1.38 times larger than the RMSEs of area level EBLUPs. Therefore, area level models can lead to substantial gains in accuracy compared to direct estimates, and unit level models lead to very important gains in accuracy compared to area level models, potentially justifying the additional costs of obtaining accurate field plot coordinates.
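The area-level shrinkage idea behind the EBLUP can be sketched in Fay-Herriot form, blending a direct field estimate with a model-based (e.g. LiDAR) prediction according to their variances; all numbers below are synthetic:

```python
# Area-level EBLUP shrinkage (Fay-Herriot form): the estimate for a unit is
# a weighted blend of its direct field estimate and a regression prediction
# from auxiliary data, weighted by their relative variances.
def area_eblup(direct, synthetic, var_direct, var_model):
    gamma = var_model / (var_model + var_direct)   # shrinkage weight
    return gamma * direct + (1 - gamma) * synthetic

# A unit with a large sample (small direct variance) keeps its field value;
# a sparsely sampled unit is pulled towards the model prediction.
print(round(area_eblup(320.0, 290.0, var_direct=100.0, var_model=400.0), 2))   # -> 314.0
print(round(area_eblup(320.0, 290.0, var_direct=2500.0, var_model=400.0), 2))  # -> 294.14
```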
Frequency Diversity for Improving Synthetic Aperture Radar Imaging
2009-03-01
…for broadside spotlight SAR imaging is shown to be δθ = λ/(4Y₀) (Eq. 2.34). When θ is small, as is often the case in spotlight SAR imaging, the required … maximum distance Δy between samples along the y-axis is shown to be Δy ≤ λR_c/(4Y₀) (Eq. 2.35). With platform velocity v_y along the y-axis, the minimum PRF is …
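A sketch of the sampling constraints labelled Eqs. 2.34-2.35 above; every numerical value below is an assumption for illustration, not taken from the thesis:

```python
# Along-track sampling for broadside spotlight SAR: the maximum sample
# spacing is dy <= lambda * R_c / (4 * Y_0), and the platform must pulse
# often enough to keep that spacing at its velocity.
wavelength = 0.03   # radar wavelength, m (X-band, assumed)
R_c = 10_000.0      # range to scene centre, m (assumed)
Y_0 = 500.0         # half-extent of the scene along y, m (assumed)
v_y = 100.0         # platform velocity along y, m/s (assumed)

dy_max = wavelength * R_c / (4 * Y_0)   # max along-track sample spacing, m
prf_min = v_y / dy_max                  # pulses per second to keep that spacing
print(dy_max, prf_min)  # -> 0.15 m spacing, ~667 Hz
```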
Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G
2012-10-01
Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires only that the presence of species in a sample be assessed, while counts of the number of individuals per species are required for just a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample in which incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, and at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and comprehension of the left tail of the species abundance distribution. We show how to choose the sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.
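The likelihood modification for incidence data can be illustrated with a plain Poisson in place of the Poisson log-normal (a deliberate simplification): each species contributes either a Poisson pmf term (when counted) or a presence/absence term.

```python
from math import exp, log, factorial

def species_loglik(lam, count=None, present=None):
    """Log-likelihood contribution of one species with Poisson mean `lam`:
    a full count uses the Poisson pmf; a presence-only record uses
    P(count > 0) = 1 - exp(-lam); an absence uses P(count = 0)."""
    if count is not None:
        return -lam + count * log(lam) - log(factorial(count))
    return log(1 - exp(-lam)) if present else -lam

# Mixture sample: counts for the subsample, incidence records for the rest.
# All numbers are synthetic, and a common Poisson mean is assumed for
# brevity (the paper lets abundances vary across species).
counted = [4, 2, 7, 1]
present_only = 3
lam = 2.5
ll = sum(species_loglik(lam, count=k) for k in counted)
ll += present_only * species_loglik(lam, present=True)
print(ll)
```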
Method and Apparatus for Automated Isolation of Nucleic Acids from Small Cell Samples
NASA Technical Reports Server (NTRS)
Sundaram, Shivshankar; Prabhakarpandian, Balabhaskar; Pant, Kapil; Wang, Yi
2014-01-01
RNA isolation is a ubiquitous need, driven by the current emphasis on microarrays and miniaturization. With commercial systems requiring 100,000 to 1,000,000 cells for successful isolation, there is a growing need for a small-footprint, easy-to-use device that can harvest nucleic acids from much smaller cell samples (1,000 to 10,000 cells). The extraction of RNA from cell cultures is a complex, multi-step process requiring timed, asynchronous operations with multiple reagents/buffers. An added complexity is the fragility of RNA (subject to degradation) and its surface reactivity. A novel, microfluidics-based, integrated cartridge has been developed that can fully automate the complex process of RNA isolation (lyse, capture, and elute RNA) from small cell culture samples. On-cartridge cell lysis is achieved using either reagents or high-strength electric fields made possible by the miniaturized format. Traditionally, silica-based, porous-membrane formats have been used for RNA capture, requiring slow perfusion for effective capture. In this design, high-efficiency capture and elution are achieved using a microsphere-based "microfluidized" format. Electrokinetic phenomena are harnessed to actively mix microspheres with the cell lysate and capture/elution buffer, providing important advantages in extraction efficiency, processing time, and operational flexibility. Successful RNA isolation was demonstrated using both suspension (HL-60) and adherent (BHK-21) cells. The novel features associated with this development are twofold. First, novel designs were developed that execute the needed processes with improved speed and efficiency, primarily encompassing electric-field-driven lysis of cells.
The configurations include electrode-containing constructs, or an "electrode-less" chip design, which is easy to fabricate and mitigates fouling at the electrode surface; and the "fluidized" extraction format based on electrokinetically assisted mixing and contacting of microbeads in a shape-optimized chamber. A secondary proprietary feature is in the particular layout integrating these components to perform the desired operation of RNA isolation. Apart from a novel functional capability, advantages of the innovation include reduced or eliminated use of toxic reagents, and operator-independent extraction of RNA.
Yao, Chenxi; Wang, Tao; Zhang, Buqing; He, Dacheng; Na, Na; Ouyang, Jin
2015-11-01
The interaction between bioactive small-molecule ligands and proteins is one of the important research areas in proteomics. Herein, a simple and rapid method is established to screen small ligands that bind to proteins. We designed an agarose slide to immobilize different proteins. The protein microarrays were allowed to interact with different small ligands, and after washing, the microarrays were screened by desorption electrospray ionization mass spectrometry (DESI MS). This method can be applied to screen ligands that bind specific proteins, as demonstrated for seven proteins and 34 known ligands of those proteins. In addition, high-throughput screening was achieved, with the analysis requiring approximately 4 s per sample spot. We then applied this method to determine the binding between the important protein matrix metalloproteinase-9 (MMP-9) and 88 small compounds. The molecular docking results confirmed the MS results, demonstrating that this method is suitable for the rapid and accurate screening of ligands binding to proteins.
NASA Astrophysics Data System (ADS)
Ranamukhaarachchi, Sahan A.; Padeste, Celestino; Dübner, Matthias; Häfeli, Urs O.; Stoeber, Boris; Cadarso, Victor J.
2016-07-01
Therapeutic drug monitoring (TDM) typically requires painful blood draws from patients. We propose a painless and minimally invasive alternative for TDM using hollow microneedles suitable for extracting extremely small volumes (<1 nL) of interstitial fluid to measure drug concentrations. The inner lumen of a microneedle is functionalized for use as a micro-reactor during sample collection, trapping and binding target drug candidates during extraction with no need for sample transfer. An optofluidic device is integrated with this microneedle to rapidly quantify drug analytes with high sensitivity using a straightforward absorbance scheme. Vancomycin is currently detected using sample volumes of 50-100 μL, with a limit of detection (LoD) of 1.35 μM. The proposed microneedle-optofluidic biosensor can detect vancomycin in a sample volume of 0.6 nL with a LoD of <100 nM, validating this painless point-of-care system with significant potential to reduce healthcare costs and patient suffering.
Progress Towards an Indirect Neutron Capture Capability at LANSCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koehler, Paul E.; Ullmann, John Leonard; Couture, Aaron Joseph
2017-09-20
There are many neutron-capture cross sections of importance to radiochemical diagnostics and nuclear forensics which are beyond the reach of direct measurements. Hence, we have been developing an apparatus on flight path (FP) 13 at target 1 at LANSCE for tightly constraining these cross sections via determination of the underlying physical quantities. FP-13 was initially a cold-neutron beam line for materials science and therefore required substantial modification for use in nuclear physics. In FY17, we made several improvements to FP-13, demonstrated improved performance due to these changes via measurements on a variety of samples, identified a few more needed improvements, and reconfigured the beam line to implement the most important of these. New measurements to assess the impact of the most recent improvement will commence when beam is restored to LANSCE. Although FP-13 has not yet reached the performance required for small radioactive samples, measurements on a gold sample have led to an important science result which we are preparing for publication.
Vig, Asger Laurberg; Haldrup, Kristoffer; Enevoldsen, Nikolaj; Thilsted, Anil Haraksingh; Eriksen, Johan; Kristensen, Anders; Feidenhans'l, Robert; Nielsen, Martin Meedom
2009-11-01
We propose and describe a microfluidic system for high-intensity x-ray measurements. The required open access to a microfluidic channel is provided by an out-of-plane capillary burst valve (CBV). The functionality of the out-of-plane CBV is characterized with respect to the diameter of the windowless access hole, ranging from 10 to 130 μm. Maximum driving pressures from 22 to 280 mbar, corresponding to refresh rates of the exposed sample from 300 Hz to 54 kHz, are demonstrated. The microfluidic system is tested at beamline ID09b at the ESRF synchrotron radiation facility in Grenoble, and x-ray scattering measurements are shown to be feasible and to require only very limited amounts of sample, <1 ml per hour of measurement, without recapturing of sample. With small adjustments of the present chip design, scattering angles up to 30 degrees can be achieved without shadowing effects, and integration of on-chip mixing and spectroscopy appears straightforward.
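The reported pressure range is consistent with a simple Young-Laplace estimate for a circular hole. The sketch below is not from the paper: the surface tension is that of water and the contact angle is an idealized non-wetting value, but with these assumptions it roughly reproduces the 22-280 mbar figures for the 10-130 μm holes:

```python
import math

def burst_pressure_mbar(diameter_m, surface_tension=0.072, contact_angle_deg=180.0):
    """Capillary burst pressure of a circular hole from the Young-Laplace
    relation P = 4 * gamma * |cos(theta)| / d, returned in mbar (1 mbar = 100 Pa).
    Defaults: water surface tension, idealized non-wetting contact angle."""
    p_pa = 4 * surface_tension * abs(math.cos(math.radians(contact_angle_deg))) / diameter_m
    return p_pa / 100.0

print(round(burst_pressure_mbar(10e-6)))   # ~288 mbar for a 10 um hole
print(round(burst_pressure_mbar(130e-6)))  # ~22 mbar for a 130 um hole
```

The 1/d scaling explains why the smallest access holes sustain the largest driving pressures and hence the highest refresh rates.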
Urakami, K; Saito, Y; Fujiwara, Y; Watanabe, C; Umemoto, K; Godo, M; Hashimoto, K
2000-12-01
Thermal desorption (TD) techniques followed by capillary GC/MS were applied to the analysis of residual solvents in bulk pharmaceuticals. Solvents desorbed from samples by heating were cryofocused at the head of a capillary column prior to GC/MS analysis. This method requires a very small amount of sample and no sample pretreatment. The desorption temperature was set individually for each sample, about 20 °C above its melting point. The relative standard deviations of this method, tested by performing six consecutive analyses of 8 different samples, were 1.1 to 3.1%, and analytical results for residual solvents were in agreement with those obtained by direct injection of an N,N-dimethylformamide solution of the samples into the GC. This novel TD/GC/MS method was demonstrated to be very useful for the identification and quantification of residual solvents in bulk pharmaceuticals.
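Precision figures of the kind quoted above (RSD over six consecutive analyses) are straightforward to compute; the replicate peak areas below are invented for illustration and land inside the reported 1.1-3.1% range:

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation (%): sample stdev over mean, times 100."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical six replicate peak areas for one residual solvent
replicates = [1.02e6, 1.00e6, 0.99e6, 1.03e6, 1.01e6, 0.98e6]
print(round(rsd_percent(replicates), 1))  # ~1.9 %
```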
Nanoengineered capsules for selective SERS analysis of biological samples
NASA Astrophysics Data System (ADS)
You, Yil-Hwan; Schechinger, Monika; Locke, Andrea; Coté, Gerard; McShane, Mike
2018-02-01
Metal nanoparticles conjugated with DNA oligomers have been intensively studied for a variety of applications, including optical diagnostics. Assays based on aggregation of DNA-coated particles in proportion to the concentration of target analyte have not been widely adopted for clinical analysis, however, largely due to the nonspecific responses observed in complex biofluids. While sample pretreatment such as dialysis can enable selective sensing, here we sought to prove that assay encapsulation in hollow microcapsules could remove this requirement and thereby facilitate more rapid analysis of complex samples. Gold nanoparticle-based assays were incorporated into capsules composed of polyelectrolyte multilayers (PEMs), and the responses to small-molecule targets and larger proteins were compared. When encapsulated, gold nanoparticles were able to selectively sense a small Raman dye (Rhodamine 6G) in the presence of a large protein (BSA). A ratiometric microRNA-17 sensing assay exhibited a drastic reduction in response after encapsulation, with statistically significant relative Raman intensity changes only at a microRNA-17 concentration of 10 nM, compared to a range of 0-500 nM for the corresponding solution-phase response.
Patient satisfaction with nursing staff in bone marrow transplantation and hematology units.
Piras, A; Poddigue, M; Angelucci, E
2010-01-01
Several validated questionnaires for assessment of hospitalized patient satisfaction have been reported in the literature. Many have been designed specifically for patients with cancer. User satisfaction is one indicator of service quality and benefits. Thus, we conducted a small qualitative survey managed by nursing staff in our Bone Marrow Transplantation Unit and Acute Leukemia Unit, with the objectives of assessing patient satisfaction, determining critical existing problems, and developing required interventions. The sample was not probabilistic. A questionnaire was developed using the Delphi method in a pilot study with 30 patients. Analysis of the data suggested a good level of patient satisfaction with medical and nursing staffs (100%), but poor satisfaction with food (48%), services (38%), and amenities (31%). Limitations of the study were that the questionnaire was unvalidated and the sample was small. However, for the first time, patient satisfaction was directly measured at our hospital. Another qualitative study will be conducted after correction of the critical points that emerged during this initial study, in a larger sample of patients. Copyright 2010 Elsevier Inc. All rights reserved.
Bacterial Contamination into Ready-to-Eat Foods Sold in Middle Thailand.
Ananchaipattana, Chiraporn; Bari, Md Latiful; Inatsu, Yasuhiro
2016-01-01
Bacterial contamination in ready-to-eat (RTE) foods prepared and sold by small businesses in middle Thailand was surveyed. The 135 samples were randomly purchased from open markets and supermarkets in Bangkok and Pathum Thani provinces from May 2013 to February 2015. The results indicated contamination rates of 13/39 (33%) in cooked pork, 18/45 (40%) in chicken meat and 14/57 (25%) in cooked aquatic items, and bacterial contamination rates of 26/67 (39%) in samples collected from open markets and 19/68 (27%) in those from supermarkets. Thus, there was no statistically significant difference in contamination rate between the two kinds of market or among the three categories of food. The most frequently detected pathogenic bacterium was Staphylococcus aureus, 27/135 (20%), while Salmonella spp. was the least frequent, 5/135 (4%), across the categories of collected food samples. Implementation of suitable hygienic practices in small food businesses is thought to be required to reduce the risk of foodborne illnesses caused by the consumption of RTE foods sold in middle Thailand.
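The "no statistically significant difference" claim for the two market types can be checked with a Pearson chi-square test on the reported counts (the 2x2 formula below is standard; 3.84 is the 5% critical value for 1 degree of freedom):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table [[a, b], [c, d]]:
    chi2 = n * (a*d - b*c)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Contaminated vs clean samples: open market 26 of 67, supermarket 19 of 68
chi2 = chi2_2x2(26, 67 - 26, 19, 68 - 19)
print(round(chi2, 2))  # ~1.79, well below the 3.84 critical value at p = 0.05
```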
Filter Paper-based Nucleic Acid Storage in High-throughput Solid Tumor Genotyping.
Stachler, Matthew; Jia, Yonghui; Sharaf, Nematullah; Wade, Jacqueline; Longtine, Janina; Garcia, Elizabeth; Sholl, Lynette M
2015-01-01
Molecular testing of tumors from formalin-fixed paraffin-embedded (FFPE) tissue blocks is central to clinical practice; however, it requires histology support and increases test turnaround time. Prospective fresh frozen tissue collection requires special handling, additional storage space, and may not be feasible for small specimens. Filter paper-based collection of tumor DNA reduces the need for histology support, requires little storage space, and preserves high-quality nucleic acid. We investigated the performance of tumor smears on filter paper in solid tumor genotyping, as compared with paired FFPE samples. Whatman FTA Micro Card (FTA preps) smears were prepared from 21 fresh tumor samples. A corresponding cytology smear was used to assess tumor cellularity and necrosis. DNA was isolated from FTA preps and FFPE core samples using automated methods and quantified using SYBR green dsDNA detection. Samples were genotyped for 471 mutations on a mass spectrometry-based platform (Sequenom). DNA concentrations from FTA preps and FFPE correlated for untreated carcinomas but not for mesenchymal tumors (Spearman σ=0.39 and σ=-0.1, respectively). Average DNA concentrations were lower from FTA preps as compared with FFPE, but DNA quality was higher with less fragmentation. Seventy-six percent of FTA preps and 86% of FFPE samples generated adequate DNA for genotyping. FTA preps tended to perform poorly for collection of DNA from pretreated carcinomas and mesenchymal neoplasms. Of the 16 paired DNA samples that were genotyped, 15 (94%) gave entirely concordant results. Filter paper-based sample preservation is a feasible alternative to FFPE for use in automated, high-throughput genotyping of carcinomas.
Advances in arsenic biosensor development--a comprehensive review.
Kaur, Hardeep; Kumar, Rabindra; Babu, J Nagendra; Mittal, Sunil
2015-01-15
Biosensors are analytical devices having high sensitivity, portability, small sample requirement and ease of use for qualitative and quantitative monitoring of various analytes of human importance. Arsenic (As), owing to its widespread presence in nature and high toxicity to living creatures, requires frequent determination in water, soil, agricultural and food samples. The present review is an effort to highlight the various advancements made so far in the development of arsenic biosensors based either on recombinant whole cells or on certain arsenic-binding oligonucleotides or proteins. The role of futuristic approaches like surface plasmon resonance (SPR) and aptamer technology has also been discussed. The biomethods employed and their general mechanisms, advantages and limitations in relevance to arsenic biosensors developed so far are intended to be discussed in this review. Copyright © 2014 Elsevier B.V. All rights reserved.
Exploiting molecular dynamics in Nested Sampling simulations of small peptides
NASA Astrophysics Data System (ADS)
Burkoff, Nikolas S.; Baldock, Robert J. N.; Várnai, Csilla; Wild, David L.; Csányi, Gábor
2016-04-01
Nested Sampling (NS) is a parameter space sampling algorithm which can be used for sampling the equilibrium thermodynamics of atomistic systems. NS has previously been used to explore the potential energy surface of a coarse-grained protein model and has significantly outperformed parallel tempering when calculating heat capacity curves of Lennard-Jones clusters. The original NS algorithm uses Monte Carlo (MC) moves; however, a variant, Galilean NS, has recently been introduced which allows NS to be incorporated into a molecular dynamics framework, so NS can be used for systems which lack efficient prescribed MC moves. In this work we demonstrate the applicability of Galilean NS to atomistic systems. We present an implementation of Galilean NS using the Amber molecular dynamics package and demonstrate its viability by sampling alanine dipeptide, both in vacuo and in implicit solvent. Unlike previous studies of this system, we present the heat capacity curves of alanine dipeptide, whose calculation provides a stringent test for sampling algorithms. We also compare our results with those calculated using replica exchange molecular dynamics (REMD) and find good agreement. We show the computational effort required for accurate heat capacity estimation for small peptides. We also calculate the alanine dipeptide Ramachandran free energy surface for a range of temperatures and use it to compare the results using the latest Amber force field with previous theoretical and experimental results.
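The core NS loop can be sketched on a toy problem. This is the original MC-flavoured algorithm (new points by rejection sampling), not the paper's Galilean/Amber implementation, and the one-dimensional harmonic potential and all parameters are our own illustrative choices. Each iteration records and discards the highest-energy live point, shrinking the enclosed prior volume by roughly n_live/(n_live+1); thermodynamic averages at any temperature are then weighted sums over the recorded levels:

```python
import math
import random

def nested_sampling_energies(n_live=500, n_iter=3500, seed=1):
    """Minimal Nested Sampling on E(x) = x^2 / 2 with a uniform prior on
    [-10, 10]. Returns the sequence of discarded (outermost) energy levels."""
    random.seed(seed)
    live = [random.uniform(-10.0, 10.0) for _ in range(n_live)]
    levels = []
    for _ in range(n_iter):
        worst = max(live, key=lambda x: x * x)
        e_max = worst * worst / 2.0
        levels.append(e_max)
        # Replace the worst walker with a new prior sample at lower energy
        # (simple rejection sampling; real codes use MC walks or Galilean MD)
        while True:
            x = random.uniform(-10.0, 10.0)
            if x * x / 2.0 < e_max:
                live[live.index(worst)] = x
                break
    return levels

def mean_energy(levels, beta, n_live=500):
    """Canonical <E> at inverse temperature beta: each level i carries prior
    mass (shrink^i)(1 - shrink) and Boltzmann factor exp(-beta * E_i)."""
    shrink = n_live / (n_live + 1.0)
    num = den = 0.0
    for i, e in enumerate(levels):
        w = shrink**i * (1.0 - shrink) * math.exp(-beta * e)
        num += w * e
        den += w
    return num / den

levels = nested_sampling_energies()
# Equipartition for this single quadratic term gives <E> = 1/(2*beta) = 0.25
print(round(mean_energy(levels, beta=2.0), 2))
```

A single NS run yields averages at all temperatures from the same set of levels, which is why NS is attractive for heat capacity curves.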
Bacterial identification in real samples by means of micro-Raman spectroscopy
NASA Astrophysics Data System (ADS)
Rösch, Petra; Stöckel, Stephan; Meisel, Susann; Bossecker, Anja; Münchberg, Ute; Kloss, Sandra; Schumacher, Wilm; Popp, Jürgen
2011-07-01
Pathogen detection without time delay is essential, especially for severe diseases like sepsis, where survival depends on prompt antibiosis: three hours after the onset of septic shock, the patient's survival rate drops below 60%. Unfortunately, results from standard diagnostic methods like PCR or microbiology are normally available only after 12 or 36 h, respectively. Therefore, diagnostic methods that require less cultivation, or even no cultivation at all, have to be established for medical diagnosis. Raman spectroscopy, as a vibrational spectroscopic method, is a very sensitive and selective approach that monitors the biochemical composition of the investigated sample. Micro-Raman spectroscopy allows for a spatial resolution below 1 μm and is therefore in the size range of bacteria. Raman spectra of bacteria depend on the physiological status, so the databases must include the relevant environmental parameters such as temperature, pH, nutrition, etc. Such large databases in turn require a specialized chemometric approach, since the variation between different strains is small. In this contribution we demonstrate the capability of Raman spectroscopy to identify pathogens without cultivation, even from real environmental or medical samples.
NASA Technical Reports Server (NTRS)
Carson, John M., III; Bayard, David S.
2006-01-01
G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
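The idea of inferring mass from dynamics rather than adding sensors can be illustrated with a toy one-parameter fit. All numbers below are invented, and this least-squares sketch stands in for G-SAMPLE's full maximum-likelihood estimator, which uses thruster profiles and a spacecraft dynamics model:

```python
import random

def estimate_sample_mass(forces, accels, m_dry):
    """Least-squares fit of total mass in F = m_total * a over many thruster
    firings, minus the known dry spacecraft mass. A toy stand-in for the
    maximum-likelihood estimator described above."""
    m_total = sum(f * a for f, a in zip(forces, accels)) / sum(a * a for a in accels)
    return m_total - m_dry

random.seed(0)
m_dry, m_sample = 500.0, 1.0                    # kg (hypothetical values)
accels = [0.01 + 0.002 * i for i in range(50)]  # commanded accelerations, m/s^2
forces = [(m_dry + m_sample) * a + random.gauss(0.0, 0.05)  # noisy force sensor, N
          for a in accels]
print(round(estimate_sample_mass(forces, accels, m_dry), 2))
```

Even with sensor noise, averaging over many firings recovers the small sample mass riding on a much larger spacecraft mass, mirroring the error-budget idea in the abstract.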
Human and Robotic Mission to Small Bodies: Mapping, Planning and Exploration
NASA Technical Reports Server (NTRS)
Neffian, Ara V.; Bellerose, Julie; Beyer, Ross A.; Archinal, Brent; Edwards, Laurence; Lee, Pascal; Colaprete, Anthony; Fong, Terry
2013-01-01
This study investigates the requirements, performs a gap analysis and makes a set of recommendations for mapping products and exploration tools required to support operations and scientific discovery for near-term and future NASA missions to small bodies. The mapping products and their requirements are based on the analysis of current mission scenarios (rendezvous, docking, and sample return) and recommendations made by the NEA Users Team (NUT) in the framework of human exploration. The mapping products that satisfy operational, scientific, and public outreach goals include topography, images, albedo, gravity, mass, density, subsurface radar, mineralogical and thermal maps. The gap analysis points to a need for incremental generation of mapping products from low (flyby) to high-resolution data needed for anchoring and docking, real-time spatial data processing for hazard avoidance and astronaut or robot localization in low-gravity, highly dynamic environments, and motivates a standard for coordinate reference systems capable of describing irregular body shapes. Another aspect investigated in this study is the set of requirements and the gap analysis for exploration tools that support visualization and simulation of operational conditions including soil interactions, environment dynamics, and communications coverage. Building robust, usable data sets and visualisation/simulation tools is the best way for mission designers and simulators to make correct decisions for future missions. In the near term, it is the most useful way to begin building capabilities for small body exploration without needing to commit to specific mission architectures.
The OSIRIS-Rex Asteroid Sample Return: Mission Operations Design
NASA Technical Reports Server (NTRS)
Gal-Edd, Jonathan; Cheuvront, Allan
2014-01-01
The OSIRIS-REx mission employs a methodical, phased approach to ensure success in meeting the mission's science requirements. OSIRIS-REx launches in September 2016, with a backup launch period occurring one year later. Sampling occurs in 2019. The departure burn from Bennu occurs in March 2021. On September 24, 2023, the SRC lands at the Utah Test and Training Range (UTTR). Stardust heritage procedures are followed to transport the SRC to Johnson Space Center, where the samples are removed and delivered to the OSIRIS-REx curation facility. After a six-month preliminary examination period the mission will produce a catalog of the returned sample, allowing the worldwide community to request samples for detailed analysis. Traveling to and returning a sample from an asteroid that has not been explored before requires unique operations considerations. The Design Reference Mission (DRM) ties together spacecraft, instrument and operations scenarios. The project implemented lessons learned from other small body missions: APL (NEAR), JPL (Dawn) and ESA (Rosetta). The key lesson learned was to expect the unexpected and implement planning tools early in the lifecycle. In preparation for PDR, the project changed the asteroid arrival date to arrive one year earlier, providing additional time margin. STK is used for mission design and STK Scheduler for instrument coverage analysis.
Guillo, Christelle; Ferrance, Jerome P; Landers, James P
2006-04-28
Highly selective and sensitive assays are required for detection and quantitation of the small masses of DNA typically encountered in clinical and forensic settings. High detection sensitivity is achieved using fluorescent labeling dyes and detection techniques such as spectrofluorometers, microplate readers and cytometers. This work describes the use of a laser-induced fluorescence (LIF) detector in conjunction with a commercial capillary electrophoresis instrument for DNA quantitation. PicoGreen and YO-PRO-1, two fluorescent DNA labeling dyes, were used to assess the potential of the system for routine DNA analysis. Linearity, reproducibility, sensitivity, limits of detection and quantitation, and sample stability were examined for the two assays. The LIF detector response was found to be linear (R2 > 0.999) and reproducible (RSD < 9%) in both cases. The PicoGreen assay displayed lower limits of detection and quantitation (20 pg and 60 pg, respectively) than the YO-PRO-1 assay (60 pg and 260 pg, respectively). Although a small variation in fluorescence was observed for the DNA/dye complexes over time, quantitation was not significantly affected and the solutions were found to be relatively stable for 80 min. The advantages of the technique include a 4- to 40-fold reduction in the volume of sample required compared to traditional assays, a 2- to 20-fold reduction in the volume of reagents consumed, fast and automated analysis, and low cost (no specific instrumentation required).
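Figures of merit like the limits of detection quoted above are commonly derived from a linear calibration plus blank variability. The sketch below uses invented calibration numbers, not the paper's data, and the common "3 x blank SD / slope" LoD definition:

```python
import statistics

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical calibration: DNA mass (pg) vs fluorescence signal (arb. units)
mass = [0, 50, 100, 200, 400, 800]
signal = [2.1, 12.0, 22.3, 41.8, 82.0, 161.5]
slope, intercept = linear_fit(mass, signal)

# LoD = 3 * SD(blank replicates) / slope, one common convention
blank_sd = statistics.stdev([2.1, 3.5, 0.9, 2.6, 1.4])
lod_pg = 3 * blank_sd / slope
print(round(lod_pg, 1))  # picograms
```

The same arithmetic underlies the comparison of the PicoGreen and YO-PRO-1 assays: a steeper calibration slope or quieter blank directly lowers the picogram-level LoD.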
Transient Infrared Emission Spectroscopy
NASA Astrophysics Data System (ADS)
Jones, Roger W.; McClelland, John F.
1989-12-01
Transient Infrared Emission Spectroscopy (TIRES) is a new technique that reduces the occurrence of self-absorption in optically thick solid samples so that analytically useful emission spectra may be observed. Conventional emission spectroscopy, in which the sample is held at an elevated, uniform temperature, is practical only for optically thin samples. In thick samples the emission from deep layers of the material is partially absorbed by overlying layers.1 This self-absorption results in emission spectra from most optically thick samples that closely resemble black-body spectra. The characteristic discrete emission bands are severely truncated and altered in shape. TIRES bypasses this difficulty by using a laser to heat only an optically thin surface layer. The increased temperature of the layer is transient since the layer will rapidly cool and thicken by thermal diffusion; hence the emission collection must be correlated with the laser heating. TIRES may be done with both pulsed and cw lasers.2,3 When a pulsed laser is used, the spectrometer sampling must be synchronized with the laser pulsing so that only emission during and immediately after each laser pulse is observed.3 If a cw laser is used, the sample must move rapidly through the beam. The hot, transient layer is then in the beam track on the sample at and immediately behind the beam position, so the spectrometer field of view must be limited to this region near the beam position.2 How much self-absorption the observed emission suffers depends on how thick the heated layer has grown by thermal diffusion when the spectrometer samples the emission. Use of a pulsed laser synchronized with the spectrometer sampling readily permits reduction of the time available for heat diffusion to about 100 μs.3 When a cw laser is used, the heat-diffusion time is controlled by how small the spectrometer field of view is and by how rapidly the sample moves past within this field.
Both a very small field of view and a very high sample speed would be required to attain a diffusion time of 100 μs. Accordingly, pulsed-laser TIRES generally produces spectra suffering from less self-absorption than cw-laser TIRES does, but the cw-laser technique is technically much simpler since no synchronization is required.
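The 100 μs figure sets the heated-layer thickness through the thermal diffusion length L = sqrt(D*t). The diffusivity below is an assumed value typical of polymers, not one given in the text, and order-unity prefactors vary with the exact definition:

```python
import math

def heated_layer_thickness_um(diffusivity_m2s, time_s):
    """Characteristic thermal diffusion length L = sqrt(D * t), in micrometres."""
    return math.sqrt(diffusivity_m2s * time_s) * 1e6

# Assumed polymer-like thermal diffusivity of 1e-7 m^2/s
print(round(heated_layer_thickness_um(1e-7, 100e-6), 1))  # ~3.2 um after 100 us
```

A few micrometres is optically thin for many strong infrared bands, which is why limiting the diffusion time suppresses self-absorption.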
NASA Astrophysics Data System (ADS)
Rudman, Reuben
1999-06-01
Wiley-VCH: New York, 1998. xxiv + 333 pp. ISBN 0-471-19458-1. $79.95. I would have subtitled this book "All You Ever Wanted To Know about ...Sample Preparation". Although its principal thrust is geared towards the analytical chemist in an X-ray diffraction (XRD) or X-ray fluorescence (XRF) service laboratory, this text will be of use primarily as a reference source in all milieus dealing with undergraduate research projects and advanced laboratory courses in physical and analytical chemistry. It contains dozens of suggestions for preparing randomly oriented small samples of nearly anything. For example, rocks and minerals, soft organics and hard ceramics, radioactive and liquid materials, metals and oils are all treated. As the availability of XRD and XRF equipment has increased, so has the use of these techniques in the teaching schedule. Many undergraduate laboratory and research projects utilizing these methods have been described in the literature and are found in laboratory textbooks. Very often, especially with the increasingly common use of automated computer-controlled instrumentation, sample preparation has become the key experimental technique required for successful data collection. However, it is not always easy to prepare the statistically random distribution of small particles (crystallites) that is required by these methods. A multitude of techniques have been developed over the past 70 years, but many of them have been handed down by word of mouth or are scattered throughout the literature. This book represents an attempt to systematically describe the theory and practice of sample preparation. This excellent guide to the intricacies of sample preparation begins with a description of statistical sampling methods and the principles of grinding techniques. 
After a discussion of XRF specimen preparation, which includes pressing pellets, fusion methods, crucible selection and handling very small samples, detailed descriptions for handling rocks, minerals, cements, metals, oils, and vegetation [sic] are given. The preparation of XRD samples is described for various diffraction equipment geometries (utilizing both counter and film detectors), including specific information regarding the use of flat specimens and slurries, the use of internal standards, and the effects of crystallite size on the diffraction pattern. Methods for handling ceramics, clays, zeolites, air-sensitive samples, thin films, and plastics are described, along with the special handling requirements for materials to be studied by high-pressure, high-temperature, or low-temperature techniques. One whole chapter is devoted to the equipment used in specimen preparation, including grinders, pulverizers, presses, specimen holders, repair of platinumware, and sources of all types of special equipment. Did you ever want to know where to get a Plattner steel mortar or a micronizing mill or soft-glass capillary tubes with 0.01-mm wall thickness? It's all here in this monograph. The book ends with a good glossary of terms, a general bibliography in addition to the extensive list of references following each of its 9 chapters, and an index. It will be of help in many areas of spectroscopy and analytical chemistry, as well as in XRD and XRF analyses.
Planetary Sample Caching System Design Options
NASA Technical Reports Server (NTRS)
Collins, Curtis; Younse, Paulo; Backes, Paul
2009-01-01
Potential Mars Sample Return missions would aspire to collect small core and regolith samples using a rover with a sample acquisition tool and sample caching system. Samples would need to be stored in individual sealed tubes in a canister that could be transferred to a Mars ascent vehicle and returned to Earth. A sample handling, encapsulation and containerization system (SHEC) has been developed as part of an integrated system for acquiring and storing core samples for application to future potential MSR and other potential sample return missions. Requirements and design options for the SHEC system were studied and a recommended design concept developed. Two families of solutions were explored: 1) transfer of a raw sample from the tool to the SHEC subsystem and 2) transfer of a tube containing the sample to the SHEC subsystem. The recommended design utilizes sample tool bit change-out as the mechanism for transferring tubes to, and samples in tubes from, the tool. The SHEC subsystem design, called the Bit Changeout Caching (BiCC) design, is intended for operations on a MER-class rover.
Coherent amplification of X-ray scattering from meso-structures
Lhermitte, Julien R.; Stein, Aaron; Tian, Cheng; ...
2017-07-10
Small-angle X-ray scattering (SAXS) often includes an unwanted background, which increases the measurement time required to resolve the sample structure. This is undesirable in all experiments, and may make measurement of dynamic or radiation-sensitive samples impossible. Here, we demonstrate a new technique, applicable when the scattering signal is background-dominated, which reduces the requisite exposure time. Our method exploits coherent interference between a sample and a designed, strongly scattering 'amplifier'. A modified angular correlation function is used to extract the symmetry of the interference term; that is, the scattering arising from the interference between the amplifier and the sample. This enables reconstruction of the sample's symmetry, despite the sample scattering itself being well below the intensity of the background scattering. Thus, coherent amplification is used to generate a strong scattering term (well above background), from which the sample scattering is inferred. We validate this method using lithographically defined test samples.
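The symmetry-extraction step can be illustrated with a plain angular autocorrelation on a synthetic scattering ring. The paper's modified correlation function, which isolates the amplifier-sample interference term, is more involved; everything below (the ring profile, its 6-fold symmetry, the grid size) is invented for illustration.

```python
import math

M = 360                                   # angular samples around one q-ring
# Hypothetical intensity ring: flat background plus a 6-fold modulation
I = [5.0 + math.cos(6 * 2 * math.pi * k / M) for k in range(M)]

# Angular autocorrelation C(delta) = <I(phi) I(phi + delta)>_phi
def ang_corr(I, d):
    M = len(I)
    return sum(I[k] * I[(k + d) % M] for k in range(M)) / M

C = [ang_corr(I, d) for d in range(M)]

# Fourier power of C peaks at the symmetry order (n = 6 for this ring);
# n = 0 (the flat background) is deliberately excluded.
def fourier_power(C, n):
    M = len(C)
    re = sum(C[d] * math.cos(2 * math.pi * n * d / M) for d in range(M))
    im = sum(C[d] * math.sin(2 * math.pi * n * d / M) for d in range(M))
    return (re * re + im * im) / (M * M)

powers = {n: fourier_power(C, n) for n in range(1, 13)}
n_best = max(powers, key=powers.get)
print(n_best)                             # recovers the 6-fold symmetry
```

On real detector images the ring would be extracted from photon count maps and the background term handled explicitly, but the Fourier-peak readout of symmetry is the same idea.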
NASA Astrophysics Data System (ADS)
Rouillon, M.; Taylor, M. P.; Dong, C.
2016-12-01
This research assesses the advantages of integrating field-portable X-ray fluorescence (pXRF) technology to reduce risk and increase confidence in decision making for metal-contaminated site assessments. Metal-contaminated sites are often highly heterogeneous and require a high sampling density to accurately characterize the distribution and concentration of contaminants. Current regulatory assessment approaches rely on a small number of samples processed using standard wet-chemistry methods. In New South Wales (NSW), Australia, the current notification trigger for characterizing metal-contaminated sites requires the upper 95% confidence interval of the site mean to equal or exceed the relevant guidelines. The method's low 'minimum' sampling requirements can misclassify sites due to the heterogeneous nature of soil contamination, leading to inaccurate decision making. To address this issue, we propose integrating in-field pXRF analysis with the established sampling method to overcome sampling limitations. This approach increases the minimum sampling resolution and reduces the 95% CI of the site mean. In-field pXRF analysis at contamination hotspots enhances sample resolution efficiently and without the need to return to the site. In this study, the current and proposed pXRF site assessment methods are compared at five heterogeneous metal-contaminated sites by analysing the spatial distribution of contaminants, the 95% confidence intervals of site means, and the sampling and analysis uncertainty associated with each method. Finally, an analysis of the costs associated with both methods is presented to demonstrate the advantages of incorporating pXRF into metal-contaminated site assessments. The data show that pXRF-integrated site assessments allow faster, cost-efficient characterisation of metal-contaminated sites with greater confidence for decision making.
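The mechanics behind the argument, that a denser pXRF sampling grid narrows the 95% CI of the site mean, can be sketched numerically. This is a generic illustration, not the authors' analysis: the lognormal concentrations, the 13-sample starting point, and the z-based (rather than t-based) interval are all assumptions.

```python
import math
import random

def ci95(values):
    """Normal-approximation 95% CI for the mean (z = 1.96).

    Regulatory assessments often use a t-quantile or a UCL for skewed
    data; the z approximation is close enough for this illustration.
    """
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    half = 1.96 * math.sqrt(var / n)
    return mean, mean - half, mean + half

random.seed(0)
# Hypothetical heterogeneous site: lognormal Pb concentrations (mg/kg)
site = [random.lognormvariate(5.5, 0.8) for _ in range(400)]

for n in (13, 50, 200):          # 13 ~ a sparse wet-chemistry scheme
    mean, lo, hi = ci95(site[:n])
    print(f"n={n:3d}  mean={mean:7.1f}  95% CI=({lo:7.1f}, {hi:7.1f})")
```

The CI half-width shrinks roughly as 1/sqrt(n), which is why cheap in-field analyses that multiply the sample count tighten the site-mean interval used in the notification trigger.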
NASA Technical Reports Server (NTRS)
Allton, J. H.; Zeigler, R. A.; Calaway, M. J.
2016-01-01
The Lunar Receiving Laboratory (LRL) was planned and constructed in the 1960s to support the Apollo program in the context of landing on the Moon and safely returning humans. The enduring science return from that effort is a result of careful curation of planetary materials. Technical decisions for the first facility included the sample handling environment (vacuum vs. inert gas) and instruments for making basic sample assessments, but the most difficult decision, and the most visible, was stringent biosafety vs. ultra-clean sample handling. Biosafety required handling of samples in negative-pressure gloveboxes and rooms for containment, and the use of sterilizing protocols and animal/plant models for hazard assessment. Ultra-clean sample handling worked best in positive-pressure nitrogen-environment gloveboxes in positive-pressure rooms, using cleanable tools of tightly controlled composition. The requirements for these two objectives were so different that the solution was to design and build a new facility for the specific purpose of preserving the scientific integrity of the samples. The resulting Lunar Curatorial Facility was designed and constructed, from 1972-1979, with advice and oversight by a very active committee composed of lunar sample scientists. The high-precision analyses required for planetary science are enabled by stringent contamination control of trace elements in the construction materials and protocols (e.g., trace-element screening for paint and flooring materials) and in the equipment used in sample handling and storage. As other astromaterials, especially small particles and atoms, were added to the collections curated, the technical tension between particulate cleanliness and organic cleanliness was addressed in more detail. Techniques for minimizing particulate contamination in sample handling environments rely on high-efficiency air filtering, which typically requires organic sealants that off-gas.
Protocols for reducing adventitious carbon on sample handling surfaces often generate particles. Further work is needed to achieve both minimal particulate and adventitious carbon contamination. This paper will discuss these facility topics and others in the historical context of nearly 50 years' curation experience for lunar rocks and regolith, meteorites, cosmic dust, comet particles, solar wind atoms, and asteroid particles at Johnson Space Center.
Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality.
Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel
2015-12-01
Many people living in low- and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data, including many household sample surveys, are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor, spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought, since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991-2010 and two demographic surveillance system sites. We derive a variance estimator of under-five child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey, and we compare models using a variety of measures, including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA).
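Why survey weights matter here can be shown with a minimal design-weighted estimator. This is a generic Hájek-type sketch with a with-replacement linearized variance, not the variance estimator derived in the paper; the two strata, their weights, and their event rates are invented.

```python
def weighted_rate(y, w):
    """Hajek-weighted proportion and a with-replacement linearized variance.

    y: 0/1 outcomes; w: survey weights (inverse inclusion probabilities).
    """
    n = len(y)
    wsum = sum(w)
    p = sum(wi * yi for wi, yi in zip(w, y)) / wsum
    # Linearize: variance of the weighted mean via the score residuals
    z = [wi * (yi - p) / wsum for wi, yi in zip(w, y)]
    zbar = sum(z) / n
    var = n / (n - 1) * sum((zi - zbar) ** 2 for zi in z)
    return p, var

# Hypothetical design: stratum A heavily sampled (weight 1, 5/100 events),
# stratum B under-sampled (weight 10, 4/20 events).
y = [1] * 5 + [0] * 95 + [1] * 4 + [0] * 16
w = [1.0] * 100 + [10.0] * 20

p, var = weighted_rate(y, w)
unweighted = sum(y) / len(y)
print(p, unweighted)     # weighted 0.15 vs naive unweighted 0.075
```

Ignoring the weights halves the estimated rate in this toy design, which is exactly the non-random-sampling bias the abstract warns conventional hierarchical models against.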
Massilia sp. BS-1, a novel violacein-producing bacterium isolated from soil.
Agematu, Hitosi; Suzuki, Kazuya; Tsuya, Hiroaki
2011-01-01
A novel bacterium, Massilia sp. BS-1, producing violacein and deoxyviolacein was isolated from a soil sample collected from Akita Prefecture, Japan. The 16S ribosomal DNA of strain BS-1 displayed 93% homology with its nearest violacein-producing neighbor, Janthinobacterium lividum. Strain BS-1 grew well in a synthetic medium, but required both L-tryptophan and a small amount of L-histidine to produce violacein.
ERIC Educational Resources Information Center
Charman, Melody
2014-01-01
This small-scale pilot study aimed to establish how the mode of response in an examination affects candidates' performances on items that require an extended answer. The sample comprised 46 17-year-old students from two classes (one in a state secondary school and one in a state sixth-form college), who sat a mock A-level English Literature…
Development of an instrument for time activity curve measurements during PET imaging of rodents
NASA Astrophysics Data System (ADS)
Reymond, Jean-Marc; Guez, David; Kerhoas, Sophie; Mangeot, Philippe; Boisgard, Raphaël; Jan, Sébastien; Tavitian, Bertrand; Trebossen, Régine
2007-02-01
Molecular imaging using PET in small rodents commonly requires knowledge of the input function of the tracer (for quantitative and kinetic studies of metabolism, development of new drugs or new tracers, etc.). In this paper, we report the status and performance of a prototype counting system under development at DAPNIA in collaboration with SHFJ. The detection device is made of silicon diodes of 0.3 mm thickness, suited to measuring the positrons emitted by the radiotracer in arterial blood flowing through a thin-walled microtube. Such diodes have low efficiency for the 511 keV gammas from the rodent and thus require only light lead shielding, allowing operation very close to the animal. The detectors, the front-end electronics (for signal preamplification, shaping, and discrimination) and the acquisition circuits are mounted on a single card. The device connects directly to a portable computer via a USB port. This design provides a compact, rugged and portable device for working close to a small-animal PET camera. Preliminary results show the performance of this counting system with an 18F solution and a time-activity curve for FDG blood samples (~30 μL per sample) from a rat.
Image Stability Requirements For a Geostationary Imaging Fourier Transform Spectrometer (GIFTS)
NASA Technical Reports Server (NTRS)
Bingham, G. E.; Cantwell, G.; Robinson, R. C.; Revercomb, H. E.; Smith, W. L.
2001-01-01
A Geostationary Imaging Fourier Transform Spectrometer (GIFTS) has been selected for the NASA New Millennium Program (NMP) Earth Observing-3 (EO-3) mission. Our paper will discuss one of the key GIFTS measurement requirements, Field of View (FOV) stability, and its impact on required system performance. The GIFTS NMP mission is designed to demonstrate new and emerging sensor and data processing technologies with the goal of making revolutionary improvements in meteorological observational capability and forecasting accuracy. The GIFTS payload is a versatile imaging FTS with programmable spectral resolution and spatial scene selection that allows radiometric accuracy and atmospheric sounding precision to be traded in near real time for area coverage. The GIFTS sensor combines high sensitivity with a massively parallel spatial data collection scheme to allow high spatial resolution measurement of the Earth's atmosphere and rapid broad area coverage. An objective of the GIFTS mission is to demonstrate the advantages of high spatial resolution (4 km ground sample distance - gsd) on temperature and water vapor retrieval by allowing sampling in broken cloud regions. This small gsd, combined with the relatively long scan time required (approximately 10 s) to collect high resolution spectra from geostationary (GEO) orbit, may require extremely good pointing control. This paper discusses the analysis of this requirement.
Improved argument-FFT frequency offset estimation for QPSK coherent optical Systems
NASA Astrophysics Data System (ADS)
Han, Jilong; Li, Wei; Yuan, Zhilin; Li, Haitao; Huang, Liyan; Hu, Qianggao
2016-02-01
A frequency offset estimation (FOE) algorithm based on the fast Fourier transform (FFT) of the signal's argument is investigated, which does not require removing the modulated data phase. In this paper, we analyze the flaw of the argument-FFT algorithm and propose a combined FOE algorithm, in which the absolute value of the frequency offset (FO) is accurately calculated by the argument-FFT algorithm with a relatively large number of samples, and the sign of the FO is determined by an FFT-based interpolated discrete Fourier transform (DFT) algorithm with a relatively small number of samples. Compared with previous algorithms based on the argument-FFT, the proposed one has low complexity and can still work effectively with relatively few samples.
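For context, the classic fourth-power FFT estimator, the baseline that argument-FFT methods are set against, can be sketched in a few lines. This is not the authors' algorithm: the signal below is synthetic and noise-free, and the sequence length and offset are arbitrary.

```python
import cmath
import math
import random

random.seed(1)
N = 256
f_true = 0.013                      # normalized frequency offset, cycles/sample

# Noise-free QPSK symbols rotated by the frequency offset
syms = [cmath.exp(1j * (math.pi / 4 + random.randrange(4) * math.pi / 2))
        for _ in range(N)]
x = [s * cmath.exp(2j * math.pi * f_true * n) for n, s in enumerate(syms)]

# Raising to the 4th power strips the QPSK modulation phase
# (every symbol^4 = -1), so the DFT peak of x^4 sits at 4x the offset.
y = [v ** 4 for v in x]
spec = [abs(sum(y[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))
        for k in range(N)]
k_peak = max(range(N), key=spec.__getitem__)
if k_peak > N // 2:                 # fold to a signed frequency
    k_peak -= N
f_est = k_peak / (4 * N)
print(f_est)
```

The resolution of the raw peak search is one bin, i.e. 1/(4N) in offset; interpolating around the DFT peak, as the abstract's combined algorithm does, refines the estimate beyond that grid.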
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
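A hedged sketch of the recipe for the Cox case: set the two-sample difference to delta = slope × 2 × SD(X), hold the expected event count D fixed, and apply the usual normal approximation with var(log HR) ≈ 4/D for two equal groups. The numeric inputs below are illustrative only and do not come from the paper's simulations.

```python
import math
from statistics import NormalDist

def cox_power_equiv_two_sample(slope, sd_x, events, alpha=0.05):
    """Approximate power for a Cox model with one covariate via the
    equivalent two-sample problem: two equal groups whose log hazards
    differ by delta = slope * 2 * sd_x, with var(log HR) ~= 4 / events."""
    nd = NormalDist()
    delta = slope * 2.0 * sd_x
    se = 2.0 / math.sqrt(events)
    return nd.cdf(abs(delta) / se - nd.inv_cdf(1 - alpha / 2))

power = cox_power_equiv_two_sample(slope=0.3, sd_x=1.0, events=200)
print(round(power, 3))
```

The appeal of the reduction is visible here: once the problem is phrased as two groups and a total event count, the standard two-sample power formula does all the work.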
Kumar, B; Han, L-F; Wassenaar, L I; Klaus, P M; Kainz, G G; Hillegonds, D; Brummer, D; Ahmad, M; Belachew, D L; Araguás, L; Aggarwal, P
2016-12-01
Tritium (³H) in natural waters is a powerful tracer of hydrological processes, but its low concentrations require electrolytic enrichment before precise measurements can be made with a liquid scintillation counter. Here, we describe a newly developed, compact tritium enrichment unit (TEU) which can be used to enrich up to 2 L of a water sample. This allows a high enrichment factor (>100) for measuring low ³H contents of <0.05 TU. The TEU uses a small cell (250 mL) with automated re-filling and a CO₂ bubbling technique to neutralize the high alkalinity of enriched samples. The enriched residual sample is retrieved from the cell under vacuum by cryogenic distillation at -20°C, and the tritium enrichment factor for each sample is accurately determined by measuring pre- and post-enrichment ²H concentrations with laser spectrometry. Copyright © 2016. Published by Elsevier Ltd.
International Agreement on Planetary Protection
NASA Technical Reports Server (NTRS)
2000-01-01
The scope includes: maintenance of a NASA policy consistent with international agreements; planetary protection policy management in OSS, with Field Center support; advice from internal and external advisory groups (NRC, NAC/Planetary Protection Task Force); technology research and standards development in bioload characterization; and technology research and development in bioload reduction/sterilization. This presentation focuses on: forward contamination - research on the potential for Earth life to exist on other bodies, improved strategies for planetary navigation and collision avoidance, and improved procedures for sterile spacecraft assembly, cleaning and/or sterilization; and backward contamination - development of sample transfer and container sealing technologies for Earth return, improvement in sample return landing target assessment and navigation strategy, planning for sample hazard determination requirements and procedures, safety certification (liaison to the NEO Program Office for compositional data on small bodies), and facility planning for the sample recovery system, quarantine, and long-term curation of returned samples.
Drone Transport of Chemistry and Hematology Samples Over Long Distances.
Amukele, Timothy K; Hernandez, James; Snozek, Christine L H; Wyatt, Ryan G; Douglas, Matthew; Amini, Richard; Street, Jeff
2017-11-02
We addressed the stability of biological samples in prolonged drone flights by obtaining paired chemistry and hematology samples from 21 adult volunteers in a single phlebotomy event (84 samples total). Half of the samples were held stationary, while the other half were flown for 3 hours (258 km) in a custom active cooling box mounted on the drone. After the flight, 19 chemistry and hematology tests were performed. Seventeen analytes had small or no bias, but glucose and potassium in flown samples showed an 8% and 6.2% bias, respectively. The flown samples (mean, 24.8°C) were a mean of 2.5°C cooler than the stationary samples (mean, 27.3°C) during transportation to the flight field as well as during the flight. The changes in glucose and potassium are consistent with the magnitude and duration of the temperature difference between the flown and stationary samples. Long drone flights of biological samples are feasible but require stringent environmental controls to ensure consistent results. © American Society for Clinical Pathology, 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
NASA Technical Reports Server (NTRS)
Payne, R. W. (Principal Investigator)
1981-01-01
The crop identification procedures used were for spring small grains and are conducive to automation. The performance of the machine processing techniques shows a significant improvement over previously evaluated technology; however, the crop calendars require additional development and refinement prior to integration into automated area estimation technology. The integrated technology is capable of producing accurate and consistent spring small grains proportion estimates. Barley proportion estimation technology was not satisfactorily evaluated because LANDSAT sample segment data were not available for high-density barley, of primary importance in foreign regions, and the low-density segments examined were not judged to give indicative or unequivocal results. Generally, the spring small grains technology is ready for evaluation in a pilot experiment focusing on sensitivity analysis across a variety of agricultural and meteorological conditions representative of the global environment.
NASA Technical Reports Server (NTRS)
Fries, M. D.; Fries, W. D.; McCubbin, F. M.; Zeigler, R. A.
2018-01-01
Mars Sample Return (MSR) requires strict organic contamination control (CC) and contamination knowledge (CK), as outlined by the Mars 2020 Organic Contamination Panel (OCP). This includes a need to monitor surficial organic contamination to ng/sq. cm sensitivity. Achieving and maintaining this degree of surface cleanliness is difficult but has been demonstrated. MSR's CK effort will be very important because all returned samples will be studied thoroughly and in minute detail. Consequently, accurate CK must be collected and characterized to best interpret scientific results from the returned samples. The CK data are required not only to make accurate measurements and interpretations for carbon-depleted martian samples, but also to strengthen the validity of science investigations performed on the samples. The Opera instrument prototype is intended to fulfill a CC/CK role in the assembly, cleaning, and overall contamination history of hardware used in the MSR effort, from initial hardware assembly through post-flight sample curation. Opera is intended to monitor particulate and organic contamination using quartz crystal microbalances (QCMs), in a self-contained portable package that is cleanroom-compliant. The Opera prototype is in initial development, capable of approximately 100 ng/sq. cm organic contamination sensitivity, with additional development planned to achieve 1 ng/sq. cm. The Opera prototype was funded by the 2017 NASA Johnson Space Center Innovation Charge Account (ICA), which provides funding for small, short-term projects.
Competitive intelligence information management and innovation in small technology-based companies
NASA Astrophysics Data System (ADS)
Tanev, Stoyan
2007-05-01
In this article we examine how (i) company type and (ii) the competitive intelligence information used by small technology-based companies affect their innovation performance. The focus is on the specific information types used and not on the information sources. Information topics are classified in four groups: customers (10), company (9), competitors (11) and industry (12). The sample consists of 45 small new technology-based companies, specialized suppliers, and service companies from a variety of sectors: software, photonics, telecommunications, biomedical engineering and biotech, traditional manufacturing, etc. The results suggest that the total number of intelligence information topics companies use to make decisions about innovation is not associated with the number of their new products, processes, services and patents. Therefore the companies in our sample do not seem to have the resources, processes or value systems required to use different competitive intelligence information when making decisions on innovation, or may rely more on their own internal logic than on external information. Companies are classified using a Pavitt-like taxonomy. Service companies are considered as a separate company type. This allows for explicitly studying both the innovative role of new services in product-driven companies and the role of new product development in service companies.
Probing the Small-scale Structure in Strongly Lensed Systems via Transdimensional Inference
NASA Astrophysics Data System (ADS)
Daylan, Tansu; Cyr-Racine, Francis-Yan; Diaz Rivero, Ana; Dvorkin, Cora; Finkbeiner, Douglas P.
2018-02-01
Strong lensing is a sensitive probe of the small-scale density fluctuations in the Universe. We implement a pipeline to model strongly lensed systems using probabilistic cataloging, which is a transdimensional, hierarchical, and Bayesian framework to sample from a metamodel (union of models with different dimensionality) consistent with observed photon count maps. Probabilistic cataloging allows one to robustly characterize modeling covariances within and across lens models with different numbers of subhalos. Unlike traditional cataloging of subhalos, it does not require model subhalos to improve the goodness of fit above the detection threshold. Instead, it allows the exploitation of all information contained in the photon count maps—for instance, when constraining the subhalo mass function. We further show that, by not including these small subhalos in the lens model, fixed-dimensional inference methods can significantly mismodel the data. Using a simulated Hubble Space Telescope data set, we show that the subhalo mass function can be probed even when many subhalos in the sample catalogs are individually below the detection threshold and would be absent in a traditional catalog. The implemented software, Probabilistic Cataloger (PCAT), is made publicly available at https://github.com/tdaylan/pcat.
Approximation of Failure Probability Using Conditional Sampling
NASA Technical Reports Server (NTRS)
Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.
2008-01-01
In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
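A toy version of the conditional-sampling idea, under invented specifics: the failure event is X > 3 for a standard normal X, and the bounding set B = {X > 2.5} has an analytically known probability. Sampling only within B and rescaling by P(B) concentrates every sample point near the failure region, instead of wasting almost all of them as plain Monte Carlo would.

```python
import math
import random

def phi(x):                      # standard normal CDF
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def phi_inv(p, lo=-10.0, hi=10.0):
    """Inverse normal CDF by bisection (ample accuracy for this sketch)."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

random.seed(0)
# Failure set F = {X > 3}; bounding set B = {X > 2.5} contains F and
# its probability is known in closed form.
p_bound = 1 - phi(2.5)
n = 20_000
hits = 0
for _ in range(n):
    # Inverse-CDF sampling of X conditional on X > 2.5
    u = random.random()
    x = phi_inv(phi(2.5) + u * p_bound)
    if x > 3.0:
        hits += 1
p_fail = p_bound * hits / n      # P(F) = P(B) * P(F | B)
print(p_fail)                    # true value: 1 - phi(3) ~= 1.35e-3
```

With the same 20,000 points, direct Monte Carlo would see only about 27 failures, so the conditional estimate has a far tighter confidence interval for the same budget, which is the effect the abstract quantifies.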
48 CFR 19.702 - Statutory requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... business, HUBZone small business, small disadvantaged business, and women-owned small business concerns..., HUBZone small business, small disadvantaged business, and women-owned small business concerns. (a) Except... PROGRAMS SMALL BUSINESS PROGRAMS The Small Business Subcontracting Program 19.702 Statutory requirements...
Maes, Jan; Verlooy, Lien; Buenafe, Olivia E.; de Witte, Peter A. M.; Esguerra, Camila V.; Crawford, Alexander D.
2012-01-01
Zebrafish are rapidly growing in popularity as an in vivo model system for chemical genetics, drug discovery, and toxicology, and more recently also for natural product discovery. Experiments involving the pharmacological evaluation of small molecules or natural product extracts in zebrafish bioassays require the effective delivery of these compounds to embryos and larvae. While most samples to be screened are first solubilized in dimethyl sulfoxide (DMSO), which is then diluted in the embryo medium, often this method is not sufficient to prevent the immediate or eventual precipitation of the sample. Certain compounds and extracts are also not highly soluble in DMSO. In such instances the use of carriers and/or other solvents might offer an alternative means to achieve the required sample concentration. Towards this end, we determined the maximum tolerated concentration (MTC) of several commonly used solvents and carriers in zebrafish embryos and larvae at various developmental stages. Solvents evaluated for this study included acetone, acetonitrile, butanone, dimethyl formamide, DMSO, ethanol, glycerol, isopropanol, methanol, polyethylene glycol (PEG-400), propylene glycol, and solketal, and carriers included albumin (BSA) and cyclodextrin (2-hydroxypropyl-beta-cyclodextrin, or HPBCD). This study resulted in the identification of polyethylene glycol (PEG-400), propylene glycol, and methanol as solvents that were relatively well-tolerated over a range of developmental stages. In addition, our results showed that acetone was well-tolerated by embryos but not by larvae, and 1% cyclodextrin (HPBCD) was well-tolerated by both embryos and larvae, indicating the utility of this carrier for compound screening in zebrafish. However, given the relatively small differences (2-3 fold) between concentrations that are apparently safe and those that are clearly toxic, further studies - e.g. omics analyses - should be carried out to determine which cellular processes and signalling pathways are affected by any solvents and carriers that are used for small-molecule screens in zebrafish. PMID:23082109
Recognition Using Hybrid Classifiers.
Osadchy, Margarita; Keren, Daniel; Raviv, Dolev
2016-04-01
A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars, etc. in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve a classification rate identical to or better than SVM, while requiring far smaller memory and lower computational complexity to train and apply.
NASA Astrophysics Data System (ADS)
Marcó P., L. M.; Jiménez, E.; Hernández C., E. A.; Rojas, A.; Greaves, E. D.
2001-11-01
The method of quantification using the Compton peak as an internal standard, developed in a previous work, was applied to the routine determination of Fe, Cu, Zn and Se in serum samples from normal individuals and cancer patients by total reflection X-ray fluorescence spectrometry. Samples were classified according to the age and sex of the donor, in order to determine reference values for normal individuals. Results indicate that the Zn/Cu ratio and the Cu concentration could prove to be useful tools for cancer diagnosis. Significant differences in these parameters between the normal and cancer groups were found for all age ranges. The multielemental character of the technique, coupled with the small amounts of sample required and the short analysis time, makes it a valuable tool in clinical analysis.
NASA Astrophysics Data System (ADS)
Takayama, T.; Iwasaki, A.
2016-06-01
Above-ground biomass prediction in tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, prediction accuracy is affected by the small-sample-size problem, which commonly manifests as overfitting when the number of training samples is smaller than the dimensionality of the data, owing to the time, cost, and human resources required for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to narrow bandwidths, and exhibit local or global peak shifts due to instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model by encouraging sparsity and grouping: sparsity provides the dimensionality reduction that addresses the small-sample-size problem, while grouping mitigates the noise and peak-shift problems. The prediction model provided higher accuracy, with a cross-validated root-mean-square error (RMSE) of 66.16 t/ha, than the other methods considered: multiple linear analysis, partial least squares regression, and lasso regression. Furthermore, fusing spectral information with spatial information derived from a texture index increased the prediction accuracy to an RMSE of 62.62 t/ha. This analysis demonstrates the efficiency of fused lasso and image texture in biomass estimation of tropical forests.
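The sparsity half of the band-selection idea can be illustrated with a plain lasso fitted by coordinate descent. The fused penalty that additionally encourages grouping of adjacent bands is omitted here, and the data are synthetic: 20 hypothetical "bands" of which only two drive the response.

```python
import random

def soft(a, t):
    """Soft-thresholding operator for the L1 penalty."""
    return (a - t) if a > t else (a + t) if a < -t else 0.0

def lasso_cd(X, y, lam, iters=200):
    """Coordinate-descent lasso: min (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    resid = y[:]                             # residual r = y - X beta
    for _ in range(iters):
        for j in range(p):
            xj = [row[j] for row in X]
            zj = sum(v * v for v in xj) / n
            rho = sum(xj[i] * (resid[i] + xj[i] * beta[j])
                      for i in range(n)) / n
            new = soft(rho, lam) / zj
            if new != beta[j]:               # keep the residual in sync
                d = new - beta[j]
                for i in range(n):
                    resid[i] -= d * xj[i]
                beta[j] = new
    return beta

random.seed(0)
n, p = 60, 20                                # 20 "bands", 60 field plots
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [3 * row[2] - 2 * row[7] + random.gauss(0, 0.3) for row in X]

beta = lasso_cd(X, y, lam=0.2)
selected = [j for j, b in enumerate(beta) if abs(b) > 1e-6]
print(selected)                              # informative bands 2 and 7 survive
```

The L1 penalty zeroes out the uninformative bands, which is the dimensionality reduction the abstract invokes; the fused variant would add a penalty on |b[j] - b[j+1]| so neighboring bands enter or leave the model together.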
Deans, Zandra C; Costa, Jose Luis; Cree, Ian; Dequeker, Els; Edsjö, Anders; Henderson, Shirley; Hummel, Michael; Ligtenberg, Marjolijn Jl; Loddo, Marco; Machado, Jose Carlos; Marchetti, Antonio; Marquis, Katherine; Mason, Joanne; Normanno, Nicola; Rouleau, Etienne; Schuuring, Ed; Snelson, Keeda-Marie; Thunnissen, Erik; Tops, Bastiaan; Williams, Gareth; van Krieken, Han; Hall, Jacqueline A
2017-01-01
The clinical demand for mutation detection within multiple genes from a single tumour sample requires molecular diagnostic laboratories to develop rapid, high-throughput, highly sensitive, accurate and parallel testing within tight budget constraints. To meet this demand, many laboratories employ next-generation sequencing (NGS) based on small amplicons. Building on existing publications and general guidance for the clinical use of NGS and learnings from germline testing, the following guidelines establish consensus standards for somatic diagnostic testing, specifically for identifying and reporting mutations in solid tumours. These guidelines cover the testing strategy, implementation of testing within clinical service, sample requirements, data analysis and reporting of results. In conjunction with appropriate staff training and international standards for laboratory testing, these consensus standards for the use of NGS in molecular pathology of solid tumours will assist laboratories in implementing NGS in clinical services.
Emerging technologies in medical applications of minimum volume vitrification
Zhang, Xiaohui; Catalano, Paolo N; Gurkan, Umut Atakan; Khimji, Imran; Demirci, Utkan
2011-01-01
Cell/tissue biopreservation has broad public health and socio-economic impact affecting millions of lives. Cryopreservation technologies provide an efficient way to preserve cells and tissues targeting the clinic for applications including reproductive medicine and organ transplantation. Among these technologies, vitrification has displayed significant improvement in post-thaw cell viability and function by eliminating harmful effects of ice crystal formation compared to the traditional slow freezing methods. However, high cryoprotectant agent concentrations are required, which induces toxicity and osmotic stress to cells and tissues. It has been shown that vitrification using small sample volumes (i.e., <1 μl) significantly increases cooling rates and hence reduces the required cryoprotectant agent levels. Recently, emerging nano- and micro-scale technologies have shown potential to manipulate picoliter to nanoliter sample sizes. Therefore, the synergistic integration of nanoscale technologies with cryogenics has the potential to improve biopreservation methods. PMID:21955080
Cheow, Lih Feng; Viswanathan, Ramya; Chin, Chee-Sing; Jennifer, Nancy; Jones, Robert C; Guccione, Ernesto; Quake, Stephen R; Burkholder, William F
2014-10-07
Homogeneous assay platforms for measuring protein-ligand interactions are highly valued due to their potential for high-throughput screening. However, the implementation of these multiplexed assays in conventional microplate formats is considerably expensive due to the large amounts of reagents required and the need for automation. We implemented a homogeneous fluorescence anisotropy-based binding assay in an automated microfluidic chip to simultaneously interrogate >2300 pairwise interactions. We demonstrated the utility of this platform in determining the binding affinities between chromatin-regulatory proteins and different post-translationally modified histone peptides. The microfluidic chip assay produces comparable results to conventional microtiter plate assays, yet requires 2 orders of magnitude less sample and an order of magnitude fewer pipetting steps. This approach enables one to use small samples for medium-scale screening and could ease the bottleneck of large-scale protein purification.
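The fluorescence-anisotropy readout rests on a standard single-site binding isotherm: the observed anisotropy is a bound-fraction-weighted average of the free and bound values. A hedged sketch of fitting a dissociation constant from a simulated titration (the anisotropy values, concentrations, and noise level are illustrative, not the chip's actual analysis pipeline):

```python
import numpy as np
from scipy.optimize import curve_fit

def anisotropy(ligand, kd, r_free=0.05, r_bound=0.25):
    """Single-site binding: observed anisotropy is the bound-fraction-weighted
    average of the free and bound anisotropies (hyperbolic isotherm)."""
    frac_bound = ligand / (kd + ligand)
    return r_free + (r_bound - r_free) * frac_bound

# Simulate a titration, then recover Kd by least squares.
rng = np.random.default_rng(1)
conc = np.logspace(-2, 3, 12)                      # ligand concentrations (a.u.)
true_kd = 5.0
obs = anisotropy(conc, true_kd) + rng.normal(0.0, 0.002, size=conc.size)
(kd_fit,), _ = curve_fit(lambda L, kd: anisotropy(L, kd), conc, obs, p0=[1.0])
```

In a multiplexed format the same fit is simply repeated per protein-peptide pair, which is why reagent volume per interaction dominates the cost comparison.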
Use of COTS Batteries on ISS and Shuttle
NASA Technical Reports Server (NTRS)
Jeevarajan, Judith A.
2004-01-01
This presentation focuses on COTS battery testing for energy content, toxicity, hazards, failure modes, and controls for different battery chemistries. It also discusses the current program requirements, challenges with COTS batteries in manned vehicles, COTS methodology, and JSC test details, and gives a list of incidents from consumer product safety commissions. The battery test process involved testing new batteries for engineering certification, qualification, and flight acceptance, at both cell and battery level, under environmental, performance, and abuse conditions. The conclusions and recommendations were that: high risk is undertaken with the use of COTS batteries; hazard control verification is required to allow the use of these batteries on manned space flights; failures during use cannot be understood if different failure scenarios are not tested on the ground; and testing is performed on small sample numbers due to restrictions on cost and time. They recommend testing larger sample sizes to gain more confidence in the operation of the hazard controls.
Morgan, Sue
2009-07-01
One of the well-known needs of teenagers and young adults with cancer (TYAs) is their quest for knowledge about their illness and its treatment. In addition, they desire to be treated as partners and as an integral part of the team who care for them. This small exploratory study aims to address these issues in the development of a unique project in which the young people were given unprecedented access to their own cancer samples and to the professional experts who analyse those samples. A small cohort of 31 TYAs, aged 13-24 years, was invited to look at their samples under a microscope and discuss them with the experts; of these, 4 declined. A number of questionnaires, containing both open and closed questions, were given before, immediately after, and one month after the viewing session. The results were analysed using SPSS13 and content analysis. The study established that viewing cancer samples had no measurable effect on the TYAs' gaining of new knowledge, sense of control, or the choices that they made. However, it did show how the young people felt in light of actually 'seeing' their own cancer, and how this seemed to improve their overall perception of their disease. This is a new body of knowledge that requires further, more detailed and rigorous work.
Laboratory-based x-ray phase-contrast tomography enables 3D virtual histology
NASA Astrophysics Data System (ADS)
Töpperwien, Mareike; Krenkel, Martin; Quade, Felix; Salditt, Tim
2016-09-01
Due to the large penetration depth and small wavelength, hard x-rays offer a unique potential for 3D biomedical and biological imaging, combining high resolution with large sample volume. However, in classical absorption-based computed tomography, soft tissue shows only weak contrast, limiting the achievable resolution. With the advent of phase-contrast methods, the much stronger phase shift induced by the sample can now be exploited. For high resolution, free-space propagation behind the sample is particularly well suited to make the phase shift visible. Contrast formation is based on the self-interference of the transmitted beam, resulting in object-induced intensity modulations in the detector plane. As this method requires a sufficiently high degree of spatial coherence, it was long perceived as a synchrotron-based imaging technique. In this contribution we show that, by combining high-brightness liquid-metal-jet microfocus sources with suitable sample preparation techniques, as well as optimized geometry, detection, and phase retrieval, excellent three-dimensional image quality can be obtained, revealing the anatomy of a cobweb spider in high detail. This opens up new opportunities for 3D virtual histology of small organisms. Importantly, the image quality now reaches a level accessible to automatic 3D segmentation.
A simple on-line arterial time-activity curve detector for [O-15] water PET studies
NASA Astrophysics Data System (ADS)
Wollenweber, S. D.; Hichwa, R. D.; Ponto, L. L. B.
1997-08-01
A simple, automated on-line detector system has been fabricated and implemented to detect the arterial time-activity curve (TAC) for bolus-injection [O-15] water PET studies. This system offers two significant improvements over existing systems: a pump mechanism is not required to control arterial blood flow through the detector, and correction of the time-activity curve for dispersion in external tubing is unnecessary. The [O-15] positrons emanating from blood within a thin-walled, 0.134 cm inner-diameter plastic tube are detected by a 0.5 cm wide by 1.0 cm long by 0.1 cm thick plastic scintillator mounted on a miniature PMT. Photon background is reduced to insignificant levels by a 2.0 cm thick cylindrical lead shield. Mean cerebral blood flow (mCBF) determined from an autoradiographic model and from the TAC measured by 1-second automated sampling was compared to that calculated from a TAC acquired using 5-second integrated manual samples. Improvements in timing resolution (1 s vs. 5 s) cause small but significant differences between the two sampling methods. Dispersion is minimized by the small tubing diameters, the short lengths of tubing between the radial arterial sampling site and the detector, and the presence of a 3-way valve 10 cm proximal to the detector.
NASA Astrophysics Data System (ADS)
Judák, Péter; Grainger, Janelle; Goebel, Catrin; Van Eenoo, Peter; Deventer, Koen
2017-08-01
The mobile-phase additive dimethyl sulfoxide (DMSO) has been described as a useful tool to enhance electrospray ionization (ESI) of peptides and proteins. So far, this technique has mainly been used in proteomic/peptide research, and its applicability in a routine clinical laboratory setting (i.e., doping control analysis) has not yet been described. This work provides a simple, easy-to-implement screening method for the detection of doping-relevant small peptides (GHRPs, GnRHs, GHS, and vasopressin analogues) with molecular weights below 2 kDa, applying DMSO in the mobile phase. The gain in sensitivity was sufficient to inject the urine samples after a 2-fold dilution step, omitting a time-consuming sample preparation. The analytical procedure was validated for the qualitative determination of 36 compounds, including 13 metabolites. The detection limits (LODs) ranged between 50 and 1000 pg/mL and were compliant with the 2 ng/mL minimum detection level required by the World Anti-Doping Agency (WADA) for all the target peptides. To demonstrate the feasibility of the work, urine samples obtained from patients who had been treated with desmopressin or leuprolide, as well as urine samples that had been declared adverse analytical findings, were analyzed.
NASA Astrophysics Data System (ADS)
Qiang, Wei
2011-12-01
We describe a sampling scheme for two-dimensional (2D) solid-state NMR experiments that can be readily applied to sensitivity-limited samples. The scheme uses a continuous, non-uniform sampling profile for the indirect dimension, i.e., the number of acquisitions decreases as a function of the evolution time (t1) in the indirect dimension. For a beta-amyloid (Aβ) fibril sample, we observed an overall 40-50% signal enhancement as measured by cross-peak volume, while the cross-peak linewidths remained comparable to those obtained with regular sampling and processing strategies. Both linear and Gaussian decay functions for the acquisition numbers result in a similar percentage increase in signal. In addition, we demonstrate that this sampling approach can be combined with different dipolar recoupling methods such as radiofrequency-assisted diffusion (RAD) and finite-pulse radio-frequency-driven recoupling (fpRFDR). This sampling scheme is especially suitable for sensitivity-limited samples that require long signal averaging for each t1 point, for instance biological membrane proteins where only a small fraction of the sample is isotopically labeled.
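The decaying acquisition schedule can be illustrated with a short sketch. The Gaussian width, minimum scan count, and totals below are illustrative assumptions, not the published acquisition parameters:

```python
import numpy as np

def gaussian_scan_schedule(n_t1, max_scans, min_scans=4, sigma_frac=0.4):
    """Number of scans per t1 increment, decaying as a Gaussian of the
    evolution time; min_scans keeps late increments from vanishing."""
    t = np.arange(n_t1)
    scans = max_scans * np.exp(-0.5 * (t / (sigma_frac * n_t1)) ** 2)
    return np.maximum(min_scans, np.round(scans)).astype(int)

schedule = gaussian_scan_schedule(n_t1=64, max_scans=128)
uniform_total = 64 * 128                   # scans for a uniform schedule
nus_total = int(schedule.sum())            # scans for the decaying schedule
saving = 1.0 - nus_total / uniform_total   # fraction of spectrometer time saved
```

Because the indirect-dimension signal itself decays with t1, spending fewer scans at long evolution times costs little sensitivity while freeing time to average the early, information-rich increments.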
NASA Technical Reports Server (NTRS)
Hejduk, M. D.; Cowardin, H. M.; Stansbery, Eugene G.
2012-01-01
In performing debris surveys of deep-space orbital regions, the considerable volume of the area to be surveyed and the increased orbital altitude suggest optical telescopes as the most efficient survey instruments; but to proceed this way, methodologies for debris object size estimation using only optical tracking and photometric information are needed. Basic photometry theory indicates that size estimation should be possible if satellite albedo and shape are known. One method for estimating albedo is to try to determine the object's material type photometrically, as one can determine the albedos of common satellite materials in the laboratory. Examination of laboratory filter photometry (using Johnson BVRI filters) on a set of satellite material samples indicates that most material types can be separated at the 1-sigma level via B-R versus R-I color differences with a relatively small amount of required resampling, and objects that remain ambiguous can be resolved by B-R versus B-V color differences and solar radiation pressure differences. To estimate shape, a technique advanced by Hall et al. [1], based on phase-brightness density curves and not requiring any a priori knowledge of attitude, has been modified slightly to try to make it more resistant to the specular characteristics of different materials and to reduce the number of samples necessary to make robust shape determinations. Working from a gallery of idealized debris shapes, the modified technique identifies most shapes within this gallery correctly, also with a relatively small amount of resampling. These results are, of course, based on relatively small laboratory investigations and simulated data, and expanded laboratory experimentation and further investigation with in situ survey measurements will be required in order to assess their actual efficacy under survey conditions; but these techniques show sufficient promise to justify this next level of analysis.
Tailoring Laser Propulsion for Future Applications in Space
NASA Astrophysics Data System (ADS)
Eckel, Hans-Albert; Scharring, Stefan
2010-10-01
Pulsed laser propulsion may turn out to be a low-cost alternative for the transportation of small payloads in the future. In recent years DLR investigated this technology with the goal of cheaply launching small satellites into low earth orbit (LEO) with payload masses on the order of 5 to 10 kg. Since the required high-power pulsed laser sources are not yet on the horizon, DLR focused on new applications based on available laser technology. In space, i.e. in weightlessness, there exists a wide range of missions requiring small thrusters that can be propelled by laser power. This covers space logistics and sample-return missions as well as position keeping and attitude control of satellites. First, a report is given on the proof of concept of a remote-controlled laser rocket with a thrust-vector steering device integrated in a parabolic nozzle. Second, the road from the previous ground-based flight experiments in earth's gravity using a 100-J class laser to flight experiments with a parabolic thruster in an artificial 2D zero gravity on an air-cushion table employing a 1-J class laser and, with even less energy, new investigations in the field of laser micro-propulsion is reviewed.
Transparency and oversight in local wellness policies.
Chriqui, Jamie F; Chaloupka, Frank J
2011-02-01
Advocates have called for increased wellness policy transparency and oversight through the use of health advisory councils. This study examines (1) wellness policy transparency, (2) advisory council requirements, (3) factors associated with each, and (4) whether transparency or advisory council requirements are indicative of a stronger policy addressing nutrition and physical activity. Policies for school year 2007-2008 were obtained from a nationally representative sample of 641 districts and analyzed for their applicability to elementary, middle, and high school levels. Main outcome measures included (1) policy transparency (online availability), (2) advisory council requirements, and (3) overall policy strength. T-tests assessed variability in policy strength by transparency and advisory council requirements. Multivariate logistic and linear regression analyses controlled for district size, socioeconomic status, race/ethnicity, region, and locale; models of advisory council/policy strength relationships also controlled for state advisory council requirements. More than 41% of districts posted wellness policies online and more than 43% required advisory councils. Transparency was less likely in small-/medium-sized and non-southern districts; and, for elementary school policies, most common in majority Hispanic districts. Advisory council requirements were less likely in small-/medium-sized districts for middle/high school policies and more likely in majority Hispanic districts for elementary school policies. After adjusting for all covariates, transparency was not associated with policy strength, but advisory council requirements significantly predicted policy strength. Transparency may facilitate awareness, but it does not mean that wellness policies will be stronger; however, advisory council requirements may be a marker for stronger policies. © 2011, American School Health Association.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Khawaja, M. Sami; Rushton, Josh
Evaluating an energy efficiency program requires assessing the total energy and demand saved through all of the energy efficiency measures provided by the program. For large programs, the direct assessment of savings for each participant would be cost-prohibitive. Even if a program is small enough that a full census could be managed, such an undertaking would almost always be an inefficient use of evaluation resources. The bulk of this chapter describes methods for minimizing and quantifying sampling error. Measurement error and regression error are discussed in various contexts in other chapters.
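The trade-off described above, sampling participants instead of a full census while controlling sampling error, is often expressed through a simple-random-sample size formula. A sketch follows; the "90/10" precision criterion and the conventional coefficient of variation of 0.5 are common evaluation conventions, used here as assumptions:

```python
from math import ceil
from statistics import NormalDist

def srs_sample_size(cv, rel_precision, confidence=0.90):
    """Simple-random-sample size so the estimated mean savings is within
    +/- rel_precision of the truth at the given confidence, for an assumed
    coefficient of variation (cv) of savings across participants."""
    z = NormalDist().inv_cdf(1.0 - (1.0 - confidence) / 2.0)
    return ceil((z * cv / rel_precision) ** 2)

# The common "90/10" evaluation criterion with the conventional cv = 0.5:
n_90_10 = srs_sample_size(cv=0.5, rel_precision=0.10)   # -> 68
```

The formula makes the chapter's point concrete: required sample size grows with the square of the variability, so a well-chosen sample of under a hundred sites can often stand in for a census of thousands.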
Development of Skylab medical equipment and flight preparations
NASA Technical Reports Server (NTRS)
Johnston, R. S.; Stonesifer, J. C.; Hawkins, W. R.
1975-01-01
The major medical systems in the Skylab orbital workshop are described. They comprise the food system, the waste management system, operational bioinstrumentation, personal hygiene, gas sampling, an inflight medical support system, and a cardiovascular counterpressure garment. Life sciences experiments carried out aboard Skylab are also reviewed; these include an ergometer and metabolic analyzer, a lower-body negative pressure device, an electrode harness and body temperature probe, a blood pressure cuff, a leg volume measuring band, sleep studies, a body-mass measuring device, a rotating litter chair, a blood sample processor, and small-mass measuring apparatus. All performance requirements were met with the equipment, and no failures were encountered.
Micro-crystallography comes of age
Smith, Janet L.; Fischetti, Robert F.; Yamamoto, Masaki
2012-01-01
The latest revolution in macromolecular crystallography was incited by the development of dedicated, user-friendly micro-crystallography beamlines. Brilliant X-ray beams of diameter 20 microns or less, now available at most synchrotron sources, enable structure determination from samples that were previously inaccessible. Relative to traditional crystallography, crystals with one or more small dimensions yield diffraction patterns with vastly improved signal-to-noise when recorded with an appropriately matched beam size. Structures can be solved from isolated, well-diffracting regions within inhomogeneous samples. This review summarizes the technological requirements and approaches to producing micro-beams and how they continue to change the practice of crystallography. PMID:23021872
Wider-Opening Dewar Flasks for Cryogenic Storage
NASA Technical Reports Server (NTRS)
Ruemmele, Warren P.; Manry, John; Stafford, Kristin; Bue, Grant; Krejci, John; Evernden, Bent
2010-01-01
Dewar flasks have been proposed as containers for relatively long-term (25-day) storage of perishable scientific samples or other perishable objects at a temperature of -175 °C. The refrigeration would be maintained through slow boiling of liquid nitrogen (LN2). For the purposes of the application for which these containers were proposed, (1) the neck openings of commercial off-the-shelf (COTS) Dewar flasks are too small for most NASA samples; (2) the round shapes of the COTS containers give rise to unacceptably low packing efficiency in rectangular cargo compartments; and (3) the COTS containers include metal structures that are too thermally conductive, such that they cannot, without exceeding size and weight limits, hold enough LN2 for the required long-term storage. In comparison with COTS Dewar flasks, the proposed containers would be rectangular, yet would satisfy the long-term storage requirement without exceeding size and weight limits; would have larger neck openings; and would have greater sample volumes, roughly doubling the sample volume as a fraction of total volume. The proposed containers would be made partly of aerospace-type composite materials and would include vacuum walls, multilayer insulation, and aerogel insulation.
NASA Technical Reports Server (NTRS)
Merritt, Dawn A.; Hayes, J. M.; Des Marais, David J.
1995-01-01
Less than 15 min is required for the determination of delta-13C (PDB scale) with a precision of 0.2 ppt (1 sigma, single measurement) in 5-mL samples of air containing CH4 at natural levels (1.7 ppm). An analytical system including a sample-introduction unit incorporating a preparative gas chromatograph (GC) column for separation of CH4 from N2, O2, and Ar is described. The 15-min procedure includes time for operation of that system, high-resolution chromatographic separation of the CH4, on-line combustion and purification of the products, and isotopic calibration. Analyses of standards demonstrate that systematic errors are absent and that there is no dependence of observed delta values on sample size. For samples containing 100 ppm or more CH4, preconcentration is not required and the analysis time is less than 5 min. The system utilizes a commercially available, high-sensitivity isotope-ratio mass spectrometer. Under optimal conditions of sample handling and combustion, performance of the system is within a factor of 2 of the shot-noise limit. The potential therefore exists for analysis of samples as small as 15 pmol CH4 with a standard deviation of less than 1 ppt.
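The reported delta values follow the standard definition of delta-13C as a per-mil deviation of the sample isotope ratio from a standard ratio. A minimal sketch (the default standard ratio is the commonly quoted PDB value, treated here as an illustrative assumption):

```python
def delta13c(r_sample, r_standard=0.0112372):
    """delta-13C in per mil: deviation of the sample 13C/12C ratio from a
    standard ratio (default shown is the commonly quoted PDB 13C/12C value;
    treat it as an illustrative assumption)."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample whose isotope ratio is 0.1% above the standard reads as +1 per mil.
d = delta13c(0.0112372 * 1.001)
```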
Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen
2017-01-01
Assessing equivalence or similarity has drawn much attention recently, as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. There is thus urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous; they may be discrete. In this paper, the authors derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that the required sample size depends heavily on the dispersion parameter. Misusing a Poisson model for negative binomial data can therefore easily lose up to 20% of power, depending on the value of the dispersion parameter.
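The dependence of sample size on the dispersion parameter can be illustrated with a simple normal-approximation calculation for a two-arm comparison. This is not the authors' three-arm derivation; the parameterization var = mu + k*mu^2 is one common negative-binomial convention, assumed here along with all numeric values:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(mu1, mu2, dispersion=None, alpha=0.05, power=0.8):
    """Normal-approximation sample size per arm for comparing two mean event
    counts. dispersion=None gives Poisson (var = mu); otherwise a negative
    binomial variance var = mu + dispersion * mu**2 is assumed."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    var = (lambda mu: mu) if dispersion is None else (lambda mu: mu + dispersion * mu ** 2)
    return ceil((z_a + z_b) ** 2 * (var(mu1) + var(mu2)) / (mu1 - mu2) ** 2)

n_poisson = n_per_arm(2.0, 1.5)                  # ignores overdispersion
n_negbin = n_per_arm(2.0, 1.5, dispersion=0.5)   # accounts for it
```

Because the negative binomial variance exceeds the Poisson variance at the same mean, ignoring overdispersion understates the required sample size, which is the mechanism behind the power loss the abstract reports.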
NASA Astrophysics Data System (ADS)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun
2017-12-01
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency of the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions of varying dimensionality and complexity, in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
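The flavor of a hybrid exploration/exploitation score can be sketched in one dimension. This is a loose illustration of a Taylor-expansion-based design step, not the published TEAD algorithm; the toy model, cubic surrogate, and equal weighting of the two terms are all assumptions:

```python
import numpy as np

f = lambda x: np.sin(3.0 * x) + 0.5 * x        # toy stand-in for the expensive model
samples = np.array([0.0, 0.5, 1.0, 2.0])       # initial design points
values = f(samples)

def adaptive_step(samples, values, candidates):
    """Score each candidate by (distance to nearest sample) + (gap between a
    cubic surrogate and its first-order Taylor expansion at that sample),
    then return the best-scoring candidate as the next model-run location."""
    s = np.poly1d(np.polyfit(samples, values, deg=3))
    ds = s.deriv()
    scores = []
    for x in candidates:
        j = int(np.argmin(np.abs(samples - x)))
        dist = abs(samples[j] - x)                        # exploration term
        taylor = values[j] + ds(samples[j]) * (x - samples[j])
        scores.append(dist + abs(s(x) - taylor))          # + nonlinearity term
    return float(candidates[int(np.argmax(scores))])

candidates = np.linspace(0.0, 2.0, 41)
x_new = adaptive_step(samples, values, candidates)
```

The distance term keeps the design space-filling, while the Taylor-gap term pulls new samples toward regions where the surrogate is strongly nonlinear and therefore least trustworthy.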
Radlinski, A.P.; Mastalerz, Maria; Hinde, A.L.; Hainbuchner, M.; Rauch, H.; Baron, M.; Lin, J.S.; Fan, L.; Thiyagarajan, P.
2004-01-01
This paper discusses the applicability of small angle X-ray scattering (SAXS) and small angle neutron scattering (SANS) techniques for determining the porosity, pore size distribution and internal specific surface area in coals. The method is noninvasive, fast, inexpensive and does not require complex sample preparation. It uses coal grains of about 0.8 mm size mounted in standard pellets as used for petrographic studies. Assuming spherical pore geometry, the scattering data are converted into the pore size distribution in the size range 1 nm (10 Å) to 20 μm (200,000 Å) in diameter, accounting for both open and closed pores. FTIR as well as SAXS and SANS data for seven samples of oriented whole coals and corresponding pellets with vitrinite reflectance (Ro) values in the range 0.55% to 5.15% are presented and analyzed. Our results demonstrate that pellets adequately represent the average microstructure of coal samples. The scattering data have been used to calculate the maximum surface area available for methane adsorption. Total porosity as a percentage of sample volume is calculated and compared with worldwide trends. By demonstrating the applicability of SAXS and SANS techniques to determine the porosity, pore size distribution and surface area in coals, we provide a new and efficient tool, which can be used for any type of coal sample, from a thin slice to a representative sample of a thick seam. © 2004 Elsevier B.V. All rights reserved.
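The conversion from scattering data to a pore size distribution under the spherical-pore assumption relies on the homogeneous-sphere form factor. A minimal sketch of that standard result (the small-argument guard and the example radius are implementation choices, not values from the paper):

```python
import numpy as np

def sphere_form_factor(q, radius):
    """Normalized scattering form factor of a homogeneous sphere:
    P(q) = [3 (sin(qR) - qR cos(qR)) / (qR)^3]^2, with P -> 1 as q -> 0."""
    x = np.asarray(q, dtype=float) * radius
    amp = np.where(x < 1e-6, 1.0,
                   3.0 * (np.sin(x) - x * np.cos(x)) / np.maximum(x, 1e-12) ** 3)
    return amp ** 2

q = np.linspace(1e-4, 1.0, 500)          # scattering vector (1/Å, illustrative)
p = sphere_form_factor(q, radius=10.0)   # e.g. a 10 Å pore radius
```

Fitting measured intensity as a size-weighted superposition of such form factors is what turns the scattering curve into a distribution over pore diameters.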
Naval Medical Research and Development News. Volume 7, Issue 9
2015-09-01
satisfaction with the simulated training; career intentions; and general, occupational, and task-specific self-efficacy using pretest and post-test ...samples needed to be transported to the labs for testing. What was needed was a rapid, on-site diagnostic test that could be done quickly. "The U.S...relatively small size of the group -- usually only a handful of people per deployment -- required members to juggle multiple tasks on their own, including
Translations on Environmental Quality, Number 150
1977-10-17
again and again.... I recently did some calculations and it turned out that in 10 years the supplementary pay for those sick, the costs of treatment ...They required hospital treatment for a month and a half and 3 weeks respectively. At the Pal Heim Hospital "We have treated three small children...instruments showed an accumulation of lead in urine samples. After treatment, we released them in good condition. We received another three children during
Aditya, Kaustav; Sud, U. C.
2018-01-01
Poverty affects many people, but its ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important parameter of the population for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage, small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic levels, such as the national and state level. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. In many instances, estimates are required for areas of the population for which the survey providing the data was unplanned. For areas with small sample sizes, direct survey estimation of population characteristics based only on the data available from the particular area then tends to be unreliable. This paper describes an application of the small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at the district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011–12 of NSSO and the Population Census 2011. The results show that the district-level estimates generated by the SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable. PMID:29879202
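The gain from borrowing strength across areas can be illustrated with a simple variance-weighted composite of a direct and a synthetic estimate. This is a textbook-style sketch, not the model-based SAE method the paper applies, and all numbers below are made up:

```python
def composite_estimate(direct, var_direct, synthetic, var_synthetic):
    """Variance-weighted composite of a direct survey estimate and a
    model-based synthetic estimate (weights assume independent errors)."""
    w = var_synthetic / (var_direct + var_synthetic)   # weight on the direct estimate
    est = w * direct + (1.0 - w) * synthetic
    var = w ** 2 * var_direct + (1.0 - w) ** 2 * var_synthetic
    return est, var

# Hypothetical district: unstable direct estimate, stable synthetic estimate.
est, var = composite_estimate(direct=0.42, var_direct=0.010,
                              synthetic=0.30, var_synthetic=0.002)
```

The composite lands between the two inputs and has a smaller variance than either, which is exactly the precision gain small area estimation seeks for districts with thin samples.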
Zettl, Thomas; Mathew, Rebecca S.; Seifert, Sönke; ...
2016-05-31
Accurate determination of molecular distances is fundamental to understanding the structure, dynamics, and conformational ensembles of biological macromolecules. Here we present a method to determine the full distance distribution between small (~7 Å) gold labels attached to macromolecules with very high precision (≤1 Å) and on an absolute distance scale. Our method uses anomalous small-angle X-ray scattering (ASAXS) close to a gold absorption edge to separate the gold-gold interference pattern from other scattering contributions. Results for 10-30 bp DNA constructs achieve excellent signal-to-noise and are in good agreement with previous results obtained by single-energy SAXS measurements, without requiring the preparation and measurement of singly labeled and unlabeled samples. Finally, the use of small gold labels in combination with ASAXS readout provides an attractive approach to determining molecular distance distributions that will be applicable to a broad range of macromolecular systems.
NASA Technical Reports Server (NTRS)
1976-01-01
The author has identified the following significant results. Results indicated that the LANDSAT data and the classification technology can estimate the small grains area within a sample segment accurately and reliably enough to meet the LACIE goals. Overall, the LACIE estimates in a 9 x 11 kilometer segment agree well with ground- and aircraft-determined area within these segments. The estimated c.v. of the random classification error was acceptably small. These analyses confirmed that bias introduced by various factors, such as LANDSAT spatial resolution, lack of spectral resolution, classifier bias, and repeatability, was not excessive in terms of the required performance criterion. Results of these tests did indicate difficulty in differentiating wheat from other closely related small grains. However, satisfactory wheat area estimates were obtained by reducing the small grain area estimates in accordance with the relative amounts of these crops as determined from historic data; these procedures are being further refined.
Matching algorithm of missile tail flame based on back-propagation neural network
NASA Astrophysics Data System (ADS)
Huang, Da; Huang, Shucai; Tang, Yidong; Zhao, Wei; Cao, Wenhuan
2018-02-01
This work presents a spectral matching algorithm for missile plume detection based on a back-propagation neural network. The radiation values of the characteristic spectral bands of the missile tail flame are taken as the input of the network. The network's structure, including the number of nodes and layers, is determined by the number of characteristic spectral bands and missile types. The weight matrices and threshold vectors are obtained by training the network on training samples, and the performance of the network is assessed on test samples. Because only a small amount of data is needed, the network has the advantages of simple structure and practicality: the weight matrices and threshold vectors alone complete the spectrum-matching task without the support of a large database, and the network meets real-time requirements with a small quantity of data. Experimental results show that the algorithm matches spectra precisely and is strongly robust.
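The pipeline described (characteristic-band radiances in, missile type out, with weights and thresholds fixed after training) can be sketched as a small feed-forward pass. The layer sizes, random weights, and softmax readout below are illustrative assumptions, not the authors' trained network:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """One hidden layer: tanh activation, softmax output over missile types."""
    h = np.tanh(W1 @ x + b1)       # hidden layer
    z = W2 @ h + b2                # output scores, one per missile type
    e = np.exp(z - z.max())        # numerically stable softmax
    return e / e.sum()

# Hypothetical dimensions: 8 characteristic spectral bands, 3 missile types.
rng = np.random.default_rng(0)
n_bands, n_hidden, n_types = 8, 6, 3
W1, b1 = rng.normal(size=(n_hidden, n_bands)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_types, n_hidden)), np.zeros(n_types)

spectrum = rng.uniform(size=n_bands)    # measured band radiances (stand-in data)
p = forward(spectrum, W1, b1, W2, b2)   # class probabilities
match = int(np.argmax(p))               # best-matching missile type
```

Once trained, such a matcher needs only the small weight matrices and threshold (bias) vectors at run time, which is what makes it attractive without a large spectral database behind it.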
Light chain typing of immunoglobulins in small samples of biological material
Rádl, J.
1970-01-01
A method is described for the typing of the light chains of immunoglobulins in small samples of sera or external secretions, without their previous isolation. It consists of immunoelectrophoresis in agar plates containing specific antisera against one of the light chain types. All immunoglobulins of this type are thus selected by precipitation in the central area during the electrophoretic phase. Immunoglobulins of the opposite light chain type diffuse through the agar and react with the class-specific antisera from the troughs, producing precipitin lines as in conventional immunoelectrophoresis. This technique has proved most useful for typing heterogeneous or homogeneous immunoglobulins at normal and low concentrations. The antisera used for incorporation in the agar should fulfil special requirements: they should contain a high level of antibodies against common surface determinants of the immunoglobulin light chains. Further possibilities of this immunoselection technique for typing different protein mixtures are discussed. PMID:4098592
NASA Astrophysics Data System (ADS)
Chen, Yu-Chih; Cheng, Yu-Heng; Ingram, Patrick; Yoon, Euisik
2016-06-01
Proteolytic degradation of the extracellular matrix (ECM) is critical in cancer invasion, and recent work suggests that heterogeneous cancer populations cooperate in this process. Despite the importance of cell heterogeneity, conventional proteolytic assays measure average activity, requiring thousands of cells and providing limited information about heterogeneity and dynamics. Here, we developed a microfluidic platform that provides high-efficiency cell loading and simple valveless isolation, so the proteolytic activity of a small sample (10-100 cells) can be easily characterized. Combined with a single cell derived (clonal) sphere formation platform, we have successfully demonstrated the importance of microenvironmental cues for proteolytic activity and also investigated the difference between clones. Furthermore, the platform allows monitoring single cells at multiple time points, unveiling different cancer cell line dynamics in proteolytic activity. The presented tool facilitates single cell proteolytic analysis using small samples, and our findings illuminate the heterogeneous and dynamic nature of proteolytic activity.
Mars Sample Return and Flight Test of a Small Bimodal Nuclear Rocket and ISRU Plant
NASA Technical Reports Server (NTRS)
George, Jeffrey A.; Wolinsky, Jason J.; Bilyeu, Michael B.; Scott, John H.
2014-01-01
A combined Nuclear Thermal Rocket (NTR) flight test and Mars Sample Return (MSR) mission is explored as a means of "jump-starting" NTR development. Development of a small-scale engine with relevant fuel and performance could more affordably and quickly "pathfind" the way to larger scale engines. A flight test with subsequent in-flight postirradiation evaluation may also be more affordable and expedient compared to ground testing and its associated facilities and approvals. Mission trades and a reference scenario based upon a single expendable launch vehicle (ELV) are discussed. A novel "single stack" spacecraft/lander/ascent vehicle concept is described, configured around a "top-mounted" downward-firing NTR, reusable common tank, and "bottom-mounted" bus, payload, and landing gear. Requirements are described for a hypothetical NTR engine capable of direct thermal propulsion with either hydrogen or methane propellant, and of modest electrical power generation during cruise and Mars surface in situ resource utilization (ISRU) propellant production.
Optical determination of crystal phase in semiconductor nanocrystals
Lim, Sung Jun; Schleife, André; Smith, Andrew M.
2017-01-01
Optical, electronic and structural properties of nanocrystals fundamentally derive from crystal phase. This is especially important for polymorphic II–VI, III–V and I-III-VI2 semiconductor materials such as cadmium selenide, which exist as two stable phases, cubic and hexagonal, each with distinct properties. However, standard crystallographic characterization through diffraction yields ambiguous phase signatures when nanocrystals are small or polytypic. Moreover, diffraction methods are low-throughput, incompatible with solution samples and require large sample quantities. Here we report the identification of unambiguous optical signatures of cubic and hexagonal phases in II–VI nanocrystals using absorption spectroscopy and first-principles electronic-structure theory. High-energy spectral features allow rapid identification of phase, even in small nanocrystals (∼2 nm), and may help predict polytypic nanocrystals from differential phase contributions. These theoretical and experimental insights provide simple and accurate optical crystallographic analysis for liquid-dispersed nanomaterials, to improve the precision of nanocrystal engineering and improve our understanding of nanocrystal reactions. PMID:28513577
Leroux, M.; Kihlstrom, K. J.; Holleis, S.; ...
2015-11-09
Here, we demonstrate that 3.5-MeV oxygen irradiation can markedly enhance the in-field critical current of commercial second-generation superconducting tapes with an exposure time of just 1 s per 0.8 cm². Furthermore, we demonstrate that this speed is now at the level required for industrial reel-to-reel post-processing. The irradiation is made on production-line samples through the protective silver coating and does not require any modification of the growth process. From TEM imaging, we identify small clusters as the main source of increased vortex pinning.
Technical assistance for law-enforcement communications: Case study report two
NASA Technical Reports Server (NTRS)
Reilly, N. B.; Mustain, J. A.
1979-01-01
Two case histories are presented. In one study the feasibility of consolidating dispatch center operations for small agencies is considered. System load measurements were taken and queueing analysis applied to determine numbers of personnel required for each separate agency and for a consolidated dispatch center. Functional requirements were developed and a cost model was designed to compare relative costs of various alternatives including continuation of the present system, consolidation of a manual system, and consolidated computer-aided dispatching. The second case history deals with the consideration of a multi-regional, intrastate radio frequency for improved interregional communications. Sample standards and specifications for radio equipment are provided.
Advanced sampling techniques for hand-held FT-IR instrumentation
NASA Astrophysics Data System (ADS)
Arnó, Josep; Frunzi, Michael; Weber, Chris; Levy, Dustin
2013-05-01
FT-IR spectroscopy is the technology of choice to identify solid- and liquid-phase unknown samples. The challenging ConOps in emergency response and military field applications require a significant redesign of the stationary FT-IR bench-top instruments typically used in laboratories. Specifically, field-portable units require high levels of resistance against mechanical shock and chemical attack, ease of use in restrictive gear, extreme reliability, quick and easy interpretation of results, and reduced size. In the last 20 years, FT-IR instruments have been re-engineered to fit in small suitcases for field-portable use and recently further miniaturized for handheld operation. This article introduces the HazMatID™ Elite, an FT-IR instrument designed to balance the portability advantages of a handheld device with the performance challenges associated with miniaturization. In this paper, special focus will be given to the HazMatID Elite's sampling interfaces, optimized to collect and interrogate different types of samples: accumulated material using the on-board ATR press, dispersed powders using the ClearSampler™ tool, and the touch-to-sample sensor for direct liquid sampling. The application of the novel sample swipe accessory (ClearSampler) to collect material from surfaces will be discussed in some detail. The accessory was tested and evaluated for the detection of explosive residues before and after detonation. Experimental results derived from these investigations will be described in an effort to outline the advantages of this technology over existing sampling methods.
Opportunities and Challenges of Linking Scientific Core Samples to the Geoscience Data Ecosystem
NASA Astrophysics Data System (ADS)
Noren, A. J.
2016-12-01
Core samples generated in scientific drilling and coring are critical for the advancement of the Earth Sciences. The scientific themes enabled by analysis of these samples are diverse, and include plate tectonics, ocean circulation, Earth-life system interactions (paleoclimate, paleobiology, paleoanthropology), Critical Zone processes, geothermal systems, deep biosphere, and many others, and substantial resources are invested in their collection and analysis. Linking core samples to researchers, datasets, publications, and funding agencies through registration of globally unique identifiers such as International Geo Sample Numbers (IGSNs) offers great potential for advancing several frontiers. These include maximizing sample discoverability, access, reuse, and return on investment; a means for credit to researchers; and documentation of project outputs to funding agencies. Thousands of kilometers of core samples and billions of derivative subsamples have been generated through thousands of investigators' projects, yet the vast majority of these samples are curated at only a small number of facilities. These numbers, combined with the substantial similarity in sample types, make core samples a compelling target for IGSN implementation. However, differences between core sample communities and other geoscience disciplines continue to create barriers to implementation. Core samples involve parent-child relationships spanning 8 or more generations, an exponential increase in sample numbers between levels in the hierarchy, concepts related to depth/position in the sample, requirements for associating data derived from core scanning and lithologic description with data derived from subsample analysis, and publications based on tens of thousands of co-registered scan data points and thousands of analyses of subsamples. 
These characteristics require specialized resources for accurate and consistent assignment of IGSNs, and a community of practice to establish norms, workflows, and infrastructure to support implementation.
Rain Erosion Studies of Sapphire, Aluminum Oxynitride, Spinel, Lanthana- Doped Yttria, and TAF Glass
1990-07-01
small, there is little change in average scatter for any material in any test. CONCLUSIONS AND DISCUSSION The principal conclusions are 1. ALON... [Table residue: per-sample erosion-damage observations, including pitting, cratering, small edge fractures, and sample breakage.]
Paper-based Devices for Isolation and Characterization of Extracellular Vesicles
Chen, Chihchen; Lin, Bo-Ren; Hsu, Min-Yen; Cheng, Chao-Min
2015-01-01
Extracellular vesicles (EVs), membranous particles released from various types of cells, hold great potential for clinical applications. They contain nucleic acid and protein cargo and are increasingly recognized as a means of intercellular communication utilized by both eukaryotic and prokaryotic cells. However, due to their small size, current protocols for isolation of EVs are often time consuming, cumbersome, and require large sample volumes and expensive equipment, such as an ultracentrifuge. To address these limitations, we developed a paper-based immunoaffinity platform for separating subgroups of EVs that is easy, efficient, and requires sample volumes as low as 10 μl. Biological samples can be pipetted directly onto paper test zones that have been chemically modified with capture molecules that have high affinity to specific EV surface markers. We validate the assay by using scanning electron microscopy (SEM), paper-based enzyme-linked immunosorbent assays (P-ELISA), and transcriptome analysis. These paper-based devices will enable the study of EVs in the clinic and the research setting to help advance our understanding of EV functions in health and disease. PMID:25867034
Bayesian focalization: quantifying source localization with environmental uncertainty.
Dosso, Stan E; Wilmut, Michael J
2007-05-01
This paper applies a Bayesian formulation to study ocean acoustic source localization as a function of uncertainty in environmental properties (water column and seabed) and of data information content [signal-to-noise ratio (SNR) and number of frequencies]. The approach follows that of the optimum uncertain field processor [A. M. Richardson and L. W. Nolte, J. Acoust. Soc. Am. 89, 2280-2284 (1991)], in that localization uncertainty is quantified by joint marginal probability distributions for source range and depth integrated over uncertain environmental properties. The integration is carried out here using Metropolis Gibbs' sampling for environmental parameters and heat-bath Gibbs' sampling for source location to provide efficient sampling over complicated parameter spaces. The approach is applied to acoustic data from a shallow-water site in the Mediterranean Sea where previous geoacoustic studies have been carried out. It is found that reliable localization requires a sufficient combination of prior (environmental) information and data information. For example, sources can be localized reliably for single-frequency data at low SNR (-3 dB) only with small environmental uncertainties, whereas successful localization with large environmental uncertainties requires higher SNR and/or multifrequency data.
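The marginalization step described above (sampling environmental parameters and source location, then forming joint marginal distributions for source range and depth) rests on Metropolis-type sampling. A minimal single-parameter sketch, with a toy Gaussian posterior standing in for the real acoustic model, is:

```python
import math
import random

def metropolis(logpost, x0, step, n):
    """Minimal Metropolis sampler: random-walk proposals, accept/reject
    on the log-posterior ratio, returning the chain of samples."""
    x, lp, samples = x0, logpost(x0), []
    for _ in range(n):
        xp = x + random.gauss(0.0, step)         # propose a move
        lpp = logpost(xp)
        if math.log(random.random()) < lpp - lp:  # Metropolis acceptance rule
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy posterior: source range ~ N(5 km, 0.5 km), purely for illustration.
random.seed(1)
chain = metropolis(lambda r: -0.5 * ((r - 5.0) / 0.5) ** 2, 4.0, 0.3, 20000)
mean_range = sum(chain) / len(chain)   # approximate posterior mean of range
```

In the paper's setting the chain would run over environmental parameters as well, and the histogram of the location samples gives the marginal localization uncertainty.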
A passive guard for low thermal conductivity measurement of small samples by the hot plate method
NASA Astrophysics Data System (ADS)
Jannot, Yves; Degiovanni, Alain; Grigorova-Moutiers, Veneta; Godefroy, Justine
2017-01-01
Hot plate methods under steady-state conditions are based on a 1D model to estimate the thermal conductivity, using measurements of the temperatures T0 and T1 of the two sides of the sample and of the heat flux crossing it. To be consistent with the hypothesis of a 1D heat flux, either a guarded hot plate apparatus is used, or the temperature is measured at the centre of the sample. The latter method can be used only if the thickness/width ratio of the sample is sufficiently low, while the guarded hot plate method requires large-width samples (typical cross-section of 0.6 × 0.6 m²). That is why neither method can be used for small-width samples. The method presented in this paper is based on an optimal choice of the temperatures T0 and T1 relative to the ambient temperature Ta, enabling the estimation of the thermal conductivity with a centered hot plate method by applying the 1D heat flux model. It is shown that these optimal values do not depend on the size or the thermal conductivity of the samples (in the range 0.015-0.2 W m⁻¹ K⁻¹), but only on Ta. The experimental results obtained validate the method for several reference samples for thickness/width ratios up to 0.3, thus enabling the measurement of the thermal conductivity of samples having a small cross-section, down to 0.045 × 0.045 m².
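The underlying 1D steady-state relation is the standard hot plate estimate, lambda = q·e/(T1 − T0), with q the heat flux density crossing the sample and e its thickness. A one-line sketch with made-up readings (the numbers are illustrative, not from the paper):

```python
def thermal_conductivity(q, thickness, T1, T0):
    """1D steady-state hot plate estimate: lambda = q * e / (T1 - T0).
    q: heat flux density (W/m^2), thickness: sample thickness e (m),
    T1, T0: hot- and cold-side temperatures (K or degrees C)."""
    return q * thickness / (T1 - T0)

# Hypothetical reading: 40 W/m^2 across a 20 mm sample with a 10 K difference.
lam = thermal_conductivity(40.0, 0.020, 35.0, 25.0)   # W/(m K)
```

The paper's contribution is then choosing T0 and T1 relative to Ta so that this 1D formula remains valid for a small, unguarded sample.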
Nakagawa, Seiji
2011-04-01
Mechanical properties (seismic velocities and attenuation) of geological materials are often frequency dependent, which necessitates measurements of the properties at frequencies relevant to a problem at hand. Conventional acoustic resonant bar tests allow measuring seismic properties of rocks and sediments at sonic frequencies (several kilohertz) that are close to the frequencies employed for geophysical exploration of oil and gas resources. However, the tests require a long, slender sample, which is often difficult to obtain from the deep subsurface or from weak and fractured geological formations. In this paper, an alternative measurement technique to conventional resonant bar tests is presented. This technique uses only a small, jacketed rock or sediment core sample mediating a pair of long, metal extension bars with attached seismic source and receiver-the same geometry as the split Hopkinson pressure bar test for large-strain, dynamic impact experiments. Because of the length and mass added to the sample, the resonance frequency of the entire system can be lowered significantly, compared to the sample alone. The experiment can be conducted under elevated confining pressures up to tens of MPa and temperatures above 100 °C, and concurrently with x-ray CT imaging. The described split Hopkinson resonant bar test is applied in two steps. First, extension and torsion-mode resonance frequencies and attenuation of the entire system are measured. Next, numerical inversions for the complex Young's and shear moduli of the sample are performed. One particularly important step is the correction of the inverted Young's moduli for the effect of sample-rod interfaces. Examples of the application are given for homogeneous, isotropic polymer samples, and a natural rock sample. © 2011 American Institute of Physics
Gras, Ronda; Luong, Jim; Shellie, Robert A
2015-11-17
We introduce a technique for the direct measurement of elemental mercury in light hydrocarbons such as natural gas. We determined elemental mercury at the parts-per-trillion level with high precision [<3% RSD (n = 20, manual injection)] using gas chromatography with ultraviolet photometric detection (GC-UV) at 254 nm. Our approach requires a small sample volume (1 mL) and does not rely on any form of sample preconcentration. The GC-UV separation employs an inert divinylbenzene porous layer open tubular column set to separate mercury from other components in the sample matrix. We incorporated a 10-port gas-sampling valve in the GC-UV system, which enables automated sampling, as well as back-flushing capability to enhance system cleanliness and sample throughput. Total analysis time is <2 min, and the procedure is linear over a range of 2-83 μg/m³ [correlation coefficient R² = 0.998] with a measured recovery of >98% over this range.
Seven common mistakes in population genetics and how to avoid them.
Meirmans, Patrick G
2015-07-01
As the data resulting from modern genotyping tools are astoundingly complex, genotyping studies require great care in the sampling design, genotyping, data analysis and interpretation. Such care is necessary because, with data sets containing thousands of loci, small biases can easily become strongly significant patterns. Such biases may already be present in routine tasks that occur in almost every genotyping study. Here, I discuss seven common mistakes that are frequently encountered in the genotyping literature: (i) giving more attention to genotyping than to sampling, (ii) failing to perform or report experimental randomization in the laboratory, (iii) equating geopolitical borders with biological borders, (iv) testing significance of clustering output, (v) misinterpreting Mantel's r statistic, (vi) only interpreting a single value of K and (vii) forgetting that only a small portion of the genome will be associated with climate. For each of these issues, I give suggestions on how to avoid the mistake. Overall, I argue that genotyping studies would benefit from establishing a more rigorous experimental design, involving proper sampling design, randomization and better distinction of a priori hypotheses and exploratory analyses. © 2015 John Wiley & Sons Ltd.
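Mistake (ii), skipping laboratory randomization, is cheap to avoid: shuffling samples with a recorded seed before plate assignment keeps population of origin from being confounded with plate or run order. A minimal sketch (the sample names and seed are hypothetical):

```python
import random

def randomized_plate_order(sample_ids, seed=42):
    """Shuffle samples before assigning them to genotyping plates so that
    population of origin is not confounded with plate or run order.
    Recording the seed makes the randomization reportable and reproducible."""
    order = list(sample_ids)
    random.Random(seed).shuffle(order)
    return order

# Two hypothetical populations, four samples each, interleaved at random.
order = randomized_plate_order([f"pop{p}_{i}" for p in (1, 2) for i in range(4)])
```

Reporting the seed and procedure in the methods section addresses the "report" half of the mistake as well.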
Rochelle-Newall, Emma J; Ribolzi, Olivier; Viguier, Marion; Thammahacksa, Chanthamousone; Silvera, Norbert; Latsachack, Keooudone; Dinh, Rinh Pham; Naporn, Piyapong; Sy, Hai Tran; Soulileuth, Bounsamay; Hmaimum, Nikom; Sisouvanh, Pem; Robain, Henri; Janeau, Jean-Louis; Valentin, Christian; Boithias, Laurie; Pierret, Alain
2016-09-08
Lack of access to clean water and adequate sanitation continues to be a major brake on development. Here we present the results of a 12-month investigation into the dynamics of Escherichia coli, a commonly used indicator of faecal contamination in water supplies, in three small, rural catchments in Laos, Thailand and Vietnam. We show that land use and hydrology are major controlling factors of E. coli concentrations in streamwater and that the relative importance of these two factors varies between the dry and wet seasons. In all three catchments, the highest concentrations were observed during the wet season, when storm events and overland flow were highest. However, smaller peaks of E. coli concentration were also observed during the dry season. The latter correspond to periods of intense farming activities and small, episodic rain events. Furthermore, vegetation type, through land use and soil surface crusting, combined with mammalian presence, plays an important role in determining E. coli loads in the streams. Finally, sampling during stormflow revealed the importance of having appropriate sampling protocols if information on maximum contamination levels is required, as grab sampling at a fixed time step may miss important peaks in E. coli numbers.
Applied 3D printing for microscopy in health science research
NASA Astrophysics Data System (ADS)
Brideau, Craig; Zareinia, Kourosh; Stys, Peter
2015-03-01
The rapid prototyping capability offered by 3D printing is considered advantageous for commercial applications. However, the ability to quickly produce precision custom devices is highly beneficial in the research laboratory setting as well. Biological laboratories require the manipulation and analysis of delicate living samples, so the ability to create custom holders, support equipment, and adapters allows the extension of existing laboratory machines. Applications include camera adapters and stage sample holders for microscopes, surgical guides for tissue preparation, and small precision tools customized to unique specifications. Where high precision is needed, especially the reproduction of fine features, a printer with high resolution is required. However, cheaper, lower-resolution commercial printers have been shown to be more than adequate for less demanding projects. For direct manipulation of delicate samples, biocompatible raw materials are often required, complicating the printing process. This paper will examine some examples of 3D-printed objects for laboratory use and provide an overview of the requirements for 3D printing in this application. Materials, printing resolution, production, and ease of use will all be reviewed with an eye to producing better printers and techniques for laboratory applications. Specific case studies will highlight applications of 3D-printed devices in live animal imaging for both microscopy and Magnetic Resonance Imaging.
Isotope effect of mercury diffusion in air
Koster van Groos, Paul G.; Esser, Bradley K.; Williams, Ross W.; Hunt, James R.
2014-01-01
Identifying and reducing impacts from mercury sources in the environment remains a considerable challenge and requires process based models to quantify mercury stocks and flows. The stable isotope composition of mercury in environmental samples can help address this challenge by serving as a tracer of specific sources and processes. Mercury isotope variations are small and result only from isotope fractionation during transport, equilibrium, and transformation processes. Because these processes occur in both industrial and environmental settings, knowledge of their associated isotope effects is required to interpret mercury isotope data. To improve the mechanistic modeling of mercury isotope effects during gas phase diffusion, an experimental program tested the applicability of kinetic gas theory. Gas-phase elemental mercury diffusion through small bore needles from finite sources demonstrated mass dependent diffusivities leading to isotope fractionation described by a Rayleigh distillation model. The measured relative atomic diffusivities among mercury isotopes in air are large and in agreement with kinetic gas theory. Mercury diffusion in air offers a reasonable explanation of recent field results reported in the literature. PMID:24364380
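The kinetic-gas-theory prediction and the Rayleigh distillation model invoked above can be written out directly. The sketch below uses the reduced mass of the isotope-air pair for the diffusivity ratio; the isotope masses (198Hg vs 202Hg) and reservoir fraction are chosen purely for illustration:

```python
import math

def alpha_diffusion(m_light, m_heavy, m_air=28.96):
    """Kinetic-gas-theory fractionation factor for diffusion in air:
    alpha = D_heavy / D_light = sqrt(mu_light / mu_heavy),
    where mu is the reduced mass of the isotope-air pair."""
    mu = lambda m: m * m_air / (m + m_air)
    return math.sqrt(mu(m_light) / mu(m_heavy))

def rayleigh(R0, f, alpha):
    """Rayleigh distillation: isotope ratio of the residual reservoir
    after a fraction f of the original material remains."""
    return R0 * f ** (alpha - 1.0)

a = alpha_diffusion(198.0, 202.0)   # 198Hg vs 202Hg diffusing through air
R = rayleigh(1.0, 0.5, a)           # heavy/light ratio after half has escaped
```

Because alpha < 1 (the heavier isotope diffuses more slowly), the residual reservoir becomes enriched in the heavy isotope (R > R0), which is the mass-dependent signature the experiments resolved.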
Fully-reversible optical sensor for hydrogen peroxide with fast response.
Ding, Longjiang; Chen, Siyu; Zhang, Wei; Zhang, Yinglu; Wang, Xu-Dong
2018-05-09
A fully reversible optical sensor for hydrogen peroxide with fast response is presented. The sensor was fabricated by in-situ growth of ultra-small platinum nanoparticles (PtNPs) inside the pores of fibrous silica particles (KCC-1). The nanocomposite was then embedded into a hydrogel matrix to form a sensing layer; the immobilized PtNPs catalytically convert hydrogen peroxide into molecular oxygen, which is measured via a luminescence-quenching-based oxygen sensor underneath. Owing to the high porosity and permeability of KCC-1 and the high local concentration of PtNPs, the sensor exhibits fast response (less than 1 min) and full reversibility. The measurement range of the sensor covers 1.0 μM to 10.0 mM, and only a very small sample volume (200 μL) is required per measurement. Because of its high stability, excellent reversibility and selectivity, and extremely fast response, the sensor could fulfill industry requirements for real-time measurement and fill a gap in the market.
Whispering gallery mode resonators for rapid label-free biosensing in small volume droplets.
Wildgen, Sarah M; Dunn, Robert C
2015-03-23
Rapid biosensing requires fast mass transport of the analyte to the surface of the sensing element. To optimize analysis times, both mass transport in solution and the geometry and size of the sensing element need to be considered. Small dielectric spheres, tens of microns in diameter, can act as label-free biosensors using whispering gallery mode (WGM) resonances. WGM resonances are sensitive to the effective refractive index, which changes upon analyte binding to recognition sites on functionalized resonators. The spherical geometry and tens of microns diameter of these resonators provides an efficient target for sensing while their compact size enables detection in limited volumes. Here, we explore conditions leading to rapid analyte detection using WGM resonators as label-free sensors in 10 μL sample droplets. Droplet evaporation leads to potentially useful convective mixing, but also limits the time over which analysis can be completed. We show that active droplet mixing combined with initial binding rate measurements is required for accurate nanomolar protein quantification within the first minute following injection.
NASA Astrophysics Data System (ADS)
Dirilgen, Tara; Juceviča, Edite; Melecis, Viesturs; Querner, Pascal; Bolger, Thomas
2018-01-01
The relative importance of niche separation, non-equilibrial and neutral models of community assembly has been a theme in community ecology for many decades, with none appearing to be applicable under all circumstances. In this study, Collembola species abundances were recorded over eleven consecutive years in a spatially explicit grid and used to examine (i) whether observed beta diversity differed from that expected under conditions of neutrality, (ii) whether sampling points differed in their relative contributions to overall beta diversity, and (iii) the number of samples required to provide comparable estimates of species richness across three forest sites. Neutrality could not be rejected for 26 of the forest-by-year combinations. However, there is a trend toward greater structure in the oldest forest, where beta diversity was greater than predicted by neutrality on five of the eleven sampling dates. The lack of difference in individual- and sample-based rarefaction curves also suggests randomness in the system at this particular scale of investigation. It seems that Collembola communities are not spatially aggregated and that assembly is driven primarily by neutral processes, particularly in the two younger sites. Whether this finding is due to small sample size or unaccounted-for environmental variables cannot be determined. Variability between dates and sites illustrates the potential of drawing incorrect conclusions if data are collected at a single site and a single point in time.
Model-Based Design of Long-Distance Tracer Transport Experiments in Plants.
Bühler, Jonas; von Lieres, Eric; Huber, Gregor J
2018-01-01
Studies of long-distance transport of tracer isotopes in plants offer a high potential for functional phenotyping, but so far measurement time is a bottleneck because continuous time series of at least 1 h are required to obtain reliable estimates of transport properties. Hence, usual throughput values are between 0.5 and 1 samples h⁻¹. Here, we propose to increase sample throughput by introducing temporal gaps in the data acquisition of each plant sample and measuring multiple plants one after another in a rotating scheme. In contrast to common time-series analysis methods, mechanistic tracer transport models allow the analysis of interrupted time series. The uncertainties of the model parameter estimates are used as a measure of how much information was lost compared to complete time series. A case study was set up to systematically investigate different experimental schedules for different throughput scenarios ranging from 1 to 12 samples h⁻¹. Selected designs with relatively few data points were found to be sufficient for adequate parameter estimation, implying that the presented approach enables a substantial increase in sample throughput. The presented general framework for automated generation and evaluation of experimental schedules allows the determination of a maximal sample throughput and the respective optimal measurement schedule, depending on the required statistical reliability of data acquired by future experiments.
English, Karin; Chen, Yiqin; Toms, Leisa-Maree; Jagals, Paul; Ware, Robert S; Mueller, Jochen F; Sly, Peter D
2017-10-01
The aim of our study was to investigate children's exposure to the flame retardants polybrominated diphenyl ethers (PBDEs) by analysing faecal content, a non-invasive matrix, as well as responses to an exposure-assessment questionnaire. A convenience sample of 61 parents with children (aged >3 months to <2 years) completed an online pre-tested questionnaire and provided faecal samples for analysis by high-resolution gas chromatography/mass spectrometry. BDE-209 was the dominant congener in faecal samples at 8.3 ng/g dry weight (dw), with >80% of samples above the limit of detection (LOD). BDE-47 (0.23 ng/g dw) and BDE-153 (0.03 ng/g dw) were each detected above the LOD in approximately 60% of samples. Age was associated with BDE-47 (-7%/month) and BDE-153 (-12%/month) concentrations in faeces, but not BDE-209. Other variables associated with PBDE concentrations included features of the home (carpet, pets) and behaviour (hand-to-mouth, removing shoes, using a car sunshade, frequency of walks outdoors). However, given the small sample size of this study, additional research is required to confirm these findings. In this study we demonstrated that faeces may be a viable alternative matrix for monitoring human exposure to PBDEs, but further validation studies are required. Copyright © 2017. Published by Elsevier Inc.
The Indigo V Indian Ocean Expedition: a prototype for citizen microbial oceanography
NASA Astrophysics Data System (ADS)
Lauro, Federico; Senstius, Jacob; Cullen, Jay; Lauro, Rachelle; Neches, Russell; Grzymski, Joseph
2014-05-01
Microbial oceanography has long been an extremely expensive discipline, requiring ship time for sample collection and thereby economically constraining the number of samples collected. This is especially true for under-sampled water bodies such as the Indian Ocean. Specialised scientific equipment only adds to the costs. Moreover, long-term monitoring of microbial communities and large-scale modelling of global biogeochemical cycles require the collection of high-density data, both temporally and spatially, in a cost-effective way. Thousands of private ocean-going vessels are cruising the world's oceans every day. We believe that a combination of new technologies, appropriate laboratory protocols and strategic operational partnerships will allow researchers to broaden the scope of participation in basic oceanographic research. This will be achieved by equipping sailing vessels with small, satcom-equipped sampling devices, user-friendly collection techniques and a 'pre-addressed-stamped-envelope' to send in the samples for analysis. We aim to prove that 'bigger' is not necessarily 'better' and that the key to greater understanding of the world's oceans is to forge the way to easier and cheaper sample acquisition. The ultimate goal of the Indigo V Expedition is to create a working blueprint for 'citizen microbial oceanography'. We will present the preliminary outcomes of the first Indigo V expedition, from Cape Town to Singapore, highlighting the challenges and opportunities of such endeavours.
Micrometeoroid and Lunar Secondary Ejecta Flux Measurements: Comparison of Three Acoustic Systems
NASA Technical Reports Server (NTRS)
Corsaro, R. D.; Giovane, F.; Liou, Jer-Chyi; Burtchell, M.; Pisacane, V.; Lagakos, N.; Williams, E.; Stansbery, E.
2010-01-01
This report examines the inherent capability of three large-area acoustic sensor systems and their applicability to micrometeoroid (MM) and lunar secondary ejecta (SE) detection and characterization for future lunar exploration activities. Discussion is limited to instruments that can be fabricated and deployed with low resource requirements. Previously deployed impact detection probes typically have instrumented capture areas of less than 0.2 square meters. Since the particle flux decreases rapidly with increasing particle size, such small-area sensors rarely encounter particles in the size range above 50 microns, and even their sampling of the population above 10 microns is typically limited. Characterizing the sparse dust population in the size range above 50 microns requires a very large-area capture instrument. However, it is also important that such an instrument simultaneously measure the population of the smaller particles, so as to provide a complete instantaneous snapshot of the population. For lunar or planetary surface studies, the system constraints are significant. The instrument must be as large as possible to sample the population of the largest MM. This is needed to reliably assess the particle impact risks and to develop cost-effective shielding designs for habitats, astronauts, and critical instruments. The instrument should also have very high sensitivity to measure the flux of small and slow SE particles, as the SE environment is currently poorly characterized and poses a contamination risk to machinery and personnel involved in exploration. Deployment also requires that the instrument add very little additional mass to the spacecraft. Three acoustic systems are being explored for this application.
Merit-Based Incentive Payment System Participation: Radiologists Can Run but Cannot Hide.
Rosenkrantz, Andrew B; Goldberg, Julia E; Duszak, Richard; Nicola, Gregory N
2018-04-01
To optimize the flexibility and relevancy of its Merit-Based Incentive Payment System (MIPS), CMS exempts selected physicians and groups from participation and grants others relaxed reporting requirements. We assess the practical implications of such special status determinations. For a random sample of 1,000 Medicare-participating radiologists, the CMS MIPS Participation Lookup Tool was manually searched. Individual radiologists' and associated groups' participation requirements and special statuses were assessed. Although only 55% of radiologists were required to participate in MIPS as individuals when considering only one associated taxpayer identification number (TIN), 83% were required to participate as individuals when considering all associated TINs. When using the group reporting option, 97% of radiology groups were required to participate. High participation requirements persisted across generalist and subspecialist radiologists, small and rural practices, and both academic and nonacademic practices. Non-patient-facing and hospital-based statuses were assigned to high fractions of individual radiologists (91% and 71%, respectively), but much lower fractions of group practices (72% and 25%). Rural and health professional shortage area statuses were assigned to higher percentages of groups (27% and 39%) than individuals (13% and 23%). Small practice status was assigned to 22% of individuals versus 16% of groups. Although not apparent if only considering individual radiologist-TIN combinations, the overwhelming majority of radiologists will be required to participate in MIPS, at the individual or group level. Radiology groups are strongly encouraged to review their physicians' MIPS participation requirements and special statuses to ensure optimal performance scores and payment bonuses. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Response Variability in Commercial MOSFET SEE Qualification
George, J. S.; Clymer, D. A.; Turflinger, T. L.; ...
2016-12-01
Single-event effects (SEE) evaluation of five different part types of next-generation, commercial trench MOSFETs indicates large part-to-part variation in determining a safe operating area (SOA) for drain-source voltage (VDS) following a test campaign that exposed >50 samples per part type to heavy ions. These results suggest a determination of an SOA using small sample sizes may fail to capture the full extent of the part-to-part variability. An example method is discussed for establishing a safe operating area using a one-sided statistical tolerance limit based on the number of test samples. Finally, burn-in is shown to be a critical factor in reducing part-to-part variation in part response. Implications for radiation qualification requirements are also explored.
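The dependence of a one-sided tolerance limit on the number of test samples can be sketched as follows. This is a generic illustration using the Natrella closed-form approximation for the normal one-sided tolerance factor, not the authors' exact procedure; the coverage/confidence levels and sample sizes are illustrative assumptions.

```python
from statistics import NormalDist
from math import sqrt

def one_sided_tolerance_factor(n, coverage=0.90, confidence=0.90):
    """Approximate one-sided normal tolerance factor k (Natrella approximation).
    A lower SOA bound would then be mean(failure VDS) - k * std(failure VDS)."""
    zp = NormalDist().inv_cdf(coverage)    # quantile for the covered fraction
    zc = NormalDist().inv_cdf(confidence)  # quantile for the confidence level
    a = 1 - zc**2 / (2 * (n - 1))
    b = zp**2 - zc**2 / n
    return (zp + sqrt(zp**2 - a * b)) / a

# With 5 samples the factor is much larger than with 50, forcing a more
# conservative (lower) safe operating voltage for the same data spread.
k5, k50 = one_sided_tolerance_factor(5), one_sided_tolerance_factor(50)
```

The shrinking factor with growing n quantifies the paper's point: small-sample campaigns must either derate heavily or risk missing part-to-part variability.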
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)
1980-01-01
The problem of determining the stratum variances required for an optimum sample allocation for remotely sensed crop surveys is investigated, with emphasis on an approach based on the concept of stratum variance as a function of sampling unit size. A methodology using existing and easily available historical statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to stratum variance estimation for wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily with the use of a conservative value (smaller than the expected value) for the field size and with the use of crop statistics at the small political division level.
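Once stratum variances are in hand, optimum sample allocation is classically given by Neyman allocation. The sketch below illustrates that principle with hypothetical stratum sizes and standard deviations; it is not the paper's estimation procedure, which concerns obtaining the variances themselves.

```python
def neyman_allocation(n_total, stratum_sizes, stratum_sds):
    """Optimum (Neyman) allocation: the sample size for stratum h is
    proportional to N_h * S_h, so larger or more variable strata
    receive more sampling units."""
    weights = [N * S for N, S in zip(stratum_sizes, stratum_sds)]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# Hypothetical wheat strata: equal sizes, but the second stratum is twice
# as variable, so it receives roughly twice the sampling units.
alloc = neyman_allocation(100, [500, 500], [1.0, 2.0])
```

Since the allocation depends on the stratum standard deviations, errors in the initial variance estimates translate directly into suboptimal allocations, which is why the paper's conservative choices matter.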
NASA Astrophysics Data System (ADS)
Schroer, Christian G.; Seyrich, Martin; Kahnt, Maik; Botta, Stephan; Döhrmann, Ralph; Falkenberg, Gerald; Garrevoet, Jan; Lyubomirskiy, Mikhail; Scholz, Maria; Schropp, Andreas; Wittwer, Felix
2017-09-01
In recent years, ptychography has revolutionized x-ray microscopy in that it is able to overcome the diffraction limit of x-ray optics, pushing the spatial resolution limit down to a few nanometers. However, due to the weak interaction of x rays with matter, the detection of small features inside a sample requires a high coherent fluence on the sample, a high degree of mechanical stability, and a low background signal from the x-ray microscope. The x-ray scanning microscope PtyNAMi at PETRA III is designed for high-spatial-resolution 3D imaging with high sensitivity. The design concept is presented with a special focus on real-time metrology of the sample position during tomographic scanning microscopy.
48 CFR 970.1907-1 - Subcontracting plan requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SUPPLEMENTARY REGULATIONS DOE MANAGEMENT AND OPERATING CONTRACTS Small, Small Disadvantaged and Women-Owned Small Business Concerns 970.1907-1 Subcontracting plan requirements. Pursuant to the clause at 48 CFR 52.219-9, Small Business Subcontracting Plan, which is required for all management and operating...
Houts, Carrie R; Edwards, Michael C; Wirth, R J; Deal, Linda S
2016-11-01
There has been a notable increase in the advocacy of using small-sample designs as an initial quantitative assessment of item and scale performance during the scale development process. This is particularly true in the development of clinical outcome assessments (COAs), where Rasch analysis has been advanced as an appropriate statistical tool for evaluating the developing COAs using a small sample. We review the benefits such methods are purported to offer from both a practical and statistical standpoint and detail several problematic areas, including both practical and statistical theory concerns, with respect to the use of quantitative methods, including Rasch-consistent methods, with small samples. The feasibility of obtaining accurate information and the potential negative impacts of misusing large-sample statistical methods with small samples during COA development are discussed.
Predicting herbicide and biocide concentrations in rivers across Switzerland
NASA Astrophysics Data System (ADS)
Wemyss, Devon; Honti, Mark; Stamm, Christian
2014-05-01
Pesticide concentrations vary strongly in space and time. Accordingly, intensive sampling is required to achieve a reliable quantification of pesticide pollution. As this requires substantial resources, loads and concentration ranges in many small and medium streams remain unknown. Here, we propose partially filling the information gap for herbicides and biocides by using a modelling approach that predicts stream concentrations without site-specific calibration, based simply on generally available data such as land use, discharge and nation-wide consumption data. The simple, conceptual model distinguishes herbicide losses from agricultural fields and private gardens and biocide losses from buildings (facades, roofs). The herbicide model is driven by river discharge and the applied herbicide mass; the biocide model requires precipitation and the footprint area of urban areas containing the biocide. The model approach allows for modelling concentrations across multiple catchments at the daily, or shorter, time scale and for small to medium-sized catchments (1–100 km²). Four high-resolution sampling campaigns in the Swiss Plateau were used to calibrate the model parameters for six model compounds: atrazine, metolachlor, terbuthylazine, terbutryn, diuron and mecoprop. Five additional sampled catchments across Switzerland were used to directly compare the predicted to the measured concentrations. Analysis of the first results reveals a reasonable simulation of the concentration dynamics for specific rainfall events and across the seasons. Predicted concentration ranges are reasonable even without site-specific calibration. This indicates the transferability of the calibrated model directly to other areas. However, the results also demonstrate systematic biases in that the highest measured peaks were not attained by the model. Probable causes for these deviations are conceptual model limitations and input uncertainty (pesticide use intensity, local precipitation, etc.).
Accordingly, the model will be conceptually improved. This presentation will present the model simulations and compare the performance of the original and the modified model versions. Finally, the model will be applied across approximately 50% of the catchments in the Swiss Plateau, where necessary input data is available and where the model concept can be reasonably applied.
Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality
Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel
2016-01-01
Many people living in low- and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data, including many household sample surveys, are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991–2010 and two demographic surveillance system sites. We derive a variance estimator of under-five child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey, and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA). PMID:27468328
A novel approach to the measurement of surfactant parameters in arthropod digestive juices.
Romih, Tea; Kogej, Ksenija; Drobne, Damjana
2016-05-01
In arthropods, the determination of two important parameters of digestive juices, i.e. the total surfactant concentration and the critical micelle concentration (CMC), is challenging due to small sample volumes and low surfactant concentrations. In this work, we report a successful implementation of potentiometric titrations using the surfactant ion-selective electrode (SISE) and the pyrene fluorescence method (PFM) for the determination of the total surfactant concentration and CMC in the digestive juice of the terrestrial isopod crustacean Porcellio scaber. Pooled digestive juice extracts of four (SISE) or two (PFM) animals were used per measurement run. In both cases, digestive juice extracts in 100 μL of deionized water were sufficient for one measurement run. The total surfactant concentration of P. scaber digestive juice was determined to be 9.2 ± 3.5 mM and the CMC was approximately 90 μM. Our work presents an important improvement towards easy CMC determination in small-volume samples in comparison with the commonly used stalagmometric technique, where much larger sample volumes are usually needed. To date, the total surfactant concentration had not been measured in the digestive juices of arthropods other than Homarus vulgaris, Astacus leptodactylus and Cancer pagurus, for which complex separation and analytical techniques were required. Our results obtained by SISE and PFM therefore present the first successful quantification of surfactants and their CMC in small volumes of arthropod digestive juice without prior separation or purification techniques. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mdluli, Fezile; Thamaga-Chitja, Joyce; Schmidt, Stefan
2013-01-01
During October, November and December 2011 (when highest sales of Agri-Hub fresh produce are observed), irrigation water, compost, lettuce and spinach sampled from four different farmer cooperatives supplying the local Agri-Hub in uMbumbulu (KwaZulu-Natal, South Africa) were analyzed monthly for the presence of total and fecal coliforms and Escherichia coli using the most probable number (MPN) technique. The pH values for all irrigation water samples analyzed were within the acceptable range of 6.5–8.5 for agricultural use. Fecal coliform levels were <1,000 MPN per 100 mL irrigation water and <1,000 MPN per g of compost. The vegetables produced by Agri-Hub small-scale farmers met the requirements for total coliforms of <200/g set by the South African Department of Health at the time of sampling. E. coli MPN values for irrigation water and vegetables were below the limit of detection. In addition, the farming practices of 73 farmers were assessed via a survey. The results revealed that more than 40% of farmers used microbiologically safe tap water for irrigation and that trained farmers have a significantly better understanding of the importance of production hygiene than untrained farmers. These results reiterate the importance of interventions that build capacity in the area of food safety and hygiene of small-scale farmers for market access of formal value chains. PMID:24065036
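The most probable number (MPN) technique used above estimates bacterial density from the pattern of positive and negative tubes in a dilution series. A minimal maximum-likelihood sketch follows, assuming at least one negative tube; the 3-tube dilution scheme shown is the classic textbook series, not necessarily the one used in the study.

```python
import math

def mpn_estimate(tubes, positives, volumes_ml):
    """Maximum-likelihood MPN (organisms per mL) for a dilution series:
    tubes[i] replicate tubes each inoculated with volumes_ml[i] mL of
    sample, of which positives[i] showed growth.
    Assumes at least one negative tube, else the MLE diverges."""
    def score(lam):
        # Derivative of the log-likelihood in lam; the MLE is its root.
        return sum(g * v / (1.0 - math.exp(-lam * v)) - t * v
                   for t, g, v in zip(tubes, positives, volumes_ml))
    lo, hi = 1e-9, 1e6
    for _ in range(100):  # bisection: score is decreasing in lam
        mid = (lo + hi) / 2.0
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Classic 3-tube series at 10, 1 and 0.1 mL with positive pattern 3-1-0;
# standard MPN tables list roughly 0.43 organisms per mL for this pattern.
density = mpn_estimate([3, 3, 3], [3, 1, 0], [10.0, 1.0, 0.1])
```

Published MPN tables are simply precomputed values of this maximum-likelihood estimate for the common tube/volume combinations.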
Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R
2016-04-15
A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single-stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single-stage personalized treatment strategy. The proposed method is based on inverting a plug-in projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.
Color and Vector Flow Imaging in Parallel Ultrasound With Sub-Nyquist Sampling.
Madiena, Craig; Faurie, Julia; Poree, Jonathan; Garcia, Damien
2018-05-01
RF acquisition with a high-performance multichannel ultrasound system generates massive data sets in short periods of time, especially in "ultrafast" ultrasound when digital receive beamforming is required. Sampling at a rate four times the carrier frequency is the standard procedure since this rule complies with the Nyquist-Shannon sampling theorem and simplifies quadrature sampling. Bandpass sampling (or undersampling) outputs a bandpass signal at a rate lower than the maximal frequency without harmful aliasing. Advantages over Nyquist sampling are reduced storage volumes and data workflow, and simplified digital signal processing tasks. We used RF undersampling in color flow imaging (CFI) and vector flow imaging (VFI) to decrease data volume significantly (factor of 3 to 13 in our configurations). CFI and VFI with Nyquist and sub-Nyquist samplings were compared in vitro and in vivo. The estimate errors due to undersampling were small or marginal, which illustrates that Doppler and vector Doppler images can be correctly computed with a drastically reduced amount of RF samples. Undersampling can be a method of choice in CFI and VFI to avoid information overload and reduce data transfer and storage.
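The bandpass sampling theorem invoked above constrains the admissible sub-Nyquist rates: for a band [f_L, f_H], any rate f_s with 2·f_H/n ≤ f_s ≤ 2·f_L/(n−1) for some integer n ≤ floor(f_H/B) avoids destructive aliasing. A minimal sketch, where the 4-6 MHz band is an illustrative ultrasound example rather than the paper's actual configuration:

```python
def valid_bandpass_rates(f_low, f_high):
    """Enumerate valid bandpass-sampling rate intervals (Hz) for a signal
    occupying [f_low, f_high], per the bandpass sampling theorem:
    2*f_high/n <= fs <= 2*f_low/(n-1) for n = 1..floor(f_high/bandwidth)."""
    bandwidth = f_high - f_low
    n_max = int(f_high // bandwidth)  # number of usable spectral zones
    intervals = []
    for n in range(1, n_max + 1):
        lo = 2.0 * f_high / n
        hi = 2.0 * f_low / (n - 1) if n > 1 else float("inf")
        if lo <= hi:
            intervals.append((lo, hi))
    return intervals

# Illustrative 4-6 MHz band: lowpass-style Nyquist sampling needs >= 12 MHz,
# but bandpass sampling admits rates as low as 4 MHz, a 3x data reduction.
rates = valid_bandpass_rates(4e6, 6e6)
```

Rates inside the gaps between these intervals fold spectral replicas onto the band of interest, which is why the admissible zones must be enumerated rather than picking any rate above twice the bandwidth.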
Identifying Etiological Agents Causing Diarrhea in Low Income Ecuadorian Communities
Vasco, Gabriela; Trueba, Gabriel; Atherton, Richard; Calvopiña, Manuel; Cevallos, William; Andrade, Thamara; Eguiguren, Martha; Eisenberg, Joseph N. S.
2014-01-01
Continued success in decreasing diarrheal disease burden requires targeted interventions. To develop such interventions, it is crucial to understand which pathogens cause diarrhea. Using a case-control design we tested stool samples, collected in both rural and urban Ecuador, for 15 pathogenic microorganisms. Pathogens were present in 51% of case and 27% of control samples from the urban community, and 62% of case and 18% of control samples collected from the rural community. Rotavirus and Shigellae were associated with diarrhea in the urban community; co-infections were more pathogenic than single infection; Campylobacter and Entamoeba histolytica were found in large numbers in cases and controls; and non-typhi Salmonella and enteropathogenic Escherichia coli were not found in any samples. Consistent with the Global Enteric Multicenter Study, focused in south Asia and sub-Saharan Africa, we found that in Ecuador a small group of pathogens accounted for a significant amount of the diarrheal disease burden. PMID:25048373
Pye, Kenneth; Blott, Simon J
2004-08-11
Particle size is a fundamental property of any sediment, soil or dust deposit which can provide important clues to its nature and provenance. For forensic work, the particle size distribution of sometimes very small samples requires precise determination using a rapid and reliable method with high resolution. The Coulter™ LS230 laser granulometer offers rapid and accurate sizing of particles in the range 0.04–2000 μm for a variety of sample types, including soils, unconsolidated sediments, dusts, powders and other particulate materials. Reliable results are possible for sample weights of just 50 mg. Discrimination between samples is performed on the basis of the shape of the particle size curves and statistical measures of the size distributions. In routine forensic work laser granulometry data can rarely be used in isolation and should be considered in combination with results from other techniques to reach an overall conclusion.
Evaluation of ultra-low background materials for uranium and thorium using ICP-MS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoppe, E. W.; Overman, N. R.; LaFerriere, B. D.
2013-08-08
An increasing number of physics experiments require low background materials for their construction. The presence of Uranium and Thorium and their progeny in these materials presents a variety of unwanted background sources for these experiments. The sensitivity of the experiments continues to drive the necessary levels of detection ever lower as well. This requirement for greater sensitivity has rendered direct radioassay impractical in many cases, requiring large quantities of material, frequently many kilograms, and prolonged counting times, often months. Other assay techniques have been employed, such as Neutron Activation Analysis, but this requires access to expensive facilities and instrumentation and can be further complicated and delayed by the formation of unwanted radionuclides. Inductively Coupled Plasma Mass Spectrometry (ICP-MS) is a useful tool, and recent advancements have increased its sensitivity, particularly in the elemental high mass range of U and Th. Unlike direct radioassay, ICP-MS is a destructive technique, since it requires the sample to be in liquid form, which is aspirated into a high-temperature plasma. But it benefits in that it usually requires a very small sample, typically about a gram. This paper discusses how a variety of low background materials such as copper, polymers, and fused silica are made amenable to ICP-MS assay and how the arduous task of maintaining low backgrounds of U and Th is achieved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koh, Chung-Yan; Piccini, Matthew Ernest; Schaff, Ulrich Y.
Multiple cases of attempted bioterrorism using biotoxins have highlighted the urgent need for tools capable of rapid screening of suspect samples in the field (e.g., mailrooms and public events). We present a portable microfluidic device capable of analyzing environmental (e.g., white powder), food (e.g., milk) and clinical (e.g., blood) samples for multiplexed detection of biotoxins. The device is rapid (<15-30 min sample-to-answer), sensitive (<0.08 pg/mL detection limit for botulinum toxin), multiplexed (up to 64 parallel assays) and capable of analyzing small-volume samples (<20 μL total sample input). The immunoassay approach (SpinDx) is based on binding of toxins in a sample to antibody-laden capture particles, followed by sedimentation of the particles through a density medium in a microfluidic disk and quantification using a laser-induced fluorescence detector. A direct, blinded comparison with a gold-standard ELISA revealed a 5-fold more sensitive detection limit for botulinum toxin while requiring 250-fold less sample volume and a 30-minute assay time, with a near-unity correlation. A key advantage of the technique is its compatibility with a variety of sample matrices with no additional sample preparation required. Ultrasensitive quantification has been demonstrated by direct analysis of multiple clinical, environmental and food samples, including white powder, whole blood, saliva, salad dressing, whole milk, peanut butter, half and half, honey, and canned meat. We believe that this device can meet an urgent need in screening both potentially exposed people and suspicious samples in mailrooms, airports, public sporting venues and emergency rooms. The general-purpose immunodiagnostics device can also find applications in screening of infectious and systemic diseases or serve as a lab device for conducting rapid immunoassays.
Theory of using magnetic deflections to combine charged particle beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steckbeck, Mackenzie K.; Doyle, Barney Lee
2014-09-01
Several radiation effects projects in the Ion Beam Lab (IBL) have recently required two disparate charged particle beams to simultaneously strike a single sample through a single port of the target chamber. Because these beams have vastly different mass-energy products (MEP), the low-MEP beam requires a large angle of deflection toward the sample by a bending electromagnet. A second electromagnet located further upstream provides a means to compensate for the small-angle deflection experienced by the high-MEP beam during its path through the bending magnet. This paper derives the equations used to select the magnetic fields required by these two magnets to unite both beams at the target sample. A simple result is obtained when the separation of the two magnets equals the distance from the bending magnet to the sample: B_s = (1/2)(r_c/r_s)B_c, where B_s and B_c are the magnetic fields in the steering and bending magnets and r_c/r_s is the ratio of the radius of the bending magnet to that of the steering magnet. This result does not depend on the parameters of the high-MEP beam, i.e. energy, mass, charge state. Therefore, once the field of the bending magnet is set for the low-MEP beam and the field in the steering magnet is set as indicated in the equation, the trajectory of any high-MEP beam will be directed into the sample.
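The field relation quoted above can be checked numerically. A minimal sketch, assuming the stated geometry (magnet separation equal to the bending-magnet-to-sample distance); the bending field and radii below are hypothetical values for illustration, not numbers from the paper:

```python
# Sketch of the combined-beam field relation B_s = (1/2) * (r_c / r_s) * B_c,
# where B_c is the bending-magnet field set for the low-MEP beam and B_s is
# the upstream steering-magnet field. Radii and field values are hypothetical.

def steering_field(b_bend_tesla, r_bend_m, r_steer_m):
    """Steering-magnet field needed so the high-MEP beam rejoins the
    low-MEP beam at the sample, per the relation derived in the paper."""
    return 0.5 * (r_bend_m / r_steer_m) * b_bend_tesla

# Hypothetical numbers: 0.8 T bending field, radius ratio r_c/r_s = 2.
b_s = steering_field(b_bend_tesla=0.8, r_bend_m=0.5, r_steer_m=0.25)
print(b_s)  # 0.8 T for this radius ratio of 2
```

Note that, as the abstract states, no property of the high-MEP beam (energy, mass, charge state) enters the relation; only the two radii and the bending field do.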
NASA Astrophysics Data System (ADS)
Palmer, M. R.; Arata, C.; Huang, K.
2014-12-01
Nitrous oxide (N2O) gas is among the major contributors to global warming and to ozone depletion in the stratosphere. Quantitative estimates of N2O production via various pathways and of N2O fluxes across different reservoirs are key to understanding the role of N2O in global change. To achieve this goal, accurate and concurrent measurement of both N2O concentration ([N2O]) and its site-specific isotopic composition (SP-δ15N), namely δ15Nα and δ15Nβ, is desired. Recent developments in Cavity Ring-Down Spectroscopy (CRDS) have enabled high-precision measurements of [N2O] and SP-δ15N of a continuous gas flow. However, many N2O samples are discrete with limited volume (< 500 ml) and/or high [N2O] (> 2 ppm), and are not suitable for direct measurement by CRDS. Here we present results from a Small Sample Isotope Module 2 (SSIM2) which is coupled to and automatically coordinated with a Picarro isotopic N2O CRDS analyzer to handle and measure high-concentration and/or small-volume samples. The SSIM2 requires 20 ml of sample per analysis and transfers the sample to the CRDS for high-precision measurement. When the sample injection is < 20 ml, a zero gas is optionally filled to make up the volume. We used the SSIM2 to dilute high-[N2O] samples and < 20 ml samples, and tested the effect of dilution on the measured SP-δ15N. In addition, we employed and tested a newly developed double injection method for samples adequate for two 20 ml injections. After the SSIM2 and the CRDS cavity were primed with the first injection, the second injection, which involves negligible dilution of the sample, can be accurately measured for both [N2O] and SP-δ15N. Results of these experiments indicate that the precision of SSIM2-CRDS is similar to that of continuous measurements using the CRDS alone, and that dilution has minimal effect on SP-δ15N, as long as the [N2O] is > 300 ppb after dilution. Overall, the precision of SP-δ15N measured using the SSIM2 is < 0.5 ‰.
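The zero-gas make-up step described above implies a simple proportional dilution of [N2O]. A minimal sketch of that arithmetic, with illustrative (not measured) values, including the abstract's 300 ppb floor below which SP-δ15N accuracy degrades:

```python
# Sketch of the SSIM2 zero-gas make-up dilution: a sample smaller than the
# 20 mL module volume is topped up with zero gas, diluting [N2O]
# proportionally. Sample values below are illustrative only.

MODULE_VOLUME_ML = 20.0
MIN_N2O_PPB = 300.0  # abstract: SP-d15N is unaffected while [N2O] > 300 ppb

def diluted_n2o_ppb(sample_ppb, sample_ml):
    """[N2O] after topping a small injection up to the module volume."""
    if sample_ml >= MODULE_VOLUME_ML:
        return sample_ppb  # no make-up gas needed
    return sample_ppb * sample_ml / MODULE_VOLUME_ML

c = diluted_n2o_ppb(sample_ppb=2000.0, sample_ml=5.0)  # 500 ppb after make-up
print(c, c > MIN_N2O_PPB)
```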
2017-01-01
Direct analysis by mass spectrometry (imaging) has become increasingly deployed in preclinical and clinical research due to its rapid and accurate readouts. However, when it comes to biomarker discovery or histopathological diagnostics, more sensitive and in-depth profiling from localized areas is required. We developed a comprehensive, fully automated online platform for high-resolution liquid extraction surface analysis (HR-LESA) followed by micro-liquid chromatography (LC) separation and a data-independent acquisition strategy for untargeted identification of low-abundance analytes directly from tissue sections. Applied to tissue sections of rat pituitary, the platform demonstrated improved spatial resolution, allowing sample areas as small as 400 μm to be studied, a major advantage over conventional LESA. The platform integrates an online buffer exchange and washing step for removal of salts and other endogenous contamination that originates from local tissue extraction. Our carryover-free platform showed high reproducibility, with an interextraction variability below 30%. Another strength of the platform is the additional selectivity provided by a postsampling gas-phase ion mobility separation. This allowed coeluting isobaric compounds to be distinguished without requiring additional separation time. Furthermore, we identified untargeted and low-abundance analytes, including neuropeptides deriving from the pro-opiomelanocortin precursor protein, and localized a specific area of the pituitary gland (i.e., the adenohypophysis) known to secrete neuropeptides and other small metabolites related to development, growth, and metabolism. This platform can thus be applied to the in-depth study of small samples of complex tissues with histologic features of ∼400 μm or more, including potential neuropeptide markers involved in diseases such as neurodegenerative diseases, obesity, bulimia, and anorexia nervosa. PMID:28945354
2009-02-01
data was linearly fit, and the slope yielded the Seebeck coefficient. A small resistor was epoxied to the top of the sample, and the opposite end...space probes in its radioisotope thermoelectric generators (RTGs) and is of current interest to automobile manufacturers to supply additional power... resistivity or conductivity, thermal conductivity, and Seebeck coefficient. These required measurements are demanding, especially the thermal
Optimization of Composting for Explosives Contaminated Soil
1991-09-30
undesirable and essentially economically unfeasible for the remediation of small sites due to the large expenditures required for the mobilization and...mm, 5 micron. * Detector: UV absorbance at 250 nm. * Mobile phase: 52% methanol/48% water. * Flow rate: 1.5 mL/min. * Injection volume: 50 µL. The...and 10x calibration standards. Samples were diluted with mobile phase as necessary to bring target analytes into the
48 CFR 852.219-9 - VA Small business subcontracting plan minimum requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false VA Small business... Provisions and Clauses 852.219-9 VA Small business subcontracting plan minimum requirements. As prescribed in subpart 819.709, insert the following clause: VA Small Business Subcontracting Plan Minimum Requirements...
48 CFR 852.219-9 - VA Small business subcontracting plan minimum requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false VA Small business... Provisions and Clauses 852.219-9 VA Small business subcontracting plan minimum requirements. As prescribed in subpart 819.709, insert the following clause: VA Small Business Subcontracting Plan Minimum Requirements...
48 CFR 852.219-9 - VA Small business subcontracting plan minimum requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false VA Small business... Provisions and Clauses 852.219-9 VA Small business subcontracting plan minimum requirements. As prescribed in subpart 819.709, insert the following clause: VA Small Business Subcontracting Plan Minimum Requirements...
48 CFR 852.219-9 - VA Small business subcontracting plan minimum requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false VA Small business... Provisions and Clauses 852.219-9 VA Small business subcontracting plan minimum requirements. As prescribed in subpart 819.709, insert the following clause: VA Small Business Subcontracting Plan Minimum Requirements...
48 CFR 852.219-9 - VA Small business subcontracting plan minimum requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false VA Small business... Provisions and Clauses 852.219-9 VA Small business subcontracting plan minimum requirements. As prescribed in subpart 819.709, insert the following clause: VA Small Business Subcontracting Plan Minimum Requirements...
Spatially resolved δ13C analysis using laser ablation isotope ratio mass spectrometry
NASA Astrophysics Data System (ADS)
Moran, J.; Riha, K. M.; Nims, M. K.; Linley, T. J.; Hess, N. J.; Nico, P. S.
2014-12-01
Inherent geochemical, organic matter, and microbial heterogeneity over small spatial scales can complicate studies of carbon dynamics through soils. Stable isotope analysis has a strong history of helping to track substrate turnover, delineate rhizosphere activity zones, and identify transitions in vegetation cover, but most traditional isotope approaches are limited in spatial resolution by a combination of physical separation techniques (manual dissection) and IRMS instrument sensitivity. We coupled laser ablation sampling with isotope measurement via IRMS to enable spatially resolved analysis over solid surfaces. Once a targeted sample region is ablated, the resulting particulates are entrained in a helium carrier gas and passed through a combustion reactor where carbon is converted to CO2. Cryotrapping of the resulting CO2 enables a reduction in carrier gas flow, which improves overall measurement sensitivity versus traditional, high-flow sample introduction. Currently we perform sample analysis at 50 μm resolution, require 65 ng C per analysis, and achieve measurement precision consistent with other continuous flow techniques. We will discuss applications of the laser ablation IRMS (LA-IRMS) system to microbial communities and fish ecology studies to demonstrate the merits of this technique and how similar analytical approaches can be transitioned to soil systems. Preliminary efforts at analyzing soil samples will be used to highlight strengths and limitations of the LA-IRMS approach, paying particular attention to sample preparation requirements, spatial resolution, sample analysis time, and the types of questions most conducive to analysis via LA-IRMS.
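The δ13C values that LA-IRMS reports use standard delta notation, δ13C (‰) = (R_sample/R_standard − 1) × 1000, relative to the VPDB standard. A minimal sketch of that conversion; the VPDB ratio is the commonly cited literature value, and the sample ratio below is illustrative:

```python
# Standard delta notation for 13C measurements such as those above:
#   d13C (permil) = (R_sample / R_VPDB - 1) * 1000
# R_VPDB is the 13C/12C ratio of the Vienna Pee Dee Belemnite standard.

R_VPDB = 0.011180  # commonly cited 13C/12C ratio of the VPDB standard

def delta13c_permil(r_sample):
    """Convert a measured 13C/12C ratio to permil delta notation."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

# An illustrative 13C-depleted sample ratio gives a negative delta value.
print(round(delta13c_permil(0.010900), 2))  # ≈ -25 permil
```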
Pham-Tuan, Hai; Kaskavelis, Lefteris; Daykin, Clare A; Janssen, Hans-Gerd
2003-06-15
"Metabonomics" has in the past decade demonstrated enormous potential in furthering the understanding of, for example, disease processes, toxicological mechanisms, and biomarker discovery. The same principles can also provide a systematic and comprehensive approach to the study of the impact of food ingredients on consumer health. However, "metabonomic" methodology requires the development of rapid, advanced analytical tools to comprehensively profile biofluid metabolites within consumers. Until now, NMR spectroscopy has been used for this purpose almost exclusively. Chromatographic techniques, and in particular HPLC, have not been exploited accordingly. The main drawbacks of chromatography are the long analysis time, instabilities in the sample fingerprint, and the rigorous sample preparation required. This contribution addresses these problems in the quest to develop generic methods for high-throughput profiling using HPLC. After a careful optimization process, stable fingerprints of biofluid samples can be obtained using standard HPLC equipment. A method using a short monolithic column and a rapid gradient with a high flow rate has been developed that allows rapid and detailed profiling of large numbers of urine samples. The method can easily be translated into a slow, shallow-gradient high-resolution method for identification of interesting peaks by LC-MS/NMR. A similar approach has been applied to cell culture media samples. Due to the much higher protein content of such samples, non-porous polymer-based small-particle columns yielded the best results. The study clearly shows that HPLC can be used in metabonomic fingerprinting studies.
Geldenhuys, G; Rohwer, E R; Naudé, Y; Forbes, P B C
2015-02-06
Concentrations of diesel particulate matter and polycyclic aromatic hydrocarbons (PAHs) in platinum mine environments are likely to be higher than in ambient air due to the use of diesel machinery in confined spaces. Airborne PAHs may be present in the gaseous or particle phase, each of which has different human health impacts due to their ultimate fate in the body. Here we report on the simultaneous sampling of both phases of airborne PAHs, for the first time in underground platinum mines in South Africa. This was made possible by employing small, portable denuder sampling devices, each consisting of two polydimethylsiloxane (PDMS) multi-channel traps connected in series and separated by a quartz fibre filter, which require only small, battery-operated portable personal sampling pumps for air sampling. Thermal desorption coupled with comprehensive gas chromatography-mass spectrometry (TD-GC×GC-ToFMS) was used to analyse denuder samples taken in three different platinum mines. The samples from a range of underground environments revealed that PAHs were predominantly found in the gas phase, with naphthalene and mono-methylated naphthalene derivatives detected at the highest concentrations, ranging from 0.01 to 18 μg m(-3). The particle-bound PAHs were found at the highest concentrations at the idling load haul dump vehicle exhausts, with a dominance of fluoranthene and pyrene. Particle-associated PAH concentrations ranged from 0.47 to 260 ng m(-3) and included benzo[k]fluoranthene, benzo[a]pyrene, indeno[1,2,3-cd]pyrene and benzo[ghi]perylene. This work highlights the need to characterise both phases in order to assess occupational exposure to PAHs in this challenging sampling environment. Copyright © 2015 Elsevier B.V. All rights reserved.
A Maximum Entropy Test for Evaluating Higher-Order Correlations in Spike Counts
Onken, Arno; Dragoi, Valentin; Obermayer, Klaus
2012-01-01
Evaluating the importance of higher-order correlations of neural spike counts has been notoriously hard. A large number of samples is typically required to estimate higher-order correlations and the resulting information-theoretic quantities. In typical electrophysiology data sets with many experimental conditions, however, the number of samples in each condition is rather small. Here we describe a method that allows one to quantify evidence for higher-order correlations in exactly these cases. We construct a family of reference distributions: maximum entropy distributions, which are constrained only by marginals and by linear correlations as quantified by the Pearson correlation coefficient. We devise a Monte Carlo goodness-of-fit test, which tests, for a given divergence measure of interest, whether the experimental data lead to rejection of the null hypothesis that they were generated by one of the reference distributions. Applying our test to artificial data shows that the effects of higher-order correlations on these divergence measures can be detected even when the number of samples is small. Subsequently, we apply our method to spike count data which were recorded with multielectrode arrays from the primary visual cortex of an anesthetized cat during an adaptation experiment. Using mutual information as a divergence measure, we find that there are spike count bin sizes at which the maximum entropy hypothesis can be rejected for a substantial number of neuronal pairs. These results demonstrate that higher-order correlations can matter when estimating information-theoretic quantities in V1. They also show that our test is able to detect their presence in typical in vivo data sets, where the number of samples is too small to estimate higher-order correlations directly. PMID:22685392
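The Monte Carlo goodness-of-fit machinery described above can be sketched generically: draw many surrogate datasets from the reference (null) distribution, compute the divergence statistic on each, and report the fraction at least as extreme as the observed value. This is a simplified illustration only; the Gaussian null and the variance statistic below are placeholders for the paper's maximum entropy reference distributions and information-theoretic divergence measures:

```python
# Generic Monte Carlo goodness-of-fit p-value: the fraction of statistics
# computed on null-distribution draws that reach the observed statistic.
# Null sampler and test statistic here are illustrative stand-ins.
import random
import statistics

def monte_carlo_p_value(observed_stat, null_sampler, statistic,
                        n_sim=2000, seed=0):
    rng = random.Random(seed)
    hits = sum(statistic(null_sampler(rng)) >= observed_stat
               for _ in range(n_sim))
    return (hits + 1) / (n_sim + 1)  # add-one correction for a valid p-value

null = lambda rng: [rng.gauss(0.0, 1.0) for _ in range(30)]  # stand-in null
stat = statistics.variance                                   # stand-in measure

# "Observed" data with 3x the null's spread should be firmly rejected.
observed = stat([3.0 * x for x in null(random.Random(1))])
p = monte_carlo_p_value(observed, null_sampler=null, statistic=stat)
print(p)  # a very small p-value: the inflated-variance data is rejected
```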
Messner, Michael J; Berger, Philip; Javier, Julie
2017-06-01
Public water systems (PWSs) in the United States generate total coliform (TC) and Escherichia coli (EC) monitoring data, as required by the Total Coliform Rule (TCR). We analyzed data generated in 2011 by approximately 38,000 small (serving fewer than 4101 individuals) undisinfected PWSs. We used statistical modeling to characterize a distribution of TC detection probabilities for each of nine groupings of PWSs based on system type (community, non-transient non-community, and transient non-community) and population served (fewer than 101, 101-1000, and 1001-4100 people). We found that among PWS types sampled in 2011, on average, undisinfected transient PWSs test positive for TC 4.3% of the time, compared with 3% for undisinfected non-transient PWSs and 2.5% for undisinfected community PWSs. Within each type of PWS, the smaller systems have higher median TC detection than the larger systems. All TC-positive samples were assayed for EC. Among TC-positive samples from small undisinfected PWSs, EC is detected in about 5% of samples, regardless of PWS type or size. We evaluated the upper tail of the TC detection probability distributions and found that significant percentages of some system types have high TC detection probabilities. For example, assuming the systems providing data are nationally representative, 5.0% of the ∼50,000 small undisinfected transient PWSs in the U.S. have TC detection probabilities of 20% or more. Communities with such high TC detection probabilities may have an elevated risk of acute gastrointestinal (AGI) illness, perhaps as great as or greater than the risk attributable to drinking water (6-22%) calculated for 14 Wisconsin community PWSs with much lower TC detection probabilities (about 2.3%; Borchardt et al., 2012). Published by Elsevier GmbH.
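The upper-tail evaluation described above amounts to asking what share of a fitted distribution of detection probabilities exceeds 20%. A minimal sketch of that exceedance calculation; the Beta distribution and its parameters below are hypothetical illustrations, not the paper's fitted model:

```python
# Sketch of an exceedance calculation over a distribution of per-system TC
# detection probabilities: estimate the share of systems whose detection
# probability is >= 20%. The Beta(alpha, beta) parameters are hypothetical,
# chosen only so the mean sits near the ~4-5% averages quoted above.
import random

def share_exceeding(alpha, beta, threshold=0.20, n=100_000, seed=0):
    """Monte Carlo estimate of P(detection probability >= threshold)."""
    rng = random.Random(seed)
    return sum(rng.betavariate(alpha, beta) >= threshold
               for _ in range(n)) / n

print(share_exceeding(alpha=0.3, beta=6.0))  # hypothetical parameters
```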
Code of Federal Regulations, 2010 CFR
2010-07-01
... as a small refiner for the gasoline benzene requirements of this subpart? 80.1338 Section 80.1338... FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1338 What criteria must be met to qualify as a small refiner for the gasoline benzene requirements of this subpart? (a) A small...
Code of Federal Regulations, 2013 CFR
2013-07-01
... as a small refiner for the gasoline benzene requirements of this subpart? 80.1338 Section 80.1338... FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1338 What criteria must be met to qualify as a small refiner for the gasoline benzene requirements of this subpart? (a) A small...
Code of Federal Regulations, 2012 CFR
2012-07-01
... as a small refiner for the gasoline benzene requirements of this subpart? 80.1338 Section 80.1338... FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1338 What criteria must be met to qualify as a small refiner for the gasoline benzene requirements of this subpart? (a) A small...
Code of Federal Regulations, 2011 CFR
2011-07-01
... as a small refiner for the gasoline benzene requirements of this subpart? 80.1338 Section 80.1338... FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1338 What criteria must be met to qualify as a small refiner for the gasoline benzene requirements of this subpart? (a) A small...
Code of Federal Regulations, 2014 CFR
2014-07-01
... as a small refiner for the gasoline benzene requirements of this subpart? 80.1338 Section 80.1338... FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1338 What criteria must be met to qualify as a small refiner for the gasoline benzene requirements of this subpart? (a) A small...
Inductive Non-Contact Position Sensor
NASA Technical Reports Server (NTRS)
Youngquist, Robert; Garcia, Alyssa; Simmons, Stephen
2010-01-01
Optical hardware has been developed to measure the depth of defects in the Space Shuttle Orbiter's windows. In this hardware, a mirror is translated such that its position corresponds to the defect's depth, so the depth measurement problem is transferred to a mirror-position measurement problem. This is preferable because the mirror is internal to the optical system and thus accessible. Based on requirements supplied by the window inspectors, the depth of the defects needs to be measured over a range of 200 microns with a resolution of about 100 nm and an accuracy of about 400 nm. These same requirements then apply to measuring the position of the mirror; in addition, since this is a scanning system, a response time of about 10 ms is needed. A market search found no sensor that met these requirements and also fit into the available housing volume (less than one cubic inch), so a novel sensor configuration was constructed. This new sensor generates a nearly linearly varying magnetic field over a small region of space, which can easily be sampled, resulting in a voltage proportional to position. Experiments were done with a range of inductor values, drive voltages, drive frequencies, and inductor shapes. A rough mathematical model was developed for the device that, in most aspects, describes how it operates and what electrical parameters should be chosen for best performance. The final configuration met all the requirements, yielding a small rugged sensor that was easy to use and had nanometer resolution over more than the required 200-micron range. The inductive position sensor is a compact device (potentially as small as 2 cubic centimeters) which offers nanometer position resolution over a demonstrated range of nearly 1 mm. One of its advantages is the simplicity of its electrical design.
Also, the sensor resolution is nearly uniform across its operational range, which is in contrast to eddy current and capacitive sensors whose sensitivity is dependent upon position.
Big assumptions for small samples in crop insurance
Ashley Elaine Hungerford; Barry Goodwin
2014-01-01
The purpose of this paper is to investigate the effects of crop insurance premiums being determined by small samples of yields that are spatially correlated. If spatial autocorrelation and small sample size are not properly accounted for in premium ratings, the premium rates may inaccurately reflect the risk of a loss.
Standard Deviation for Small Samples
ERIC Educational Resources Information Center
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
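The small-sample variance representations mentioned above can be checked against the pairwise-difference identity s² = Σ_{i<j}(x_i − x_j)² / (n(n−1)), which for n = 3 needs only the three pairwise differences and is easy to evaluate mentally for integer data. A short sketch (the identity is standard; the specific sample is illustrative):

```python
# Sample variance via the pairwise-difference identity
#   s^2 = sum_{i<j} (x_i - x_j)^2 / (n * (n - 1)),
# convenient for small n: for n = 3 only three squared differences are needed.
from itertools import combinations
import statistics

def variance_pairwise(xs):
    n = len(xs)
    return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

data = [2, 5, 11]  # small integer sample: mental arithmetic suffices
print(variance_pairwise(data))    # (9 + 81 + 36) / 6 = 21.0
print(statistics.variance(data))  # agrees with the usual n-1 sample variance
```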
Challenging Conventional Wisdom for Multivariate Statistical Models with Small Samples
ERIC Educational Resources Information Center
McNeish, Daniel
2017-01-01
In education research, small samples are common because of financial limitations, logistical challenges, or exploratory studies. With small samples, statistical principles on which researchers rely do not hold, leading to trust issues with model estimates and possible replication issues when scaling up. Researchers are generally aware of such…
Statistical Analysis Techniques for Small Sample Sizes
NASA Technical Reports Server (NTRS)
Navard, S. E.
1984-01-01
The small-sample-size problem encountered in the analysis of space-flight data is examined. Because only a limited amount of data is available, careful analyses are essential to extract the maximum amount of information with acceptable accuracy. Statistical analysis of small samples is described. The background material necessary for understanding statistical hypothesis testing is outlined, and the various tests that can be performed on small samples are explained. Emphasis is on the underlying assumptions of each test and on the considerations needed to choose the most appropriate test for a given type of analysis.
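A classic example of the small-sample hypothesis tests alluded to above is the one-sample t statistic, whose Student-t null distribution (n − 1 degrees of freedom) accounts for the extra uncertainty of estimating the variance from few observations. A minimal sketch with an illustrative sample:

```python
# One-sample t statistic, a standard small-sample test: compares the sample
# mean against a hypothesized mean mu0, scaled by the estimated standard
# error. Under the null it follows a Student t distribution with n-1 df.
import math
import statistics

def t_statistic(sample, mu0):
    n = len(sample)
    return (statistics.mean(sample) - mu0) / (statistics.stdev(sample) / math.sqrt(n))

sample = [5.1, 4.8, 5.4, 5.0, 4.9]  # illustrative small sample (n = 5)
print(round(t_statistic(sample, mu0=5.0), 3))
```

For a two-sided test, the resulting statistic would be compared against the Student t critical value with n − 1 degrees of freedom.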
NASA Astrophysics Data System (ADS)
Moran, J.; Kelly, J.; Sams, R.; Newburn, M.; Kreuzer, H.; Alexander, M.
2011-12-01
Quick incorporation of IR-spectroscopy-based isotope measurements into cutting-edge research in biogeochemical cycling attests to the advantages of a spectroscopic versus mass spectrometric method for making some 13C measurements. The simple principles of optical spectroscopy allow field portability and provide a more robust general platform for isotope measurements. We present results with a new capillary absorption spectrometer (CAS) capable of reducing the sample size required for high-precision isotopic measurements to the picomole level and potentially the sub-picomole level. This work was motivated by the minute sample size requirements of laser ablation isotopic studies of carbon cycling in microbial communities, but has potential to be a valuable tool in other areas of biological and geological research. The CAS instrument utilizes a capillary waveguide as a sample chamber for interrogating CO2 via IR laser absorption spectroscopy. The capillary's small volume (~0.5 mL), combined with propagation and interaction of the laser mode with the entire sample, reduces sample size requirements to a fraction of that accessible with commercially available IR absorption instruments, including multi-pass and cavity ring-down systems. Using a continuous quantum cascade laser system to probe nearly adjacent rovibrational transitions of different isotopologues of CO2 near 2307 cm-1 permits sample measurement at low analyte pressures (as low as 2 Torr) for further sensitivity improvement. A novel method to reduce cw-fringing noise in the hollow waveguide is presented, which allows weak absorbance features to be studied at the few-ppm level after averaging 1,000 scans in 10 seconds. Detection limits down to 20 picomoles have been observed, corresponding to a concentration of approximately 400 ppm at 2 Torr in the waveguide, with precision and accuracy at or better than 1%.
Improvements in detection and signal-averaging electronics and in laser power and mode quality are anticipated to reduce the required sample size to 100-200 femtomoles of carbon. We report the application of the CAS system to a laser ablation-catalytic combustion (LA-CC) micro-sampler system for selectively harvesting detailed sections of a solid surface for 13C analysis. This technique yields a three-order-of-magnitude sensitivity improvement for our isotope measurement system compared to typical IRMS, providing new opportunities for detailed investigations into a wide range of microbial, physical, and chemical systems. The CAS is interfaced directly to the LA-CC system, currently operating at 50 μm spatial resolution. We demonstrate that particulates produced by a Nd:YAG laser (λ = 266 nm) are isotopically homogeneous with the parent material, as measured by both IRMS and the CAS system. An improved laser ablation system operating at 193 nm with a spatial resolution of 2 μm or better is under development, which will demonstrate the utility of the CAS system for sample sizes too low for IRMS. The improved sensitivity and optimized spatial targeting of such a system could interrogate targets as detailed as small cell clusters or intergrain organic deposits and could enhance the ability to track biogeochemical carbon cycling.
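The sensitivity figure quoted above (roughly 20 picomoles at ~400 ppm and 2 Torr in a ~0.5 mL capillary) can be checked with the ideal gas law. A back-of-envelope sketch, assuming room temperature:

```python
# Back-of-envelope check of the CAS sensitivity figure: amount of CO2 in a
# ~0.5 mL capillary at 2 Torr total pressure with a 400 ppm CO2 mole
# fraction, via the ideal gas law n = PV / RT (room temperature assumed).
R = 8.314            # J mol^-1 K^-1
TORR_TO_PA = 133.322

def co2_picomoles(pressure_torr, volume_ml, mole_fraction_ppm, temp_k=296.0):
    n_total = pressure_torr * TORR_TO_PA * volume_ml * 1e-6 / (R * temp_k)
    return n_total * mole_fraction_ppm * 1e-6 * 1e12

print(round(co2_picomoles(2.0, 0.5, 400.0)))  # ~22 pmol, near the quoted 20
```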
Maybe Small Is Too Small a Term: Introduction to Advancing Small Sample Prevention Science.
Fok, Carlotta Ching Ting; Henry, David; Allen, James
2015-10-01
Prevention research addressing health disparities often involves work with small population groups experiencing such disparities. The goals of this special section are to (1) address the question of what constitutes a small sample; (2) identify some of the key research design and analytic issues that arise in prevention research with small samples; (3) develop applied, problem-oriented, and methodologically innovative solutions to these design and analytic issues; and (4) evaluate the potential role of these innovative solutions in describing phenomena, testing theory, and evaluating interventions in prevention research. Through these efforts, we hope to promote broader application of these methodological innovations. We also seek, whenever possible, to explore their implications for more general problems that appear in research with small samples but concern all areas of prevention research. This special section comprises two parts: the first provides input for researchers at the design phase, while the second focuses on analysis. Each article describes an innovative solution to one or more challenges posed by the analysis of small samples, with special emphasis on testing for intervention effects in prevention research. A concluding article summarizes some of the broader implications, along with conclusions regarding future directions in research with small samples in prevention science. Finally, a commentary provides the perspective of the federal agencies that sponsored the conference that gave rise to this special section.
Automatic rock detection for in situ spectroscopy applications on Mars
NASA Astrophysics Data System (ADS)
Mahapatra, Pooja; Foing, Bernard H.
A novel algorithm for rock detection has been developed for effectively utilising Mars rovers, and enabling autonomous selection of target rocks that require close-contact spectroscopic measurements. The algorithm demarcates small rocks in terrain images as seen by cameras on a Mars rover during traverse. This information may be used by the rover for selection of geologically relevant sample rocks, and (in conjunction with a rangefinder) to pick up target samples using a robotic arm for automatic in situ determination of rock composition and mineralogy using, for example, a Raman spectrometer. Determining rock samples within the region that are of specific interest without physically approaching them significantly reduces time, power and risk. Input images in colour are converted to greyscale for intensity analysis. Bilateral filtering is used for texture removal while preserving rock boundaries. Unsharp masking is used for contrast enhancement. Sharp contrasts in intensities are detected using Canny edge detection, with thresholds that are calculated from the image obtained after contrast-limited adaptive histogram equalisation of the unsharp masked image. Scale-space representations are then generated by convolving this image with a Gaussian kernel. A scale-invariant blob detector (Laplacian of the Gaussian, LoG) detects blobs independently of their sizes, and therefore requires a multi-scale approach with automatic scale selection. The scale-space blob detector consists of convolution of the Canny edge-detected image with a scale-normalised LoG at several scales, and finding the maxima of squared LoG response in scale-space. After the extraction of local intensity extrema, the intensity profiles along rays going out of the local extremum are investigated. An ellipse is fitted to the region determined by significant changes in the intensity profiles. 
The fitted ellipses are overlaid on the original Mars terrain image for a visual estimation of the rock detection accuracy, and the number of ellipses is counted. Since geometry and illumination have the least effect on small rocks, the proposed algorithm is effective in detecting small rocks (or bigger rocks at larger distances from the camera) that occupy only a small fraction of the image pixels. Acknowledgements: The first author would like to express her gratitude to the European Space Agency (ESA/ESTEC) and the International Lunar Exploration Working Group (ILEWG) for their support of this work.
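The scale-selection step of the blob detector can be illustrated with a minimal pure-Python sketch. The synthetic disc image, the tested scales, and the σ√2 radius relation are illustrative assumptions, not the authors' implementation:

```python
import math

def make_disc_image(size, cx, cy, radius):
    """Synthetic greyscale frame: one bright disc (a 'rock') on dark ground."""
    return [[1.0 if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2 else 0.0
             for x in range(size)] for y in range(size)]

def log_response(img, cx, cy, sigma):
    """Scale-normalised Laplacian-of-Gaussian response at pixel (cx, cy):
    sum of image * sigma^2 * LoG kernel over a 3-sigma window."""
    rad = int(3 * sigma)
    resp = 0.0
    for dy in range(-rad, rad + 1):
        for dx in range(-rad, rad + 1):
            x, y = cx + dx, cy + dy
            if 0 <= x < len(img[0]) and 0 <= y < len(img):
                r2 = dx * dx + dy * dy
                gauss = math.exp(-r2 / (2 * sigma ** 2)) / (2 * math.pi * sigma ** 2)
                resp += img[y][x] * (r2 / sigma ** 2 - 2.0) * gauss
    return resp

img = make_disc_image(41, 20, 20, 6)              # rock of radius 6 px
scales = [2.0, 3.0, 4.0, 5.0, 6.0]
# the scale with the strongest |response| selects the blob size:
best = max(scales, key=lambda s: abs(log_response(img, 20, 20, s)))
estimated_radius = best * math.sqrt(2)            # extremum at sigma = R / sqrt(2)
```

For a binary disc of radius R, the scale-normalised LoG response at the centre peaks in magnitude near σ = R/√2, which is why the estimated radius is recovered as σ√2; a full detector would search all pixel positions, not just the known centre.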
Micro-crystallography comes of age.
Smith, Janet L; Fischetti, Robert F; Yamamoto, Masaki
2012-10-01
The latest revolution in macromolecular crystallography was incited by the development of dedicated, user-friendly micro-crystallography beamlines. Brilliant X-ray beams of diameter 20 μm or less, now available at most synchrotron sources, enable structure determination from samples that previously were inaccessible. Relative to traditional crystallography, crystals with one or more small dimensions have diffraction patterns with vastly improved signal-to-noise when recorded with an appropriately matched beam size. Structures can be solved from isolated, well diffracting regions within inhomogeneous samples. This review summarizes the technological requirements and approaches to producing micro-beams and how they continue to change the practice of crystallography. Copyright © 2012 Elsevier Ltd. All rights reserved.
Uncooled emissive infrared imagers for CubeSats
NASA Astrophysics Data System (ADS)
Puschell, Jeffery J.; Masini, Paolo
2014-09-01
Raytheon's fourth generation uncooled microbolometer array technology with digital output, High Definition (HD) 1920 × 1200 format and 12 μm cell size enables uncooled thermal infrared (TIR) multispectral imagers with the sensitivity and spatial sampling needed for a variety of Earth observation missions in LEO, GEO and HEO. A powerful combination of small detector cell size, fast optics and high sensitivity achieved without cryogenic cooling leads to instruments that are much smaller than current TIR systems, while still offering the capability to meet challenging measurement requirements for Earth observation missions. To consider how this technology could be implemented for Earth observation missions, we extend our previous studies with visible wavelength CubeSat imagers for environmental observations from LEO and examine whether small thermal infrared imagers based on fourth generation uncooled technology could be made small enough to fit onboard a 3U CubeSat and still meet challenging requirements for legacy missions. We found that moderate spatial resolution (~200 m) high sensitivity cloud and surface temperature observations meeting legacy MODIS/VIIRS requirements could be collected successfully with CubeSat-sized imagers but that multiple imagers are needed to cover the full swath for these missions. Higher spatial resolution land imagers are more challenging to fit into the CubeSat form factor, but it may be possible to do so for systems that require roughly 100 m spatial resolution. Regardless of whether it can fit into a CubeSat or not, uncooled land imagers meeting candidate TIR requirements can be implemented with a much smaller instrument than previous imagers. Even though this technology appears to be very promising, more work is needed to qualify this newly available uncooled infrared technology for use in space. 
If these new devices prove to be as spaceworthy as the first-generation arrays that Raytheon qualified and built into the THEMIS imager, which is still operating successfully onboard the 2001 Mars Odyssey spacecraft, new classes of low-cost, uncooled TIR Earth instruments will be enabled that are suitable for use as primary and hosted payloads in LEO, GEO and HEO, or in constellations of satellites as small as CubeSats, to support Earth science measurement objectives in weather forecasting, land imaging, and climate variability and change.
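As a rough plausibility check on the ~200 m figure, the usual pinhole relation GSD = H·p/f can be evaluated. The 500 km orbit altitude below is an assumption for illustration, not a value from the abstract:

```python
def ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m):
    """Nadir ground sample distance for a staring imager (small-angle approx.)."""
    return altitude_m * pixel_pitch_m / focal_length_m

def required_focal_length(altitude_m, pixel_pitch_m, gsd_m):
    """Focal length needed to reach a target GSD from a given altitude."""
    return altitude_m * pixel_pitch_m / gsd_m

H = 500e3        # assumed LEO altitude [m]
p = 12e-6        # 12 um detector cell size [m], from the abstract
f = required_focal_length(H, p, 200.0)           # focal length for 200 m GSD
swath = 1920 * ground_sample_distance(H, p, f)   # across-track swath [m]
```

Under these assumptions a 200 m GSD needs only a ~30 mm focal length, and the 1920-pixel HD format spans a ~384 km swath, which is consistent with the abstract's conclusion that several imagers are needed to cover a full MODIS/VIIRS-class swath.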
Fram, Miranda S.; Maurer, Douglas K.; Lico, Michael S.
2005-01-01
Increased pumpage from a basalt aquifer near Fallon, Nevada, has caused its water levels to decline and has induced changes in the quality of water pumped from the basalt. The aquifer is the sole source of water for municipal supply to the city of Fallon, the Naval Air Station Fallon, and the Fallon Paiute-Shoshone Tribe. These changes may be mitigated by storage of surface water in the basalt for subsequent use. Because chlorination of the surface water may be required for storage, the U.S. Geological Survey, in cooperation with the Fallon Paiute-Shoshone Tribe, made laboratory tests using organic-carbon-free water, surface-water, ground-water, and basaltic-rock samples to determine the potential for formation of disinfection by-products. Experiments with water samples only (no rock and no chlorine) indicated no change in dissolved-organic-carbon (DOC) concentrations over a 20-day reaction period, whereas all experiments using rock and water but no chlorine indicated an increase in DOC concentrations. The greatest increase in DOC concentrations for all three water samples occurred in experiments with the rock samples from outcrops on Rattlesnake Hill. Experiments with water only and chlorine yielded a total trihalomethane (THM) concentration of 97.4 µg/L for the ground-water sample and 347 µg/L for the surface-water sample. Experiments with mixtures of water, rocks, and chlorine indicated that reactions with the rock consumed chlorine and released significant amounts of organic carbon from the rock, increasing the DOC concentration in the water. The organic carbon in the rocks likely is associated with the secondary clay minerals that line vesicles and fractures in the rocks. THM concentrations were greatest, from 335 to 909 µg/L, for surface water equilibrated with rock samples from Rattlesnake Hill. However, the concentration of chlorine required to produce these high THM concentrations ranged from 18 to 84 mg/L.
The results of the experiments suggest that the amount of organic carbon released from the rocks during successive cycles of recharge, storage, and recovery of chlorinated surface water may be relatively small. The chlorine demand of the rocks is so large that all of the free chlorine in the entire volume of recharged water likely would be consumed by only a very small volume of the aquifer surrounding an injection well, or beneath an infiltration bed. The majority of the volume of the aquifer filled by the stored water likely would never come in contact with free chlorine, and the increases in concentration of DOC observed in these experiments likely would occur in a very small volume of the stored water. For this reason, increases in concentration of THMs for the entire volume of water stored also likely would be considerably less than those measured in these experiments. To test this hypothesis, additional laboratory experiments using varying levels of chlorination, varying lengths of reaction periods, and repeated cycles of chlorination would be useful. A field experiment made at a small scale in an isolated part of the basalt aquifer would aid in the design of an operational system.
Monitoring, Modeling, and Diagnosis of Alkali-Silica Reaction in Small Concrete Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Vivek; Cai, Guowei; Gribok, Andrei V.
Assessment and management of aging concrete structures in nuclear power plants require a more systematic approach than simple reliance on existing code margins of safety. Structural health monitoring of concrete structures aims to understand the current health condition of a structure based on heterogeneous measurements to produce high-confidence actionable information regarding structural integrity that supports operational and maintenance decisions. This report describes alkali-silica reaction (ASR) degradation mechanisms and factors influencing the ASR. A fully coupled thermo-hydro-mechanical-chemical model developed by Saouma and Perotti, which takes into consideration the effects of stress on the reaction kinetics and anisotropic volumetric expansion, is presented in this report. This model is implemented in the GRIZZLY code based on the Multiphysics Object Oriented Simulation Environment. The implemented model in the GRIZZLY code is used to randomly initiate ASR in 2D and 3D lattices to study the percolation aspects of concrete. The percolation aspects help determine the transport properties of the material and therefore the durability and service life of concrete. This report summarizes the effort to develop small-size concrete samples with embedded glass to mimic ASR. The concrete samples were treated in water and sodium hydroxide solution at elevated temperature to study how ingress of sodium ions and hydroxide ions at elevated temperature impacts concrete samples embedded with glass. A thermal camera was used to monitor the changes in the concrete samples, and the results are summarized.
A machine for haemodialysing very small infants.
Everdell, Nicholas L; Coulthard, Malcolm G; Crosier, Jean; Keir, Michael J
2005-05-01
Babies weighing under 6 kg are difficult to dialyse, especially those as small as 1 kg. Peritoneal dialysis is easier than haemodialysis, but is not always possible, and clears molecules less efficiently. Two factors complicate haemodialysis. First, extracorporeal circuits are large relative to a baby's blood volume, necessitating priming with fresh or modified blood. Second, blood flow from infants' access vessels is disproportionately low (Poiseuille's law), causing inadequate dialysis or clotting within the circuit. These problems are minimised by using single-lumen access, a very small circuit, and a reservoir syringe to separate the sampling and dialyser blood flow rates. Manual operation of such a circuit is tedious, so we developed a computer-controlled, pressure-monitored machine to run it, including adjusting the blood withdrawal rate from poorly sampling lines. We have dialysed four babies weighing 0.8-3.4 kg, with renal failure or metabolic disorders. The circuits did not require priming. Clearances of creatinine, urea, potassium, phosphate and ammonia were mean (SD) 0.54 (0.22) ml/min using one dialyser, and 0.98 (0.22) ml/min using two in parallel. In a 2.4 kg baby, ammonia was cleared with a 9 h half-life. Ultrafiltration of up to 45 ml/h was achieved easily. This device provided infants with immediate, effective and convenient haemodialysis, typically delivered for prolonged periods.
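The reported clearances can be related to solute half-lives through the standard single-compartment relation t½ = ln 2 · V / CL. The 0.6 L/kg distribution volume below is an illustrative assumption, and the infant's own (endogenous) clearance, ignored here, would shorten the observed half-life:

```python
import math

def half_life_hours(clearance_ml_min, distribution_volume_ml):
    """First-order removal of a solute by dialysis: t1/2 = ln2 * V / CL."""
    return math.log(2) * distribution_volume_ml / clearance_ml_min / 60.0

# Hypothetical 2.4 kg infant; total body water ~0.6 L/kg is an assumption.
V = 2.4 * 0.6 * 1000.0                 # ~1440 ml distribution volume
t_single = half_life_hours(0.54, V)    # one dialyser (mean clearance)
t_double = half_life_hours(0.98, V)    # two dialysers in parallel
```

With these assumptions, two parallel dialysers alone give a half-life of roughly 17 h, longer than the observed 9 h for ammonia, which is consistent with metabolism contributing additional clearance on top of the circuit.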
[Sample preparation and bioanalysis in mass spectrometry].
Bourgogne, Emmanuel; Wagner, Michel
2015-01-01
The quantitative analysis of compounds of clinical interest of low molecular weight (<1000 Da) in biological fluids is currently, in most cases, performed by liquid chromatography-mass spectrometry (LC-MS). Analysis of these compounds in biological fluids (plasma, urine, saliva, hair...) is a difficult task requiring sample preparation. Sample preparation is a crucial part of chemical/biological analysis and is, in a sense, considered the bottleneck of the whole analytical process. The main objectives of sample preparation are the removal of potential interferences, analyte preconcentration, and conversion (if needed) of the analyte into a more suitable form for detection or separation. Without chromatographic separation, endogenous compounds and co-eluting products may degrade the performance of a quantitative mass spectrometry method. This work focuses on three distinct parts. First, quantitative bioanalysis is defined, along with the different matrices and sample preparation techniques currently used in mass spectrometry bioanalysis of small molecules of clinical interest in biological fluids. Second, the goals of sample preparation are described. Finally, sample preparation strategies are discussed, whether the sample is injected directly ("dilute and shoot") or after precipitation.
NASA Astrophysics Data System (ADS)
Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin
2017-06-01
Monte Carlo simulation (MCS) is a useful tool for computation of the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm, employing a combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a proposed two-step rule for updating the design point. This part finishes after a small number of samples are generated. Then RSM starts to work using Bucher's experimental design, with the last design point as the center point and a proposed effective length as the radius of Bucher's approach. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules are shown.
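The importance-sampling half of such an algorithm can be sketched for the simplest possible limit state, g(x) = β − x with standard normal x, where the design point is at x = β and the exact failure probability is Φ(−β). The two-step design-point update and the RSM stage of the paper are omitted; this is only the core estimator:

```python
import math, random

def pf_importance_sampling(beta, n=20000, seed=1):
    """Probability of failure for limit state g(x) = beta - x, x ~ N(0,1),
    using an importance density N(beta, 1) centred at the design point."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(beta, 1.0)          # sample from importance density h
        if beta - x <= 0.0:               # failure indicator g(x) <= 0
            # likelihood ratio f(x)/h(x) for standard normal f:
            total += math.exp((beta ** 2 - 2.0 * beta * x) / 2.0)
    return total / n

beta = 3.0
pf_hat = pf_importance_sampling(beta)
pf_true = 0.5 * math.erfc(beta / math.sqrt(2.0))   # exact Phi(-beta)
```

Because roughly half the importance samples fall in the failure region, far fewer samples are needed than with crude Monte Carlo, where only about one in 740 samples would fail at β = 3.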
Cao, Yupin; Deng, Biyang; Yan, Lizhen; Huang, Hongli
2017-05-15
An environmentally friendly and highly efficient gas pressure-assisted sample introduction system (GPASIS) was developed for inductively coupled plasma mass spectrometry (ICP-MS). A GPASIS consisting of a gas-pressure control device, a customized nebulizer, and a custom-made spray chamber was fabricated. The advantages of this GPASIS derive from its high nebulization efficiencies, small sample volume requirements, low memory effects, good precision, and zero waste emission. A GPASIS can continuously and stably nebulize 10% NaCl solution for more than an hour without clogging. Sensitivity, detection limits, precision, long-term stability, double charge and oxide ion levels, nebulization efficiencies, and matrix effects of the sample introduction system were evaluated. Experimental results indicated that the performance of this GPASIS was equivalent to, or better than, that obtained by conventional sample introduction systems. This GPASIS was successfully used to determine Cd and Pb in human plasma by ICP-MS. Copyright © 2017 Elsevier B.V. All rights reserved.
Risk-Based Sampling: I Don't Want to Weight in Vain.
Powell, Mark R
2015-12-01
Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
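The estimation-error effect can be illustrated with a toy two-asset minimum-variance portfolio: even when the true optimum coincides exactly with the equal-allocation heuristic, weights estimated from short windows scatter widely. All numbers here are illustrative, not taken from the paper's simulation model:

```python
import random, statistics

def min_variance_weight(var1, var2, cov):
    """Weight on asset 1 of the two-asset minimum-variance portfolio."""
    return (var2 - cov) / (var1 + var2 - 2.0 * cov)

# True (unknown) parameters: identical, uncorrelated assets, so the optimal
# weight is exactly 0.5 and equal allocation is the right answer.
rng = random.Random(7)
weights = []
for _ in range(200):
    # estimate each variance from a short window of 12 observations
    a = [rng.gauss(0.0, 1.0) for _ in range(12)]
    b = [rng.gauss(0.0, 1.0) for _ in range(12)]
    weights.append(min_variance_weight(statistics.variance(a),
                                       statistics.variance(b), 0.0))

spread = statistics.pstdev(weights)  # instability from estimation error alone
```

The "optimized" weights are unbiased around 0.5 but fluctuate substantially from window to window, which is the instability that makes simple heuristics competitive out of sample.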
X-linked dominant protoporphyria: The first reported Japanese case.
Ninomiya, Yukiko; Kokunai, Yasuhito; Tanizaki, Hideaki; Akasaka, Eijiro; Nakano, Hajime; Moriwaki, Shinichi
2016-04-01
A 12-year-old boy with photosensitivity since 3 years of age presented with small concavities on both cheeks, the nasal root and the dorsal surface of both hands. Based on the clinical features, erythropoietic protoporphyria (EPP) was suspected. Urine and blood samples were tested for porphyrin derivatives, which revealed a markedly elevated level of erythrocyte protoporphyrin (EP), and a diagnosis of EPP was made. The patient's mother had no photosensitivity; however, faint lesions resembling small scars were found on the dorsum of her right hand. His elder sister and father showed no rash. The EP levels were elevated in samples from his mother and mildly elevated in those from his elder sister and father. To obtain a definitive diagnosis, genetic analyses were performed using samples from all family members, which revealed no mutations in the ferrochelatase-encoding gene (FECH), which is responsible for EPP. Instead, a pathological mutation of the 5-aminolevulinic acid synthase-encoding gene (ALAS2) was identified in samples from the patient, his mother and his elder sister, confirming a definitive diagnosis of X-linked dominant protoporphyria (XLDPP). This is the first Japanese family reported to have XLDPP, demonstrating evidence of the condition in Japan. In addition, because XLDPP is very similar to EPP in its clinical aspects and laboratory findings, genetic analysis is required for the differential diagnosis. © 2015 Japanese Dermatological Association.
Refined elasticity sampling for Monte Carlo-based identification of stabilizing network patterns.
Childs, Dorothee; Grimbs, Sergio; Selbig, Joachim
2015-06-15
Structural kinetic modelling (SKM) is a framework to analyse whether a metabolic steady state remains stable under perturbation, without requiring detailed knowledge about individual rate equations. It provides a representation of the system's Jacobian matrix that depends solely on the network structure, steady state measurements, and the elasticities at the steady state. For a measured steady state, stability criteria can be derived by generating a large number of SKMs with randomly sampled elasticities and evaluating the resulting Jacobian matrices. The elasticity space can be analysed statistically in order to detect network positions that contribute significantly to the perturbation response. Here, we extend this approach by examining the kinetic feasibility of the elasticity combinations created during Monte Carlo sampling. Using a set of small example systems, we show that the majority of sampled SKMs would yield negative kinetic parameters if they were translated back into kinetic models. To overcome this problem, a simple criterion is formulated that excludes such infeasible models. After evaluating the small example pathways, the methodology was used to study two steady states of the neuronal TCA cycle and the intrinsic mechanisms responsible for their stability or instability. The findings of the statistical elasticity analysis confirm that several elasticities are jointly coordinated to control stability and that the main sources of potential instability are mutations in the enzyme alpha-ketoglutarate dehydrogenase. © The Author 2015. Published by Oxford University Press.
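The sampling idea can be sketched on a hypothetical two-metabolite pathway → S1 → S2 → whose SKM-style Jacobian depends on three sampled elasticities. The pathway, its activation loop, and the sampling ranges are invented for illustration and are not taken from the paper:

```python
import random

def jacobian(e21, e32, a):
    """SKM-style Jacobian of a toy pathway -> S1 -> S2 -> in which the second
    reaction is activated by its own product S2 with elasticity a
    (a hypothetical positive-feedback loop)."""
    return [[-e21, -a],
            [ e21, a - e32]]

def is_stable(J):
    """A 2x2 steady state is asymptotically stable iff trace < 0 and det > 0
    (both eigenvalues then have negative real parts)."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return tr < 0.0 and det > 0.0

rng = random.Random(42)
n = 10000
stable = sum(is_stable(jacobian(rng.random(), rng.random(), 2.0 * rng.random()))
             for _ in range(n))
fraction_stable = stable / n
```

For this toy system the determinant is always positive, so stability reduces to a < e21 + e32: the steady state is lost exactly when the feedback elasticity outweighs the two consumption elasticities, and the Monte Carlo fraction of stable models quantifies how likely that is under the sampling ranges.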
Goodwin, Richard J A; Nilsson, Anna; Borg, Daniel; Langridge-Smith, Pat R R; Harrison, David J; Mackay, C Logan; Iverson, Suzanne L; Andrén, Per E
2012-08-30
Analysis of whole animal tissue sections by MALDI MS imaging (MSI) requires effective sample collection and transfer methods to allow the highest quality of in situ analysis of small or hard-to-dissect tissues. We report on the use of double-sided adhesive conductive carbon tape during whole adult rat tissue sectioning of carboxymethyl cellulose (CMC) embedded animals, with samples mounted onto large-format conductive glass and conductive plastic MALDI targets, enabling MSI analysis to be performed on both TOF and FT-ICR MALDI mass spectrometers. We show that mounting does not unduly affect small-molecule MSI detection by analyzing tiotropium abundance and distribution in rat lung tissues, with direct on-tissue quantitation achieved. Significantly, we use the adhesive tape to provide support to embedded delicate heat-stabilized tissues, enabling sectioning and mounting that maintained tissue integrity on samples from which it had previously been impossible to prepare adequate sections for MSI analysis. The mapping of larger peptidomic molecules was not hindered by tape-mounting samples, and we demonstrate this by mapping the distribution of PEP-19 in both native and heat-stabilized rat brains. Furthermore, we show that without heat stabilization PEP-19 degradation fragments can be detected and identified directly by MALDI MSI analysis. Copyright © 2012 Elsevier B.V. All rights reserved.
Park, Yang-Seok; Sunkara, Vijaya; Kim, Yubin; Lee, Won Seok; Han, Ja-Ryoung; Cho, Yoon-Kyoung
2016-04-16
Enzyme-linked immunosorbent assay (ELISA) is a promising method to detect small amounts of protein in biological samples. Devices providing a platform for reduced sample volume and assay time, as well as full automation, are required for potential use in point-of-care diagnostics. Recently, we have demonstrated ultrasensitive detection of serum proteins, C-reactive protein (CRP) and cardiac troponin I (cTnI), utilizing a lab-on-a-disc composed of TiO2 nanofibrous (NF) mats. It showed a large dynamic range with femtomolar (fM) detection sensitivity from a small volume of whole blood in 30 min. The device consists of several components for blood separation, metering, mixing, and washing that are automated for improved sensitivity from low sample volumes. Here, in the video demonstration, we show the experimental protocols and know-how for the fabrication of the NFs as well as the disc, their integration, and the operation, in the following order: processes for preparing the TiO2 NF mat; transfer-printing of the TiO2 NF mat onto the disc; surface modification for immunoreactions; disc assembly and operation; and on-disc detection and representative results for immunoassay. Use of this device enables multiplexed analysis with minimal consumption of samples and reagents. Given these advantages, the device should find use in a wide variety of applications and prove beneficial in facilitating the analysis of low-abundance proteins.
Sampling Mars: Analytical requirements and work to do in advance
NASA Technical Reports Server (NTRS)
Koeberl, Christian
1988-01-01
Sending a mission to Mars to collect samples and return them to the Earth for analysis is without doubt one of the most exciting and important tasks for planetary science in the near future. Many scientifically important questions are associated with knowledge of the composition and structure of Martian samples. Amongst the most exciting questions is the clarification of the SNC problem: to prove or disprove a possible Martian origin of these meteorites. Since SNC meteorites have been used to infer the chemistry of the planet Mars and its evolution (including the accretion history), it would be important to know if the whole story is true. But before addressing possible scientific results, we have to deal with the analytical requirements and with possible pre-return work. It is unrealistic to expect that a Mars sample return mission will bring back anything close to the amount returned by the Apollo missions. It will be more like the amount returned by the Luna missions, or at least in that order of magnitude. This requires very careful sample selection and very precise analytical techniques. These techniques should be able to use minimal sample sizes while optimizing the scientific output. The ability to work with extremely small samples should not obscure another problem: possible sampling errors. As we know from terrestrial geochemical studies, sampling procedures are quite complicated and must be elaborate to avoid sampling errors. The significance of analyzing a milligram- or submilligram-sized sample and relating it to the genesis of whole planetary crusts has to be viewed with care. This leaves a dilemma: on one hand, to minimize the sample size as far as possible in order to be able to return as many different samples as possible, and on the other hand, to take a sample large enough to be representative.
Whole rock samples are very useful, but should not exceed the 20 to 50 g range, except in cases of extreme inhomogeneity, because for larger samples the information tends to become redundant. Soil samples should be in the 2 to 10 g range, permitting the splitting of the returned samples for studies in different laboratories with variety of techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fong, Erika J.; Huang, Chao; Hamilton, Julie
Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include a modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.
Kraemer, D; Chen, G
2014-02-01
Accurate measurements of thermal conductivity are of great importance for materials research and development. Steady-state methods determine thermal conductivity directly from the proportionality between heat flow and an applied temperature difference (Fourier's law). Although theoretically simple, in practice, achieving high accuracies with steady-state methods is challenging and requires rather complex experimental setups due to temperature sensor uncertainties and parasitic heat loss. We developed a simple differential steady-state method in which the sample is mounted between an electric heater and a temperature-controlled heat sink. Our method calibrates for parasitic heat losses from the electric heater during the measurement by maintaining a constant heater temperature close to the environmental temperature while varying the heat sink temperature. This enables a large signal-to-noise ratio which permits accurate measurements of samples with small thermal conductance values without an additional heater calibration measurement or sophisticated heater guards to eliminate parasitic heater losses. Additionally, the differential nature of the method largely eliminates the uncertainties of the temperature sensors, permitting measurements with small temperature differences, which is advantageous for samples with high thermal conductance values and/or with strongly temperature-dependent thermal conductivities. In order to accelerate measurements of more than one sample, the proposed method allows for measuring several samples consecutively at each temperature measurement point without adding significant error. We demonstrate the method by performing thermal conductivity measurements on commercial bulk thermoelectric Bi2Te3 samples in the temperature range of 30-150 °C with an error below 3%.
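The cancellation of parasitic heater losses can be written down directly: with the heater held at a fixed temperature, the loss term is the same in both steady states and drops out of the power difference, leaving Fourier's law applied to the differences. The sample dimensions and power readings below are illustrative values, not measurements from the paper:

```python
def thermal_conductivity(p1_w, p2_w, dT1_K, dT2_K, length_m, area_m2):
    """Differential steady-state estimate. Each measured heater power is
    P_i = k*A*dT_i/L + P_loss; a fixed heater temperature makes P_loss
    identical in both states, so k = (P2 - P1)*L / (A*(dT2 - dT1))."""
    return (p2_w - p1_w) * length_m / (area_m2 * (dT2_K - dT1_K))

# Hypothetical Bi2Te3-like sample: 2 mm thick, 4 mm x 4 mm cross-section.
L, A = 2e-3, 16e-6
# Heater powers read at two heat-sink set-points (illustrative numbers):
k = thermal_conductivity(p1_w=0.0120, p2_w=0.0240, dT1_K=1.0, dT2_K=2.0,
                         length_m=L, area_m2=A)
```

With these numbers the estimate is 1.5 W m⁻¹ K⁻¹, a plausible order of magnitude for bulk Bi2Te3; the key point is that the unknown loss term never had to be measured.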
A new small-angle X-ray scattering set-up on the crystallography beamline I711 at MAX-lab.
Knaapila, M; Svensson, C; Barauskas, J; Zackrisson, M; Nielsen, S S; Toft, K N; Vestergaard, B; Arleth, L; Olsson, U; Pedersen, J S; Cerenius, Y
2009-07-01
A small-angle X-ray scattering (SAXS) set-up has recently been developed at beamline I711 at the MAX II storage ring in Lund (Sweden). An overview of the required modifications is presented here together with a number of application examples. The accessible q range in a SAXS experiment is 0.009-0.3 Å⁻¹ for the standard set-up but depends on the sample-to-detector distance, detector offset, beamstop size and wavelength. The SAXS camera has been designed to have a low background and has three collinear slit sets for collimating the incident beam. The standard beam size is about 0.37 mm × 0.37 mm (full width at half-maximum) at the sample position, with a flux of 4 × 10¹⁰ photons s⁻¹ and λ = 1.1 Å. The vacuum is of the order of 0.05 mbar in the unbroken beam path from the first slits until the exit window in front of the detector. A large sample chamber with a number of lead-throughs allows different sample environments to be mounted. This station is used for measurements on weakly scattering proteins in solutions and also for colloids, polymers and other nanoscale structures. A special application supported by the beamline is the effort to establish a micro-fluidic sample environment for structural analysis of samples that are only available in limited quantities. Overall, this work demonstrates how a cost-effective SAXS station can be constructed on a multipurpose beamline.
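The accessible q range follows from the camera geometry via q = 4π sin θ/λ with 2θ = arctan(r/D). The beamstop radius and sample-to-detector distance below are assumed values chosen only to roughly reproduce the quoted range, not the actual I711 geometry:

```python
import math

def q_inv_angstrom(radius_mm, distance_mm, wavelength_A):
    """Momentum transfer q = 4*pi*sin(theta)/lambda at a detector point a
    radial distance r from the direct beam, with scattering angle
    2*theta = atan(r/D)."""
    theta = 0.5 * math.atan(radius_mm / distance_mm)
    return 4.0 * math.pi * math.sin(theta) / wavelength_A

wl = 1.1                                     # wavelength in angstrom
q_min = q_inv_angstrom(3.0, 1500.0, wl)      # assumed 3 mm beamstop edge, D = 1.5 m
q_max = q_inv_angstrom(80.0, 1500.0, wl)     # assumed 80 mm off-axis detector edge
```

This makes concrete why the quoted 0.009-0.3 Å⁻¹ range shifts with sample-to-detector distance, detector offset, beamstop size and wavelength: each one enters the geometry directly.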
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walls, P
Sixteen of the twenty-one samples have been investigated using the scanning laser dilatometer. This includes all three types of samples with different preparation routes and organic content. Cracks were observed in all samples, even those only heated to 300 °C. It was concluded that the cracking was occurring in the early part of the heat treatment, before the samples reached 300 °C. An increase in the rate of dilation of the samples occurred above 170 °C, which coincided with the decomposition of the binder/wax additives as determined by differential thermal analysis. A comparison was made with SYNROC C material (Powder Run 143), samples of which had been CIPed and green machined to a similar diameter and thickness as the 089mm SRTC pucks. These samples contained neither binder nor other organic processing aids and had been kept in the same desiccator as the SRTC samples. The CIPed Synroc C samples sintered to high density with no cracks. As the cracks made up only a small contribution to the change in diameter of the sample compared to the sintering shrinkage, useful information could still be gained from the runs. The sintering curves showed that there was much greater shrinkage of the Type III samples, containing only the 5% PEG binder, compared to the Type I, which contained polyolefin wax as a processing aid. Slight changes in the gradient of the sintering curve were observed; however, due to the masking effect of the cracking, a full analysis of the sintering kinetics could not be conducted. Even heating the samples to 300 °C at 1.0 or 0.5 °C/min could not prevent crack formation. This indicated that heating rate was not the critical parameter causing cracking of the samples. Sectioning of green bodies revealed the inhomogeneous nature of the binder/lubricant distribution in the samples. Increased homogeneity would reduce the amount of binder/lubricant required, which should, in turn, reduce the degree of cracking observed during heating to the binder burnout temperature.
A combination of (1) a higher forming pressure, (2) a reduced organics content, and (3) improved distribution of the organic wax and binder components throughout the green body could possibly alleviate cracking. Ultrasonic emulsification of the binder and wax with a small quantity of water prior to adding them to the ball or attrition mill is advised to ensure a more even distribution of the wax/binder system. This would also reduce the proportion of organic additives required. The binder burnout stage of the operation must first be optimized (i.e. production of pucks with no cracks) prior to optimization of the sintering stage.
Spatial Sampling of Weather Data for Regional Crop Yield Simulations
NASA Technical Reports Server (NTRS)
Van Bussel, Lenny G. J.; Ewert, Frank; Zhao, Gang; Hoffmann, Holger; Enders, Andreas; Wallach, Daniel; Asseng, Senthold; Baigorria, Guillermo A.; Basso, Bruno; Biernath, Christian;
2016-01-01
Field-scale crop models are increasingly applied at spatio-temporal scales that range from regions to the globe and from decades up to 100 years. Sufficiently detailed data to capture the prevailing spatio-temporal heterogeneity in weather, soil, and management conditions, as needed by crop models, are rarely available. Effective sampling may overcome the problem of missing data but has rarely been investigated. In this study the effect of sampling weather data was evaluated for simulating yields of winter wheat in a region of Germany over a 30-year period (1982-2011) using 12 process-based crop models. A stratified sampling was applied to compare the effect of different sizes of spatially sampled weather data (10, 30, 50, 100, 500, 1000 and full coverage of 34,078 sampling points) on simulated wheat yields. Stratified sampling was further compared with random sampling. Possible interactions between sample size and crop model were evaluated. The results showed differences in simulated yields among crop models, but all models reproduced the stratification pattern well. Importantly, the regional mean of simulated yields based on full coverage could already be reproduced by a small sample of 10 points. This was also true for reproducing the temporal variability in simulated yields, but more sampling points (about 100) were required to accurately reproduce spatial yield variability. The number of sampling points can be smaller when a stratified sampling is applied as compared to a random sampling. However, differences between crop models were observed, including some interaction between the effect of sampling on simulated yields and the model used. We concluded that stratified sampling can considerably reduce the number of required simulations. However, differences between crop models must be considered, as the choice of a specific model can have larger effects on simulated yields than the sampling strategy.
Assessing the impact of sampling soil and crop management data for regional simulations of crop yields is still needed.
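The core result, that a small stratified sample reproduces the full-coverage regional mean much more tightly than a random sample of the same size, can be sketched numerically. The yield field, strata, and sample sizes below are synthetic stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the ~34,000 weather/yield points: three
# spatial strata with different mean simulated yields (t/ha).
strata_means = [5.5, 7.0, 8.5]
strata = [rng.normal(m, 0.4, size=10000) for m in strata_means]
yields = np.concatenate(strata)
true_mean = yields.mean()

def random_sample_mean(n):
    # Simple random sample of n points from the whole region.
    return rng.choice(yields, size=n, replace=False).mean()

def stratified_sample_mean(n):
    # Equal allocation across strata, then weight by stratum size.
    per = n // len(strata)
    means = [rng.choice(s, size=per, replace=False).mean() for s in strata]
    weights = [len(s) / len(yields) for s in strata]
    return float(np.dot(weights, means))

# Repeat the experiment to compare estimator spread at n = 9 points.
rand_err = np.std([random_sample_mean(9) - true_mean for _ in range(500)])
strat_err = np.std([stratified_sample_mean(9) - true_mean for _ in range(500)])
print(rand_err, strat_err)  # stratified spread is markedly smaller
```

The gain comes from removing the between-strata component of the sampling variance, which is why a handful of stratified points can already track the regional mean.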
Dynamic measurements of CO diffusing capacity using discrete samples of alveolar gas.
Graham, B L; Mink, J T; Cotton, D J
1983-01-01
It has been shown that measurements of the diffusing capacity of the lung for CO made during a slow exhalation [DLCO(exhaled)] yield information about the distribution of the diffusing capacity in the lung that is not available from the commonly measured single-breath diffusing capacity [DLCO(SB)]. Current techniques of measuring DLCO(exhaled) require the use of a rapid-responding (less than 240 ms, 10-90%) CO meter to measure the CO concentration in the exhaled gas continuously during exhalation. DLCO(exhaled) is then calculated using two sample points in the CO signal. Because DLCO(exhaled) calculations are highly affected by small amounts of noise in the CO signal, filtering techniques have been used to reduce noise. However, these techniques reduce the response time of the system and may introduce other errors into the signal. We have developed an alternate technique in which DLCO(exhaled) can be calculated using the concentration of CO in large discrete samples of the exhaled gas, thus eliminating the requirement of a rapid response time in the CO analyzer. We show theoretically that this method is as accurate as other DLCO(exhaled) methods but is less affected by noise. These findings are verified in comparisons of the discrete-sample method of calculating DLCO(exhaled) to point-sample methods in normal subjects, patients with emphysema, and patients with asthma.
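The noise argument can be illustrated with a toy model: an index built from the log-ratio of two point concentrations inherits the meter noise directly, while the same index built from wide averaged windows (mimicking large discrete gas samples) suppresses it by roughly the square root of the window length. The exponential CO signal and noise level below are assumptions for illustration, not the authors' equations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Exponentially falling CO concentration during a slow exhalation
# (arbitrary model), plus additive meter noise.
t = np.linspace(0, 1, 1000)
co = np.exp(-1.2 * t)
true_index = np.log(co[100] / co[900])   # DLCO-like log-ratio index

def point_sample_index(noise):
    # Two single-point samples, as in point-sample DLCO(exhaled) methods.
    s = co + noise
    return np.log(s[100] / s[900])

def discrete_sample_index(noise):
    # Average over wide windows, mimicking large discrete gas samples.
    s = co + noise
    early = s[50:150].mean()
    late = s[850:950].mean()
    return np.log(early / late)

errs_point, errs_disc = [], []
for _ in range(300):
    noise = rng.normal(0, 0.02, size=t.size)
    errs_point.append(point_sample_index(noise) - true_index)
    errs_disc.append(discrete_sample_index(noise) - true_index)
print(np.std(errs_point), np.std(errs_disc))  # discrete spread is far smaller
```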
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...
2017-12-27
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency of the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions of different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method to two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
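A minimal 1-D sketch of the TEAD idea follows: the hybrid score mixes a distance-to-nearest-sample term (exploration) with the discrepancy between the surrogate and a first-order Taylor estimate from the nearest sample (exploitation). The toy function, linear-interpolation surrogate, weight w, and grid sizes are illustrative assumptions, not the published formulation:

```python
import numpy as np

def f(x):
    # Toy 1-D "expensive model" standing in for a groundwater simulator.
    return np.sin(3 * x) + 0.5 * x

def tead_select(xs, ys, cands, w=0.5):
    """Pick the next sample point via a TEAD-style hybrid score (sketch).

    score = w * (normalized distance to nearest sample)       # exploration
          + (1 - w) * |surrogate - first-order Taylor value|  # exploitation
    """
    grads = np.gradient(ys, xs)                   # cheap gradient estimate
    surr = np.interp(cands, xs, ys)               # stand-in surrogate model
    idx = np.abs(cands[:, None] - xs[None, :]).argmin(axis=1)
    dist = np.abs(cands - xs[idx])
    taylor = ys[idx] + grads[idx] * (cands - xs[idx])
    score = w * dist / dist.max() + (1 - w) * np.abs(surr - taylor)
    return cands[np.argmax(score)]

grid = np.linspace(0.0, 2.0, 401)                 # dense grid for checking
xs = np.linspace(0.0, 2.0, 5)                     # small initial design
ys = f(xs)
err0 = np.max(np.abs(np.interp(grid, xs, ys) - f(grid)))

cands = np.linspace(0.0, 2.0, 201)
for _ in range(10):                               # adaptive refinement loop
    x_new = tead_select(xs, ys, cands)
    order = np.argsort(np.append(xs, x_new))
    xs, ys = np.append(xs, x_new)[order], np.append(ys, f(x_new))[order]

err = np.max(np.abs(np.interp(grid, xs, ys) - f(grid)))
print(err0, err)  # surrogate error before vs. after adaptive sampling
```

At an existing sample both score terms vanish, so the search naturally avoids resampling; a stopping criterion would replace the fixed loop count in a full implementation.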
Tailoring Laser Propulsion for Future Applications in Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eckel, Hans-Albert; Scharring, Stefan
Pulsed laser propulsion may turn out to be a low-cost alternative for the transportation of small payloads in the future. In recent years DLR investigated this technology with the goal of cheaply launching small satellites into low Earth orbit (LEO) with payload masses on the order of 5 to 10 kg. Since the required high-power pulsed laser sources are not yet on the horizon, DLR focused on new applications based on available laser technology. Space-borne, i.e. in weightlessness, there exists a wide range of missions requiring small thrusters that can be propelled by laser power. This covers space logistics and sample return missions as well as position keeping and attitude control of satellites. First, a report on the proof of concept of a remote-controlled laser rocket with a thrust vector steering device integrated in a parabolic nozzle will be given. Second, the road from the previous ground-based flight experiments in Earth's gravity using a 100-J class laser to flight experiments with a parabolic thruster in an artificial 2D zero gravity on an air cushion table employing a 1-J class laser and, with even less energy, new investigations in the field of laser micro propulsion will be reviewed.
A fast method for detecting Cryptosporidium parvum oocysts in real world samples
NASA Astrophysics Data System (ADS)
Stewart, Shona; McClelland, Lindy; Maier, John
2005-04-01
Contamination of drinking water with pathogenic microorganisms such as Cryptosporidium has become an increasing concern in recent years. Cryptosporidium oocysts are particularly problematic, as infections caused by this organism can be life threatening in immunocompromised patients. Current methods for monitoring and analyzing water are often laborious and require experts to conduct. In addition, many of the techniques require very specific reagents to be employed. These factors add considerable cost and time to the analytical process. Raman spectroscopy provides specific molecular information on samples, and offers advantages of speed, sensitivity and low cost over current methods of water monitoring. Raman spectroscopy is an optical method that has demonstrated the capability to identify and differentiate microorganisms at the species and strain levels. In addition, this technique has exhibited sensitivities down to the single organism detection limit. We have employed Raman spectroscopy and Raman Chemical Imaging, in conjunction with chemometric techniques, to detect small numbers of oocysts in the presence of interferents derived from real-world water samples. Our investigations have also indicated that Raman Chemical Imaging may provide chemical and physiological information about an oocyst sample which complements information provided by the traditional methods. This work provides evidence that Raman imaging is a useful technique for consideration in the water quality industry.
Blanco-Claraco, José Luis; López-Martínez, Javier; Torres-Moreno, José Luis; Giménez-Fernández, Antonio
2015-01-01
Most experimental fields of science and engineering require the use of data acquisition systems (DAQ), devices in charge of sampling and converting electrical signals into digital data and, typically, performing all of the required signal preconditioning. Since commercial DAQ systems are normally focused on specific types of sensors and actuators, systems engineers may need to employ mutually-incompatible hardware from different manufacturers in applications demanding heterogeneous inputs and outputs, such as small-signal analog inputs, differential quadrature rotatory encoders or variable current outputs. A common undesirable side effect of heterogeneous DAQ hardware is the lack of an accurate synchronization between samples captured by each device. To solve such a problem with low-cost hardware, we present a novel modular DAQ architecture comprising a base board and a set of interchangeable modules. Our main design goal is the ability to sample all sources at predictable, fixed sampling frequencies, with a reduced synchronization mismatch (<1 μs) between heterogeneous signal sources. We present experiments in the field of mechanical engineering, illustrating vibration spectrum analyses from piezoelectric accelerometers and, as a novelty in these kinds of experiments, the spectrum of quadrature encoder signals. Part of the design and software will be publicly released online. PMID:26516865
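The board described above removes the sample mismatch in hardware; the underlying problem can be shown with a soft, post-hoc analogue in which two channels sampled at different rates and with a clock offset are re-gridded onto one master timeline. The rates, offset, and test signal below are invented for illustration:

```python
import numpy as np

# Two hypothetical DAQ modules sampling the same 5 Hz vibration signal,
# each with its own rate and a small clock offset between them.
t_a = np.arange(0.0, 1.0, 1 / 1000) + 0.0003   # 1 kHz module, +300 us offset
t_b = np.arange(0.0, 1.0, 1 / 800)             # 800 Hz module, no offset
sig = lambda t: np.sin(2 * np.pi * 5 * t)
a, b = sig(t_a), sig(t_b)

# Resample both channels onto one master timeline so samples line up.
t_master = np.arange(0.01, 0.99, 1 / 500)
a_sync = np.interp(t_master, t_a, a)
b_sync = np.interp(t_master, t_b, b)

mismatch = np.max(np.abs(a_sync - b_sync))
print(mismatch)  # small residual once both channels share the master clock
```

Software resampling like this only masks the offset after the fact; the architecture in the paper keeps the offset itself below 1 us at acquisition time, which is what spectrum comparisons across devices actually require.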
78 FR 11263 - Reporting and Recordkeeping Requirements Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-15
... SMALL BUSINESS ADMINISTRATION Reporting and Recordkeeping Requirements Under OMB Review AGENCY: Small Business Administration. ACTION: Notice of reporting requirements submitted for OMB review... notice to: Agency Clearance Officer, Curtis Rich, [email protected] Small Business Administration, 409...
Unsupervised Learning —A Novel Clustering Method for Rolling Bearing Faults Identification
NASA Astrophysics Data System (ADS)
Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang
2017-12-01
To promptly process massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rolling bearings. Among these studies, supervised learning methods such as artificial neural networks, support vector machines, and decision trees are commonly used. These methods can detect the failure of rolling bearings effectively, but achieving better detection results often requires a large number of training samples. Based on the above, a novel clustering method is proposed in this paper. This novel method is able to find the correct number of clusters automatically. The effectiveness of the proposed method is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect the fault types of small samples. Meanwhile, the diagnosis results also have relatively high accuracy even for massive samples.
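Since the abstract does not spell out the algorithm, a generic sketch of "clustering with automatic cluster-count selection" is shown instead: k-means is run for several candidate k and the silhouette coefficient picks the best. The 2-D feature data are synthetic stand-ins for bearing fault features, not the paper's method or data:

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=30, restarts=5):
    # Plain Lloyd's algorithm with random restarts (lowest inertia wins).
    best = None
    for _ in range(restarts):
        centres = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
            centres = np.array([X[labels == j].mean(0) if (labels == j).any()
                                else centres[j] for j in range(k)])
        inertia = ((X - centres[labels]) ** 2).sum()
        if best is None or inertia < best[0]:
            best = (inertia, labels)
    return best[1]

def silhouette(X, labels):
    # Mean silhouette coefficient (simple O(n^2) implementation).
    D = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    n, s = len(X), []
    for i in range(n):
        own = (labels == labels[i]) & (np.arange(n) != i)
        a = D[i, own].mean() if own.any() else 0.0
        b = min(D[i, labels == j].mean() for j in set(labels) if j != labels[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

# Three synthetic, well-separated "fault type" clusters.
X = np.vstack([rng.normal(c, 0.3, size=(40, 2))
               for c in ([0, 0], [3, 0], [0, 3])])
scores = {k: silhouette(X, kmeans(X, k)) for k in range(2, 6)}
best_k = max(scores, key=scores.get)
print(best_k)  # the silhouette criterion recovers the 3 planted clusters
```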
Monitoring xenon purity in the LUX detector with a mass spectrometry system
NASA Astrophysics Data System (ADS)
Balajthy, Jon; LUX Experiment Collaboration
2015-04-01
The LUX dark matter search experiment is a 350 kg two-phase liquid/gas xenon time projection chamber located at the 4850 ft level of the Sanford Underground Research Facility in Lead, SD. To monitor for radioactive impurities such as krypton and impurities which limit charge yield such as oxygen, LUX uses a xenon sampling system consisting of a mass spectrometer and a liquid nitrogen cold trap. The cold trap separates the gaseous impurities from a small sample of xenon and allows them to pass to the mass spectrometer for analysis. We report here on results from the LUX xenon sampling program. We also report on methods to enhance the sensitivity of the cold trap technique in preparation for the next-generation LUX-ZEPLIN experiment which will have even more stringent purity requirements.
The Impact of Surgical Timing in Acute Traumatic Spinal Cord Injury
2016-10-01
However, their study was conducted with a small sample of 63 patients and only cervical T-SCI, and did not account for other possible factors that... contributors are the time of transfer from the site of trauma to the SCI center, the interval between the first medical assessment and surgical plan...requiring surgery will depend upon the availability of the operating rooms and of the spine surgeons, considering the high number of elective cases
NASA Astrophysics Data System (ADS)
Bhattacharyya, Kaustuve; Ke, Chih-Ming; Huang, Guo-Tsai; Chen, Kai-Hsiung; Smilde, Henk-Jan H.; Fuchs, Andreas; Jak, Martin; van Schijndel, Mark; Bozkurt, Murat; van der Schaar, Maurits; Meyer, Steffen; Un, Miranda; Morgan, Stephen; Wu, Jon; Tsai, Vincent; Liang, Frida; den Boef, Arie; ten Berge, Peter; Kubis, Michael; Wang, Cathy; Fouquet, Christophe; Terng, L. G.; Hwang, David; Cheng, Kevin; Gau, TS; Ku, Y. C.
2013-04-01
Aggressive on-product overlay requirements in advanced nodes pose a formidable challenge for the semiconductor industry. This forces the industry to look beyond the traditional way of working and invest in several new technologies. Integrated metrology, in-chip overlay control, advanced sampling and process-correction mechanisms (using the highest order of correction possible with the scanner interface today) are a few such technologies considered in this publication.
2007-03-01
use. At the end of these lines, two high-temperature hoses provide flexibility for connection. ... Figure 28: Heater emergency shutoff ... stainless steel braided Teflon tubing. This enables the probe to be easily positioned as required. The oil temperature is measured by a K-type ... used to cool the emissions probe. The sample gases exit the probe into a 1/4" piece of braided, flexible Teflon tubing to provide for probe
Spacecraft Guidance, Navigation, and Control Visualization Tool
NASA Technical Reports Server (NTRS)
Mandic, Milan; Acikmese, Behcet; Blackmore, Lars
2011-01-01
G-View is a 3D visualization tool for supporting spacecraft guidance, navigation, and control (GN&C) simulations relevant to small-body exploration and sampling (see figure). The tool is developed in MATLAB using Virtual Reality Toolbox and provides users with the ability to visualize the behavior of their simulations, regardless of which programming language (or machine) is used to generate simulation results. The only requirement is that multi-body simulation data is generated and placed in the proper format before applying G-View.
Radon Spectrum and Its Application for Small Moving Target Detection
2015-04-01
cumulative distribution function) starts separating from the exact distribution only at the very end of the upper tail, normally in the P_fa = 10^-6 or... N_total = 1.59 × 10^3 × P_fa^-1 (15) According to (15), for instance, to determine the threshold for P_fa = 10^-6, a total of 1.59 × 10^9 samples shall... is the incomplete Gamma function. Suppose we require a false-alarm rate of P_fa = 10^-6 for the original data; therefore, after non-coherent multi
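Reading Eq. (15) as N_total = 1.59 × 10^3 / P_fa, which is consistent with the 1.59 × 10^9 samples the text quotes for P_fa = 10^-6, the sample-count arithmetic is a one-liner (the constant 1.59e3 and this reading of the garbled equation are assumptions of the sketch):

```python
# Eq. (15) as reconstructed here: samples needed to set a detection
# threshold at false-alarm rate P_fa (constant 1.59e3 taken from the text).
def n_total(p_fa):
    return 1.59e3 / p_fa

print(f"{n_total(1e-6):.3g}")  # 1.59e+09 samples for P_fa = 1e-6
```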
NASA Astrophysics Data System (ADS)
Hill, Kristina M.
Modified imbibition tests were performed on 69 subsurface samples from Monterey Formation reservoirs in the San Joaquin Valley to measure wettability variation as a result of composition and silica phase change. Contact angle tests were also performed on 6 chert samples from outcrop and 3 nearly pure mineral samples. Understanding wettability is important because it is a key factor in reservoir fluid distribution and movement, and its significance rises as porosity and permeability decrease and fluid interactions with reservoir grain surface area increase. Although the low permeability siliceous reservoirs of the Monterey Formation are economically important and prolific, a greater understanding of factors that alter their wettability will help better develop them. Imbibition results revealed a strong trend of decreased wettability to oil with increased detrital content in opal-CT phase samples. Opal-A phase samples exhibited less wettability to oil than both opal-CT and quartz phase samples of similar detrital content. Subsurface reservoir samples from 3 oil fields were crushed to eliminate the effect of capillary pressure and cleansed of hydrocarbons to eliminate wettability alterations by asphaltene, then pressed into discs of controlled density. Powder discs were tested for wettability by dispensing a controlled volume of water and motor oil onto the surface and measuring the time required for each fluid to imbibe into the sample. The syringe and software of a CAM101 tensiometer were used to control the amount of fluid dispensed onto each sample, and imbibition completion times were determined by high-speed photography for water drops; oil drop imbibition was significantly slower and imbibition was timed and determined visually. Contact angle of water and oil drops on polished chert and mineral sample surfaces was determined by image analysis and the Young-Laplace equation. 
Oil imbibition was significantly slower with increased detrital composition and faster with increased silica content in opal-CT and quartz phase samples, implying decreased wettability to oil with increased detrital (clay) content. However, contact angle tests showed that opal-CT is more wetting to oil with increased detritus, and results for oil on quartz-phase samples were inconsistent between different proxies for detritus over their very small compositional range. Water contact angle trends also showed inconsistent wetting trends compared to imbibition tests. We believe this is because the small range in bulk detrital composition between the "pure" samples used in contact angle tests was close to analytical error and because small-scale spatial compositional variability may be significant enough to affect wettability. These experiments show that compositional variables significantly affect wettability, outweighing the effect of silica phase.
Walker, Donald M.; Leys, Jacob E.; Dunham, Kelly E.; Oliver, Joshua C.; Schiller, Emily E.; Stephenson, Kelsey S.; Kimrey, John T.; Wooten, Jessica; Rogers, Mark W.
2017-01-01
Environmental DNA (eDNA) can be used as an assessment tool to detect populations of threatened species and provide fine-scale data required to make management decisions. The objectives of this project were to use quantitative PCR (qPCR) to: (i) detect spiked salamander DNA in soil, (ii) quantify eDNA degradation over time, (iii) determine detectability of salamander eDNA in a terrestrial environment using soil, faeces, and skin swabs, (iv) detect salamander eDNA in a mesocosm experiment. Salamander eDNA was positively detected in 100% of skin swabs and 66% of faecal samples and concentrations did not differ between the two sources. However, eDNA was not detected in soil samples collected from directly underneath wild-caught living salamanders. Salamander genomic DNA (gDNA) was detected in all qPCR reactions when spiked into soil at 10.0, 5.0, and 1.0 ng/g soil and spike concentration had a significant effect on detected concentrations. Only 33% of samples showed recoverable eDNA when spiked with 0.25 ng/g soil, which was the low end of eDNA detection. To determine the rate of eDNA degradation, gDNA (1 ng/g soil) was spiked into soil and quantified over seven days. Salamander eDNA concentrations decreased across days, but eDNA was still amplifiable at day 7. Salamander eDNA was detected in two of 182 mesocosm soil samples over 12 weeks (n = 52 control samples; n = 65 presence samples; n = 65 eviction samples). The discrepancy in detection success between experiments indicates the potential challenges for this method to be used as a monitoring technique for small-bodied wild terrestrial salamander populations.
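The seven-day degradation experiment invites a first-order decay fit. The sketch below uses invented concentrations (the study reports only that levels decreased across days but remained amplifiable at day 7) and estimates a rate constant and half-life from log concentration:

```python
import numpy as np

# Hypothetical qPCR concentrations (ng/g soil) over the 7-day window,
# standing in for the study's real measurements.
days = np.array([0, 1, 2, 3, 5, 7], dtype=float)
conc = np.array([1.0, 0.72, 0.55, 0.41, 0.22, 0.13])

# First-order decay model: C(t) = C0 * exp(-k t); fit on log-concentration.
slope, intercept = np.polyfit(days, np.log(conc), 1)
k = -slope                         # decay rate constant, per day
half_life = np.log(2) / k          # time for eDNA signal to halve
print(round(float(half_life), 2), "days")
```

With these invented numbers the fitted half-life comes out near 2.4 days, which would still leave a detectable signal at day 7, matching the qualitative finding.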
Integrating sphere based reflectance measurements for small-area semiconductor samples
NASA Astrophysics Data System (ADS)
Saylan, S.; Howells, C. T.; Dahlem, M. S.
2018-05-01
This article describes a method that enables reflectance spectroscopy of small semiconductor samples using an integrating sphere, without the use of additional optical elements. We employed an inexpensive sample holder to measure the reflectance of different samples through 2-, 3-, and 4.5-mm-diameter apertures and applied a mathematical formulation to remove the bias from the measured spectra caused by illumination of the holder. Using the proposed method, the reflectance of samples fabricated using expensive or rare materials and/or low-throughput processes can be measured. It can also be used to infer the internal quantum efficiency of small-area, research-level solar cells. Moreover, small samples that reflect light at large angles or scatter it may also be measured reliably, by virtue of the integrating sphere's insensitivity to directionality.
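The bias-removal step can be sketched under a simple linear mixing assumption: the measured spectrum is an area-weighted sum of sample and holder reflectance, so the holder term can be subtracted out. This mixing model and all numbers below are illustrative, not the article's exact formulation:

```python
import numpy as np

def correct_reflectance(r_meas, r_holder, f_sample):
    """Remove the holder contribution from a small-aperture measurement.

    Assumes the beam splits linearly between sample (fraction f_sample)
    and holder: r_meas = f * r_sample + (1 - f) * r_holder. This linear
    mixing model is our assumption, not necessarily the article's.
    """
    return (r_meas - (1.0 - f_sample) * r_holder) / f_sample

# Synthetic check: recover a known sample spectrum.
wl = np.linspace(400, 1100, 8)             # wavelengths, nm
r_true = 0.35 + 0.0002 * (wl - 400)        # hypothetical ramp-like spectrum
r_holder = np.full_like(wl, 0.90)          # bright holder surface
f = 0.6                                    # 60% of the spot hits the sample
r_meas = f * r_true + (1 - f) * r_holder   # what the sphere would report
r_rec = correct_reflectance(r_meas, r_holder, f)
print(np.allclose(r_rec, r_true))  # True
```

In practice f and r_holder would come from a calibration measurement of the empty holder; errors in f propagate directly into the recovered spectrum, which is why small apertures (small f) amplify the bias the paper sets out to remove.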
75 FR 63215 - Reporting and Recordkeeping Requirements Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-14
... SMALL BUSINESS ADMINISTRATION Reporting and Recordkeeping Requirements Under OMB Review AGENCY: Small Business Administration. ACTION: Notice of Reporting Requirements Submitted for OMB Review... notice to: Agency Clearance Officer, Jacqueline White, Small Business Administration, 409 3rd Street, SW...
75 FR 23310 - Reporting and Recordkeeping Requirements Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-03
... SMALL BUSINESS ADMINISTRATION Reporting and Recordkeeping Requirements Under OMB Review AGENCY: Small Business Administration. ACTION: Notice of Reporting Requirements Submitted for OMB Review... notice to: Agency Clearance Officer, Jacqueline White, Small Business Administration, 409 3rd Street, SW...
76 FR 408 - Reporting and Recordkeeping Requirements Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-04
... SMALL BUSINESS ADMINISTRATION Reporting and Recordkeeping Requirements Under OMB Review AGENCY: Small Business Administration. ACTION: Notice of reporting requirements submitted for OMB review... notice to: Agency Clearance Officer, Jacqueline White, Small Business Administration, 409 3rd Street, SW...
76 FR 18821 - Reporting and Recordkeeping Requirements Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-05
... SMALL BUSINESS ADMINISTRATION Reporting and Recordkeeping Requirements Under OMB Review AGENCY: Small Business Administration. ACTION: Notice of reporting requirements submitted for OMB review... notice to: Agency Clearance Officer, Jacqueline White, Small Business Administration, 409 3rd Street, SW...
48 CFR 19.202-5 - Data collection and reporting requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... REGULATION SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Policies 19.202-5 Data collection and reporting requirements. Agencies must measure the extent of small business participation in their acquisition programs by... business, veteran-owned small business, service-disabled veteran-owned small business, HUBZone small...
48 CFR 19.202-5 - Data collection and reporting requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... REGULATION SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Policies 19.202-5 Data collection and reporting requirements. Agencies must measure the extent of small business participation in their acquisition programs by... business, veteran-owned small business, service-disabled veteran-owned small business, HUBZone small...
48 CFR 19.202-5 - Data collection and reporting requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... REGULATION SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Policies 19.202-5 Data collection and reporting requirements. Agencies must measure the extent of small business participation in their acquisition programs by... business, veteran-owned small business, service-disabled veteran-owned small business, HUBZone small...
48 CFR 19.202-5 - Data collection and reporting requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... REGULATION SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Policies 19.202-5 Data collection and reporting requirements. Agencies must measure the extent of small business participation in their acquisition programs by... business, veteran-owned small business, service-disabled veteran-owned small business, HUBZone small...
Roehrbacher, Friedrich; Bankstahl, Jens P; Bankstahl, Marion; Wanek, Thomas; Stanek, Johann; Sauberer, Michael; Muellauer, Julia; Schroettner, Thales; Langer, Oliver; Kuntner, Claudia
2015-12-01
For positron emission tomography (PET) kinetic modelling, an accurate determination of the arterial input function is required. In this study, a blood sampling system was developed and tested using different radiotracers in rats. The detector consists of pairs of lutetium yttrium oxyorthosilicate (LYSO) detectors, photomultiplier tubes and lead shield assembled within a steel casing working in coincidence mode. Rats were cannulated with microtubes in the femoral artery and vein for arterial blood sampling as well as administration of the PET tracers. Connected PTFE microtubes were centred between the LYSO crystals using a special holder. To enhance sensitivity, three layers with two coils were used. A flexible tube pump was used to ensure a constant blood flow. Performance of the detector was assessed with [(18)F]fludeoxyglucose (FDG), [(18)F]ciprofloxacin, (R)-[(11)C]verapamil, [(11)C]tariquidar, [(11)C]mephobarbital and [(11)C]MC113. Obtained input function curves were compared with manual samples drawn every 5 s during the first 3 min and further on at 5, 10, 20, 30, 40, 50 and 60 min after radiotracer injection. After manual sampling, an arterio/venous shunt was established. Shape and area-under-the-curve (AUC; Bq/μl*h) of the input functions were evaluated. The developed detector system provided an absolute sensitivity of 6.5%. Maximum peak values agreed well between manual samples and the detector with a mean difference of -0.4% ± 7.0% (max 12.0%, min -9.9%). AUC values also exhibited an excellent correlation (R = 0.996) between manual sampling and detector measurements with a mean difference of 9.3% ± 9.7% (max 24.1%, min -3.2%). The system was able to measure peak blood activity concentration levels of 110 to 2,000 Bq/μl which corresponds to injected activities from 5.5 to 100 MBq depending on the used radiotracer, applied volume and weight of the animal. 
This study demonstrates that the developed blood sampling system can be used reliably for in vivo small-animal PET studies in rats. Use of the system enhances the accuracy of the input curve, as handling of small blood samples, especially those with low activity (as for C-11), is prone to measurement errors. Additionally, the radiation dose to the experimenters can be reduced, as it is no longer necessary to continuously draw samples, a task during which the personnel are in close contact with the radioactive animals and blood.
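The AUC comparison between manual samples and the online detector can be reproduced on a synthetic input function sampled on the two schedules given in the text (every 5 s for 3 min, then sparsely, versus continuous detector readout). The curve shape itself is invented for illustration:

```python
import numpy as np

def auc(t, c):
    # Area under the curve via the trapezoidal rule (Bq/ul * h).
    dt = np.diff(t)
    return float(np.sum(dt * (c[:-1] + c[1:]) / 2))

# Hypothetical arterial input function: sharp bolus peak then washout.
curve = lambda t: 2000 * (t / 0.01) * np.exp(1 - t / 0.01) + 50 * np.exp(-t / 0.5)

# Manual schedule from the text, converted from seconds to hours.
t_manual = np.concatenate([np.arange(0, 180, 5),
                           [300, 600, 1200, 1800, 2400, 3000, 3600]]) / 3600.0
# Online detector: ~1 Hz continuous sampling over the same hour.
t_detector = np.arange(0, 3601, 1.0) / 3600.0

a_manual = auc(t_manual, curve(t_manual))
a_detector = auc(t_detector, curve(t_detector))
diff_pct = 100 * (a_detector - a_manual) / a_manual
print(round(diff_pct, 2))  # percent AUC difference between the two schedules
```

For a smooth input function the two schedules agree to within a few percent, consistent with the mean AUC differences reported above; real discrepancies are dominated by dispersion in the tubing and counting statistics rather than by the quadrature.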
Spatially confined low-power optically pumped ultrafast synchrotron x-ray nanodiffraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Joonkyu; Zhang, Qingteng; Chen, Pice
2015-08-27
The combination of ultrafast optical excitation and time-resolved synchrotron x-ray nanodiffraction provides unique insight into the photoinduced dynamics of materials, with the spatial resolution required to probe individual nanostructures or small volumes within heterogeneous materials. Optically excited x-ray nanobeam experiments are challenging because the high total optical power required for experimentally relevant optical fluences leads to mechanical instability due to heating. For a given fluence, tightly focusing the optical excitation reduces the average optical power by more than three orders of magnitude and thus ensures sufficient thermal stability for x-ray nanobeam studies. Delivering optical pulses via a scannable fiber-coupled optical objective provides a well-defined excitation geometry during rotation and translation of the sample and allows the selective excitation of isolated areas within the sample. Finally, experimental studies of the photoinduced lattice dynamics of a 35 nm BiFeO 3 thin film on a SrTiO 3 substrate demonstrate the potential to excite and probe nanoscale volumes.
Efficient Bayesian mixed model analysis increases association power in large cohorts
Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L
2014-01-01
Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts and may not optimize power. All existing methods require time cost O(MN²) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women's Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633
United States planetary rover status: 1989
NASA Technical Reports Server (NTRS)
Pivirotto, Donna L. S.; Dias, William C.
1990-01-01
A spectrum of concepts for planetary rovers and rover missions is covered. Rovers studied range from tiny micro rovers to large and highly automated vehicles capable of traveling hundreds of kilometers and performing complex tasks. Rover concepts are addressed both for the Moon and Mars, including a Lunar/Mars common rover capable of supporting either program with relatively small modifications. Mission requirements considered include both Science and Human Exploration. Studies include a range of autonomy in rovers, from interactive teleoperated systems to those requiring an onboard System Executive making very high-level decisions. Both high and low technology rover options are addressed. Subsystems are described for a representative selection of these rovers, including: Mobility, Sample Acquisition, Science, Vehicle Control, Thermal Control, Local Navigation, Computation and Communications. System descriptions of rover concepts include diagrams, technology levels, system characteristics, and performance measured in terms of distance covered, samples collected, and area surveyed for specific representative missions. Rover development schedules and costs are addressed for Lunar and Mars exploration initiatives.
Application of statistical process control to qualitative molecular diagnostic assays.
O'Brien, Cathal P; Finn, Stephen P
2014-01-01
Modern pathology laboratories, and in particular high-throughput laboratories such as clinical chemistry, have developed a reliable system for statistical process control (SPC). Such a system is absent from the majority of molecular laboratories and, where present, is confined to quantitative assays. As the inability to apply SPC to an assay is an obvious disadvantage, this study aimed to solve this problem by using a frequency estimate coupled with a confidence-interval calculation to detect deviations from an expected mutation frequency. The results of this study demonstrate the strengths and weaknesses of this approach and highlight minimum sample number requirements. Notably, assays with low mutation frequencies and detection of small deviations from an expected value require greater sample numbers to mitigate a protracted time to detection. Modeled laboratory data were also used to illustrate how this approach might be applied in a routine molecular laboratory. This article is the first to describe the application of SPC to qualitative laboratory data.
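The frequency-plus-confidence-interval check described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the Wilson score interval and the example mutation frequencies are assumptions chosen for demonstration.

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score confidence interval for an observed proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def spc_flag(k, n, expected):
    """Flag a deviation if the expected mutation frequency lies outside
    the confidence interval of the observed frequency."""
    lo, hi = wilson_ci(k, n)
    return not (lo <= expected <= hi)

# 10 mutation-positive results in 50 assays, against a 40% expected frequency
print(spc_flag(10, 50, 0.40))
```

With only 50 samples the interval is wide (roughly 11-33% here), which mirrors the paper's point that low mutation frequencies and small deviations demand larger sample numbers before a shift becomes detectable.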
DETAILED ABUNDANCES OF STARS WITH SMALL PLANETS DISCOVERED BY KEPLER. I. THE FIRST SAMPLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuler, Simon C.; Vaz, Zachary A.; Santrich, Orlando J. Katime
2015-12-10
We present newly derived stellar parameters and the detailed abundances of 19 elements of seven stars with small planets discovered by NASA's Kepler Mission. Each star, save one, has at least one planet with a radius ≤1.6 R⊕, suggesting a primarily rocky composition. The stellar parameters and abundances are derived from high signal-to-noise ratio, high-resolution echelle spectroscopy obtained with the 10 m Keck I telescope and High Resolution Echelle Spectrometer using standard spectroscopic techniques. The metallicities of the seven stars range from −0.32 to +0.13 dex, with an average metallicity that is subsolar, supporting previous suggestions that, unlike Jupiter-type giant planets, small planets do not form preferentially around metal-rich stars. The abundances of elements other than iron are in line with a population of Galactic disk stars, and despite our modest sample size, we find hints that the compositions of stars with small planets are similar to those of stars without known planets and with Neptune-size planets, but not to those of stars with giant planets. This suggests that the formation of small planets does not require exceptional host-star compositions and that small planets may be ubiquitous in the Galaxy. We compare our derived abundances (which have typical uncertainties of ≲0.04 dex) to the condensation temperature of the elements; a correlation between the two has been suggested as a possible signature of rocky planet formation. None of the stars demonstrate the putative rocky planet signature, despite at least three of the stars having rocky planets estimated to contain enough refractory material to produce the signature, if real. More detailed abundance analyses of stars known to host small planets are needed to verify our results and place ever more stringent constraints on planet formation models.
Small-Noise Analysis and Symmetrization of Implicit Monte Carlo Samplers
Goodman, Jonathan; Lin, Kevin K.; Morzfeld, Matthias
2015-07-06
Implicit samplers are algorithms for producing independent, weighted samples from multivariate probability distributions. These are often applied in Bayesian data assimilation algorithms. We use Laplace asymptotic expansions to analyze two implicit samplers in the small noise regime. Our analysis suggests a symmetrization of the algorithms that leads to improved implicit sampling schemes at a relatively small additional cost. Here, computational experiments confirm the theory and show that symmetrization is effective for small noise sampling problems.
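One member of this family, the linearized (Gaussian-map) implicit sampler, can be sketched in one dimension as importance sampling through a Gaussian approximation at the mode. The toy target, mode estimate, and sample count below are illustrative assumptions, not the paper's examples:

```python
import math, random

def implicit_sample(F, mu, hess, n=20000, seed=0):
    """Linearized implicit sampler (1D sketch): draw xi ~ N(0, 1), map it
    through the Gaussian approximation of exp(-F) at the (approximate)
    mode mu, and attach a weight correcting the proposal to the target."""
    rng = random.Random(seed)
    phi = F(mu)
    sd = 1.0 / math.sqrt(hess)
    out = []
    for _ in range(n):
        xi = rng.gauss(0.0, 1.0)
        x = mu + sd * xi
        w = math.exp(phi + 0.5 * xi * xi - F(x))  # importance weight
        out.append((x, w))
    return out

# Toy non-Gaussian target: F(x) = x^2/2 - 0.3 sin(x), mode near x = 0.29
F = lambda x: 0.5 * x * x - 0.3 * math.sin(x)
pairs = implicit_sample(F, mu=0.29, hess=1.0)
wsum = sum(w for _, w in pairs)
post_mean = sum(x * w for x, w in pairs) / wsum
```

The weights stay close to 1 when the target is nearly Gaussian, which is the small-noise regime the analysis addresses; the symmetrization studied in the paper is aimed at improving exactly this kind of weighted estimate.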
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-11
.... Reporting Requirements, assessment area delineation, 2 hours; small business and small farm loan data, 8.... Reporting Requirements, assessment area delineation, 72; small business and small farm loan data, 72... 62807
Park, Min Cheol; Kim, Moojong; Lim, Gun Taek; Kang, Sung Min; An, Seong Soo A; Kim, Tae Song; Kang, Ji Yoon
2016-06-21
Multiwell plates are regularly used in analytical research and clinical diagnosis but often require laborious washing steps and large sample or reagent volumes (typically, 100 μL per well). To overcome such drawbacks of the conventional multiwell plate, we present a novel microchannel-connected multiwell plate (μCHAMP) that can be used for automated disease biomarker detection in a small sample volume by performing droplet-based magnetic bead immunoassay inside the plate. In this μCHAMP-based immunoassay platform, small volumes (30-50 μL) of aqueous-phase working droplets are stably confined within each well by the simple microchannel structure (200-300 μm in height and 0.5-1 mm in width), and magnetic beads are exclusively transported into an adjacent droplet through the oil-filled microchannels, assisted by a magnet array aligned beneath and controlled by an XY-motorized stage. Using this μCHAMP-based platform, we were able to perform parallel detection of synthetic amyloid beta (Aβ) oligomers as a model analyte for the early diagnosis of Alzheimer's disease (AD). This platform simplified the laborious and consumptive immunoassay procedure by achieving automated parallel immunoassay (32 assays per operation in a 3-well-connected 96-well plate) within 1 hour and at low sample consumption (less than 10 μL per assay) with no cumbersome manual washing step. Moreover, it could detect synthetic Aβ oligomers even below a 10 pg mL⁻¹ concentration, with a calculated detection limit of ∼3 pg mL⁻¹. Therefore, the μCHAMP and droplet-based magnetic bead immunoassay, combined with the XY-motorized magnet array, would be a useful platform for the diagnosis of human disease, including AD, which requires low consumption of the patient's body fluid sample and automation of the entire immunoassay procedure for high processing capacity.
Gode, David; Volmer, Dietrich A
2013-05-15
Magnetic beads are often used for serum profiling of peptide and protein biomarkers. In these assays, the bead-bound analytes are eluted from the beads prior to mass spectrometric analysis. This study describes a novel matrix-assisted laser desorption/ionization (MALDI) technique for direct application and focusing of magnetic beads to MALDI plates by means of dedicated micro-magnets as sample spots. Custom-made MALDI plates with magnetic focusing spots were made using small nickel-coated neodymium micro-magnets integrated into a stainless steel plate in a 16 × 24 (384) pattern. For demonstrating the proof-of-concept, commercial C-18 magnetic beads were used for the extraction of a test compound (reserpine) from aqueous solution. Experiments were conducted to study focusing abilities, the required laser energies, the influence of a matrix compound, dispensing techniques, solvent choice and the amount of magnetic beads. Dispensing the magnetic beads onto the micro-magnet sample spots resulted in immediate and strong binding to the magnetic surface. Light microscope images illustrated the homogeneous distribution of beads across the surfaces of the magnets, when the entire sample volume containing the beads was pipetted onto the surface. Subsequent MALDI analysis of the bead-bound analyte demonstrated excellent and reproducible ionization yields. The surface-assisted laser desorption/ionization (SALDI) properties of the strongly light-absorbing γ-Fe2O3-based beads resulted in similar ionization efficiencies to those obtained from experiments with an additional MALDI matrix compound. This feasibility study successfully demonstrated the magnetic focusing abilities for magnetic bead-bound analytes on a novel MALDI plate containing small micro-magnets as sample spots. One of the key advantages of this integrated approach is that no elution steps from magnetic beads were required during analyses compared with conventional bead experiments. 
Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-04-01
In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. 
Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100%. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪ 200, our current knowledge about throughfall spatial variability stands on shaky ground.
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. 
Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.
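For reference, the method-of-moments (Matheron) estimator these sample-size recommendations concern can be sketched as follows; the transect layout and spatially uncorrelated synthetic data are illustrative assumptions, not the study's random fields:

```python
import random

def mom_variogram(coords, values, lags, tol):
    """Matheron's method-of-moments estimator: for each lag h,
    gamma(h) = sum of (z_i - z_j)^2 over pairs with |d_ij - h| <= tol,
    divided by 2 N(h)."""
    out = []
    for h in lags:
        acc, cnt = 0.0, 0
        for i in range(len(values)):
            for j in range(i + 1, len(values)):
                if abs(abs(coords[i] - coords[j]) - h) <= tol:
                    acc += (values[i] - values[j]) ** 2
                    cnt += 1
        out.append(acc / (2 * cnt) if cnt else float("nan"))
    return out

# 200-point transect of spatially uncorrelated noise: the estimated variogram
# should sit near the sill (the variance, 1.0) at every lag
rng = random.Random(42)
coords = [float(i) for i in range(200)]
values = [rng.gauss(0.0, 1.0) for _ in range(200)]
gamma = mom_variogram(coords, values, lags=[1.0, 5.0, 10.0], tol=0.5)
```

Because each lag bin averages squared differences over a limited number of pairs, the estimator is noisy and sensitive to outliers, which is why the study finds it needs larger samples than likelihood-based alternatives.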
An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates
Hobbs, Michael T.; Brehme, Cheryl S.
2017-01-01
Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.
Using the Student's "t"-Test with Extremely Small Sample Sizes
ERIC Educational Resources Information Center
de Winter, J. C. F.
2013-01-01
Researchers occasionally have to work with an extremely small sample size, defined herein as "N" less than or equal to 5. Some methodologists have cautioned against using the "t"-test when the sample size is extremely small, whereas others have suggested that using the "t"-test is feasible in such a case. The present…
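For context, the statistic under discussion is straightforward to compute even at these sizes. A minimal sketch of the pooled two-sample Student's t with N = 3 per group, using made-up data (2.776 is the standard two-sided 5% critical value at 4 degrees of freedom):

```python
import math, statistics

def pooled_t(a, b):
    """Two-sample Student's t statistic (equal-variance, pooled form)."""
    na, nb = len(a), len(b)
    # pooled variance from the two sample variances
    sp2 = ((na - 1) * statistics.variance(a)
           + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Extremely small samples (N = 3 per group), illustrative values
a, b = [4.1, 5.0, 4.6], [6.2, 5.8, 6.9]
t = pooled_t(a, b)
crit = 2.776  # two-sided 5% critical value of the t-distribution, df = 4
print(abs(t) > crit)
```

The mechanics are identical at any N; the debate the abstract refers to is about whether the test's assumptions and power are acceptable when N ≤ 5, not about how the statistic is formed.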
ERIC Educational Resources Information Center
McNeish, Daniel M.
2016-01-01
Mixed-effects models (MEMs) and latent growth models (LGMs) are often considered interchangeable save the discipline-specific nomenclature. Software implementations of these models, however, are not interchangeable, particularly with small sample sizes. Restricted maximum likelihood estimation that mitigates small sample bias in MEMs has not been…
Liu, Yang; Wilson, W David
2010-01-01
Surface plasmon resonance (SPR) technology with biosensor surfaces has become a widely used tool for the study of nucleic acid interactions without any labeling requirements. The method provides simultaneous kinetic and equilibrium characterization of the interactions of biomolecules as well as small molecule-biopolymer binding. SPR monitors molecular interactions in real time and provides significant advantages over optical or calorimetric methods for systems with strong binding coupled to small spectroscopic signals and/or reaction heats. A detailed and practical guide for nucleic acid interaction analysis using SPR-biosensor methods is presented. Details of the SPR technology and basic fundamentals are described, with recommendations on the preparation of the SPR instrument, sensor chips, and samples, as well as extensive information on experimental design, quantitative and qualitative data analysis, and presentation. A specific example of the interaction of a minor-groove-binding agent with DNA is evaluated by both kinetic and steady-state SPR methods to illustrate the technique. Since molecules that bind cooperatively to specific DNA sequences are attractive for many applications, a cooperative small molecule-DNA interaction is also presented.
78 FR 65744 - Reporting and Recordkeeping Requirements Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-01
... SMALL BUSINESS ADMINISTRATION Reporting and Recordkeeping Requirements Under OMB Review AGENCY: Small Business Administration. ACTION: Notice of 30 day Reporting Requirements Submitted for OMB Review... concerning this notice to: Agency Clearance Officer, Curtis Rich, Small Business Administration, 409 3rd...
Mower, Timothy E.; Higgins, Jerry D.; Yang, In C.; Peters, Charles A.
1994-01-01
Study of the hydrologic system at Yucca Mountain, Nevada, requires the extraction of pore-water samples from welded and nonwelded, unsaturated tuffs. Two compression methods (triaxial compression and one-dimensional compression) were examined to develop a repeatable extraction technique and to investigate the effects of the extraction method on the original pore-fluid composition. A commercially available triaxial cell was modified to collect pore water expelled from tuff cores. The triaxial cell applied a maximum axial stress of 193 MPa and a maximum confining stress of 68 MPa. Results obtained from triaxial compression testing indicated that pore-water samples could be obtained from nonwelded tuff cores that had initial moisture contents as small as 13 percent (by weight of dry soil). Injection of nitrogen gas while the test core was held at the maximum axial stress caused expulsion of additional pore water and reduced the required initial moisture content from 13 to 11 percent. Experimental calculations, together with experience gained from testing moderately welded tuff cores, indicated that the triaxial cell used in this study could not apply adequate axial or confining stress to expel pore water from cores of densely welded tuffs. This concern led to the design, fabrication, and testing of a one-dimensional compression cell. The one-dimensional compression cell used in this study was constructed from hardened 4340-alloy and nickel-alloy steels and could apply a maximum axial stress of 552 MPa. The major components of the device include a corpus ring and sample sleeve to confine the sample, a piston and base platen to apply axial load, and drainage plates to transmit expelled water from the test core out of the cell. 
One-dimensional compression extracted pore water from nonwelded tuff cores that had initial moisture contents as small as 7.6 percent; pore water was expelled from densely welded tuff cores that had initial moisture contents as small as 7.7 percent. Injection of nitrogen gas at the maximum axial stress did not produce additional pore water from nonwelded tuff cores, but was critical to recovery of pore water from densely welded tuff cores. Gas injection reduced the required initial moisture content in welded tuff cores from 7.7 to 6.5 percent. Based on the mechanical ability of a pore-water extraction method to remove water from welded and nonwelded tuff cores, one-dimensional compression is a more effective extraction method than triaxial compression. However, because the effects that one-dimensional compression has on pore-water chemistry are not completely understood, additional testing will be needed to verify that this method is suitable for pore-water extraction from Yucca Mountain tuffs.
Baráth, Dániel; Jaksa-Czotter, Nikoletta; Molnár, János; Varga, Tünde; Balássy, Júlia; Szabó, Luca Krisztina; Kirilla, Zoltán; Tusnády, Gábor E; Preininger, Éva; Várallyay, Éva
2018-06-11
Fruit trees, such as apricot trees, are constantly exposed to attack by viruses. As they are propagated vegetatively, this risk is present not only in the field, where they remain for decades, but also during their propagation. Metagenomic diagnostic methods based on next generation sequencing (NGS) offer unique possibilities to reveal all pathogens present in the investigated sample. Using NGS of small RNAs, a special field of these techniques, we tested leaf samples of different varieties of apricot originating from an isolator house or open-field stock nursery. As a result, we identified Cherry virus A (CVA) and little cherry virus 1 (LChV-1) for the first time in Hungary. The NGS results were validated by RT-PCR and, in the case of CVA, also by Northern blot. Cloned and Sanger-sequenced virus-specific PCR products enabled us to investigate their phylogenetic relationships. However, since these pathogens have not been described in our country before, their role in symptom development and modification during co-infection with other viruses requires further investigation.
Chang, Yaw-Jen; Ho, Ching-Yuan; Zhou, Xin-Miao; Yen, Hsiu-Rong
2018-04-15
Blood typing assay is a critical test to ensure the serological compatibility of a donor and an intended recipient prior to a blood transfusion. This paper presents a microfluidic blood typing system using a small quantity of blood sample to determine the degree of agglutination of red blood cell (RBC). Two measuring methods were proposed: impedimetric measurement and electroanalytical measurement. The charge transfer resistance in the impedimetric measurement and the power parameter in the electroanalytical measurement were used for the analysis of agglutination level. From the experimental results, both measuring methods provide quantitative results, and the parameters are linearly and monotonically related to the degree of RBC agglutination. However, the electroanalytical measurement is more reliable than the impedimetric technique because the impedimetric measurement may suffer from many influencing factors, such as chip conditions. Five levels from non-agglutination (level 0) to strong agglutination (level 4+) can be discriminated in this study, conforming to the clinical requirement to prevent any risks in transfusion. Copyright © 2017 Elsevier B.V. All rights reserved.
Survey of Large Methane Emitters in North America
NASA Astrophysics Data System (ADS)
Deiker, S.
2017-12-01
It has been theorized that methane emissions in the oil and gas industry follow log-normal or "fat tail" distributions, with large numbers of small sources for every very large source. Such distributions would have significant policy and operational implications. Unfortunately, by their very nature such distributions require large sample sizes to verify. Until recently, such large-scale studies would have been prohibitively expensive. The largest public study to date sampled 450 wells, an order of magnitude too few to effectively constrain these models. During 2016 and 2017, Kairos Aerospace conducted a series of surveys with the LeakSurveyor imaging spectrometer, mounted on light aircraft. This small, lightweight instrument was designed to rapidly locate large emission sources. The resulting survey covers over three million acres of oil and gas production, including over 100,000 wells, thousands of storage tanks and over 7,500 miles of gathering lines. This data set allows us to probe the distribution of large methane emitters. Results of this survey, and implications for the methane emission distribution, methane policy and LDAR, will be discussed.
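A quick simulation shows why fat-tailed distributions demand large samples: if source emission rates are log-normal, a handful of sources carries most of the total, so small surveys rarely capture enough of the tail. All parameters here are hypothetical, not survey results:

```python
import random

def top_share(mu, sigma, n, top_frac, seed=1):
    """Share of total emissions contributed by the top `top_frac` of sources
    when source rates follow a log-normal(mu, sigma) distribution."""
    rng = random.Random(seed)
    rates = sorted((rng.lognormvariate(mu, sigma) for _ in range(n)),
                   reverse=True)
    k = max(1, int(top_frac * n))
    return sum(rates[:k]) / sum(rates)

# With a hypothetical sigma = 2, the top 5% of 100,000 simulated sources
# carry roughly two-thirds of total emissions
share = top_share(mu=0.0, sigma=2.0, n=100_000, top_frac=0.05)
```

Under these assumed parameters a 450-well sample would contain only a couple of dozen tail sources, far too few to pin down the distribution's shape.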
Hooda, Vinita; Gahlaut, Anjum; Gothwal, Ashish; Hooda, Vikas
2018-04-27
Clinical manifestations of elevated plasma triacylglycerol (TG) include a greater prevalence of atherosclerotic heart disease, acute pancreatitis, diabetes mellitus, hypertension, and ischemic vascular disease. Hence, these significant health problems have attracted scientific attention toward the precise detection of TG in biological samples. Numerous techniques have been employed to quantify TG over many decades, but biosensors hold the leading position owing to their superior traits, such as highly specific recognition of target molecules, accuracy, miniaturization, small sample requirement and rapid response. Enzyme-based electrochemical biosensors represent an instantaneous resolution for the foremost bottlenecks constraining laboratory prototypes from reaching real-time bedside applications. We highlight the choice of transducers and constructive strategies to design high-performance biosensors for the quantification of triglycerides in sera and the early diagnosis of related health problems. In the present review, a small effort has been made to emphasize the significant role of enzymes, nanostructured metal oxides, graphene, conducting polypyrrole, nanoparticles, porous silicon, EISCAP and ENFET in making TG biosensors more proficient and taking a revolutionary step forward.
Analysis of RNA structure using small-angle X-ray scattering
Cantara, William A.; Olson, Erik D.; Musier-Forsyth, Karin
2016-01-01
In addition to their role in correctly attaching specific amino acids to cognate tRNAs, aminoacyl-tRNA synthetases (aaRSs) have been found to possess many alternative functions and often bind to and act on other nucleic acids. In contrast to the well-defined 3D structure of tRNA, the structures of many of the other RNAs recognized by aaRSs have not been solved. Despite advances in the use of X-ray crystallography (XRC), nuclear magnetic resonance (NMR) spectroscopy and cryo-electron microscopy (cryo-EM) for structural characterization of biomolecules, significant challenges to solving RNA structures still exist. Recently, small-angle X-ray scattering (SAXS) has been increasingly employed to characterize the 3D structures of RNAs and RNA-protein complexes. SAXS is capable of providing low-resolution tertiary structure information under physiological conditions and with less intensive sample preparation and data analysis requirements than XRC, NMR and cryo-EM. In this article, we describe best practices involved in the process of RNA and RNA-protein sample preparation, SAXS data collection, data analysis, and structural model building. PMID:27777026
Detecting Superior Face Recognition Skills in a Large Sample of Young British Adults
Bobak, Anna K.; Pampoulov, Philip; Bate, Sarah
2016-01-01
The Cambridge Face Memory Test Long Form (CFMT+) and Cambridge Face Perception Test (CFPT) are typically used to assess the face processing ability of individuals who believe they have superior face recognition skills. Previous large-scale studies have presented norms for the CFPT but not the CFMT+. However, previous research has also highlighted the necessity for establishing country-specific norms for these tests, indicating that norming data is required for both tests using young British adults. The current study addressed this issue in 254 British participants. In addition to providing the first norm for performance on the CFMT+ in any large sample, we also report the first UK specific cut-off for superior face recognition on the CFPT. Further analyses identified a small advantage for females on both tests, and only small associations between objective face recognition skills and self-report measures. A secondary aim of the study was to examine the relationship between trait or social anxiety and face processing ability, and no associations were noted. The implications of these findings for the classification of super-recognizers are discussed. PMID:27713706
Global preamplification simplifies targeted mRNA quantification
Kroneis, Thomas; Jonasson, Emma; Andersson, Daniel; Dolatabadi, Soheila; Ståhlberg, Anders
2017-01-01
The need to perform gene expression profiling using next generation sequencing and quantitative real-time PCR (qPCR) on small sample sizes and single cells is rapidly expanding. However, to analyse few molecules, preamplification is required. Here, we studied global and target-specific preamplification using 96 optimised qPCR assays. To evaluate the preamplification strategies, we monitored the reactions in real-time using SYBR Green I detection chemistry followed by melting curve analysis. Next, we compared yield and reproducibility of global preamplification to that of target-specific preamplification by qPCR using the same amount of total RNA. Global preamplification generated 9.3-fold lower yield and 1.6-fold lower reproducibility than target-specific preamplification. However, the performance of global preamplification is sufficient for most downstream applications and offers several advantages over target-specific preamplification. To demonstrate the potential of global preamplification we analysed the expression of 15 genes in 60 single cells. In conclusion, we show that global preamplification simplifies targeted gene expression profiling of small sample sizes by a flexible workflow. We outline the pros and cons for global preamplification compared to target-specific preamplification. PMID:28332609
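Fold-differences in yield such as the 9.3-fold figure above are conventionally derived from qPCR quantification-cycle (Cq) differences; a minimal sketch of that standard conversion (not code from the study, and with made-up Cq values) is:

```python
def fold_from_dcq(cq_a, cq_b, efficiency=1.0):
    """Fold difference in detectable template between two conditions from
    their qPCR quantification cycles (Cq), assuming amplification
    efficiency E: fold = (1 + E) ** (cq_a - cq_b)."""
    return (1.0 + efficiency) ** (cq_a - cq_b)

# A Cq appearing 3.2 cycles later corresponds to ~9.2-fold less template
# at 100% efficiency (illustrative values)
print(round(fold_from_dcq(24.2, 21.0), 1))
```

At 100% efficiency each extra cycle corresponds to a doubling, so a 3.2-cycle shift is about a 9.2-fold difference in template, comparable in magnitude to the yield difference the study reports.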
Schaefer, J; Burckhardt, B B; Tins, J; Bartel, A; Laeer, S
2017-12-01
Heart failure is well investigated in adults, but data in children are lacking. To overcome this shortage of reliable data, appropriate bioanalytical assays are required. Our objective was the development and validation of a bioanalytical assay for the determination of aldosterone concentrations in small sample volumes, applicable to clinical studies under Good Clinical Laboratory Practice. An immunoassay was developed based on a commercially available enzyme-linked immunosorbent assay and validated according to the current bioanalytical guidelines of the EMA and FDA. The assay (range 31.3-1000 pg/mL [86.9-2775 pmol/L]) is characterized by a between-run accuracy from -3.8% to -0.8% and a between-run imprecision ranging from 4.9% to 8.9% (coefficient of variation). For within-run accuracy, the relative error was between -11.1% and +9.0%, while within-run imprecision ranged from 1.2% to 11.8% (CV). For parallelism and dilutional linearity, the relative error of back-calculated concentrations varied from -14.1% to +8.4% and from -7.4% to +10.5%, respectively. The immunoassay is compliant with the bioanalytical guidelines of the EMA and FDA and allows accurate and precise aldosterone determinations. As the assay can run low-volume samples, it is especially valuable for pediatric investigations.
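The accuracy and imprecision figures quoted above are the standard percent relative error and percent coefficient of variation. A minimal sketch of how such metrics are computed from replicate quality-control runs (the replicate values below are hypothetical, not the study's data):

```python
from statistics import mean, stdev

def relative_error_pct(measured_mean, nominal):
    """Accuracy as percent relative error of the measured mean vs nominal."""
    return 100.0 * (measured_mean - nominal) / nominal

def cv_pct(values):
    """Imprecision as percent coefficient of variation of replicate runs."""
    return 100.0 * stdev(values) / mean(values)

# Hypothetical replicate runs of a 250 pg/mL quality-control sample:
runs = [242.0, 251.5, 246.0, 239.5, 248.0]
accuracy = relative_error_pct(mean(runs), 250.0)
imprecision = cv_pct(runs)
```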
Inter-correlations between Cloninger's temperament dimensions-- a meta-analysis.
Miettunen, Jouko; Lauronen, Erika; Kantojärvi, Liisa; Veijola, Juha; Joukamaa, Matti
2008-07-15
The Temperament and Character Inventory (TCI) was developed to measure the following temperament dimensions: novelty seeking (NS), harm avoidance (HA), reward dependence (RD) and persistence (P). These four dimensions of temperament were originally proposed to be independent of one another. In this study the inter-relationships between the dimensions were studied with meta-analytic techniques. We also studied the effects of sociodemographic factors (location of the study, mean age and gender distribution) on correlations between temperament dimensions. We searched for studies on healthy (non-clinical) populations that used the TCI (version 9) and that had a sample size of at least 100. The search resulted in 16 articles. The resulting pooled correlation coefficient between NS and HA was medium-sized (-0.27). Correlations were small for HA-P (-0.20), NS-P (-0.14), NS-RD (0.10), RD-P (0.05) and HA-RD (0.04). In meta-regression, the NS-P correlation was significantly affected by the location of the study (Asian/other) and by the gender distribution of the sample, while the HA-P correlation was affected by the mean age of the sample. In conclusion, we found a medium-sized negative correlation between NS and HA; other correlations between the dimensions were small. These findings mainly support Cloninger's theory of independent dimensions.
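Pooled correlations in meta-analyses of this kind are commonly obtained by inverse-variance weighting of Fisher z-transformed coefficients. An illustrative Python sketch of that standard procedure (the study-level values below are hypothetical, not those of the 16 articles):

```python
import math

def pool_correlations(r_values, n_values):
    """Fixed-effect pooled correlation via Fisher's z transform.

    Each study contributes z = atanh(r) with inverse-variance weight n - 3;
    the pooled z is back-transformed with tanh.
    """
    weights = [n - 3 for n in n_values]
    z_bar = sum(w * math.atanh(r) for w, r in zip(weights, r_values)) / sum(weights)
    return math.tanh(z_bar)

# Hypothetical NS-HA correlations from three samples of different sizes:
r_pooled = pool_correlations([-0.25, -0.30, -0.27], [150, 400, 250])
```

Larger samples dominate the pooled estimate through the n - 3 weights, which is why a required minimum sample size matters in such syntheses.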
Saripella, Kalyan K; Mallipeddi, Rama; Neau, Steven H
2014-11-20
Polyplasdone of different particle size was used to study the sorption, desorption, and distribution of water, and to seek evidence that larger particles can internalize water. The three samples were Polyplasdone® XL, XL-10, and INF-10. Moisture sorption and desorption isotherms at 25 °C at 5% intervals from 0 to 95% relative humidity (RH) were generated by dynamic vapor sorption analysis. The three products provided similar data, judged to be Type III with a small hysteresis that appears when RH is below 65%. An absent rounded knee in the sorption curve suggests that multilayers form before the monolayer is completed. The hysteresis indicates that internally absorbed moisture is trapped as the water is desorbed and the polymer sample shrinks, thus requiring a lower level of RH to continue desorption. The fit of the Guggenheim-Anderson-de Boer (GAB) and the Young and Nelson equations was accomplished in the data analysis. The W(m), C(G), and K values from GAB analysis are similar across the three samples, revealing 0.962 water molecules per repeating unit in the monolayer. A small amount of absorbed water is identified, but this is consistent across the three particle sizes. Copyright © 2014 Elsevier B.V. All rights reserved.
Rochelle-Newall, Emma J.; Ribolzi, Olivier; Viguier, Marion; Thammahacksa, Chanthamousone; Silvera, Norbert; Latsachack, Keooudone; Dinh, Rinh Pham; Naporn, Piyapong; Sy, Hai Tran; Soulileuth, Bounsamay; Hmaimum, Nikom; Sisouvanh, Pem; Robain, Henri; Janeau, Jean-Louis; Valentin, Christian; Boithias, Laurie; Pierret, Alain
2016-01-01
Lack of access to clean water and adequate sanitation continues to be a major brake on development. Here we present the results of a 12-month investigation into the dynamics of Escherichia coli, a commonly used indicator of faecal contamination in water supplies, in three small, rural catchments in Laos, Thailand and Vietnam. We show that land use and hydrology are major controlling factors of E. coli concentrations in streamwater and that the relative importance of these two factors varies between the dry and wet seasons. In all three catchments, the highest concentrations were observed during the wet season when storm events and overland flow were highest. However, smaller peaks of E. coli concentration were also observed during the dry season. The latter correspond to periods of intense farming activities and small, episodic rain events. Furthermore, vegetation type, through land use and soil surface crusting, combined with mammalian presence plays an important role in determining E. coli loads in the streams. Finally, sampling during stormflow revealed the importance of having appropriate sampling protocols if information on maximum contamination levels is required, as grab sampling at a fixed time step may miss important peaks in E. coli numbers. PMID:27604854
Viral metagenomic analysis of feces of wild small carnivores
2014-01-01
Background Recent studies have clearly demonstrated the enormous virus diversity that exists among wild animals. This underscores the need to expand our knowledge of the virus diversity present in wildlife, as well as of the potential transmission of these viruses to domestic animals or humans. Methods In the present study we evaluated the viral diversity of fecal samples (n = 42) collected from 10 different species of wild small carnivores inhabiting the northern part of Spain using random PCR in combination with next-generation sequencing. Samples were collected from American mink (Neovison vison), European mink (Mustela lutreola), European polecat (Mustela putorius), European pine marten (Martes martes), stone marten (Martes foina), Eurasian otter (Lutra lutra) and Eurasian badger (Meles meles) of the family Mustelidae; common genet (Genetta genetta) of the family Viverridae; red fox (Vulpes vulpes) of the family Canidae and European wild cat (Felis silvestris) of the family Felidae. Results A number of sequences of possible novel viruses or virus variants were detected, including a theilovirus, phleboviruses, an amdovirus, a kobuvirus and picobirnaviruses. Conclusions Using random PCR in combination with next-generation sequencing, sequences of various novel viruses or virus variants were detected in fecal samples collected from Spanish carnivores. The detected novel viruses highlight the viral diversity that is present in fecal material of wild carnivores. PMID:24886057
Using acoustic levitation in synchrotron based laser pump hard x-ray probe experiments
NASA Astrophysics Data System (ADS)
Hu, Bin; Lerch, Jason; Suthar, Kamlesh; Dichiara, Anthony
Acoustic levitation provides a platform to trap and hold a small amount of material using standing pressure waves, without a container. The technique has the potential to be used for laser pump x-ray probe experiments: x-ray scattering and laser distortion from the container can be avoided, sample consumption can be minimized, and unwanted chemistry that may occur at the container interface can be avoided. The method has been used at synchrotron sources for studying protein and pharmaceutical solutions using x-ray diffraction (XRD) and small angle x-ray scattering (SAXS). However, pump-probe experiments require homogeneously excited samples, smaller than the absorption depth of the material, that must be held stably at the intersection of both the laser and x-ray beams. We discuss 1) the role of oscillations in acoustic levitation and the optimal acoustic trapping conditions for x-ray/laser experiments, 2) opportunities to automate acoustic levitation for fast sample loading and manipulation, and 3) our experimental results using SAXS to monitor laser-induced thermal expansion in a gold nanoparticle solution. We also performed finite element analysis to optimize the trapping performance and stability of droplets ranging from 0.4 mm to 2 mm. Our early x-ray/laser experiments demonstrated the potential of the technique for time-resolved X-ray science.
Bohnert, Sara; Vair, Cory; Mikler, John
2010-05-15
A rapid and small-volume assay to quantify HI-6 in plasma was developed to further the development and licensing of an intravenous formulation of HI-6. The objective was to develop a sensitive and rapid assay that clearly resolved HI-6 and an internal standard in saline and plasma matrices. A fully validated method using ion-pair HPLC and 2-PAM as the internal standard fulfilled these requirements. Small plasma samples of 35 microL were extracted using acidification, filtration and neutralization. Linearity was shown over the range 4 microg/mL to 1 mg/mL, with accuracy and precision within 6% relative error at the lower limit of detection. This method was utilized in the pharmacokinetic analysis of HI-6 dichloride (2Cl) and HI-6 dimethane sulfonate (DMS) in anaesthetized guinea pigs and domestic swine following an intravenous bolus administration. From the resultant pharmacokinetic parameters, a target plasma concentration of 100 microM was established and maintained in guinea pigs receiving an intravenous infusion. This validated method allows for the analysis of low-volume samples and increased sample numbers, and is applicable to the determination of pharmacokinetic profiles and parameters. Copyright (c) 2010. Published by Elsevier B.V.
Omics for Precious Rare Biosamples: Characterization of Ancient Human Hair by a Proteomic Approach.
Fresnais, Margaux; Richardin, Pascale; Sepúlveda, Marcela; Leize-Wagner, Emmanuelle; Charrié-Duhaut, Armelle
2017-07-01
Omics technologies have far-reaching applications beyond clinical medicine. A case in point is the analysis of ancient hair samples. Indeed, hair is an important biological indicator that has become a material of choice in archeometry to study ancient civilizations and their environment. Current characterization of ancient hair is based on elemental and structural analyses, but only a few studies have focused on the molecular aspects of ancient hair proteins-keratins-and their conservation state. In such cases, applied extraction protocols require large amounts of raw hair, from 30 to 100 mg. In the present study, we report an optimized new proteomic approach to accurately identify archeological hair proteins, and assess their preservation state, while using a minimum of raw material. Testing and adaptation of three protocols and of nano liquid chromatography-tandem mass spectrometry (nanoLC-MS/MS) parameters were performed on modern hair. On the basis of mass spectrometry data quality, and of the required initial sample amount, the most promising workflow was selected and applied to an ancient archeological sample, dated to about 3880 years before present. Finally, and importantly, we were able to identify 11 ancient hair proteins and to visualize the preservation state of the mummy's hair from only 500 μg of raw material. The results presented here pave the way for new insights into the understanding of hair protein alteration processes such as those due to aging and ecological exposures. This work could enable omics scientists to apply a proteomic approach to precious and rare samples, not only in the context of archeometrical studies but also for future applications that would require the use of very small amounts of sample.
Mars sample return mission architectures utilizing low thrust propulsion
NASA Astrophysics Data System (ADS)
Derz, Uwe; Seboldt, Wolfgang
2012-08-01
The Mars sample return mission is a flagship mission within ESA's Aurora program and envisioned to take place in the timeframe of 2020-2025. Previous studies developed a mission architecture consisting of two elements, an orbiter and a lander, each utilizing chemical propulsion and a heavy launcher like Ariane 5 ECA. The lander transports an ascent vehicle to the surface of Mars. The orbiter performs a separate impulsive transfer to Mars, conducts a rendezvous in Mars orbit with the sample container delivered by the ascent vehicle, and returns the samples to Earth in a small Earth entry capsule. Because the launch of the heavy orbiter by Ariane 5 ECA makes an Earth swing-by mandatory for the trans-Mars injection, its total mission time amounts to about 1460 days. The present study takes a fresh look at the subject and conducts a more general mission and system analysis of the space transportation elements, including electric propulsion for the transfer. Detailed spacecraft models for orbiters, landers and ascent vehicles are developed. Based on these, trajectory calculations and optimizations of interplanetary transfers, Mars entries, descents and landings, as well as Mars ascents, are carried out. The results of the system analysis identified electric propulsion for the orbiter as most beneficial in terms of launch mass, leading to a reduction of launch vehicle requirements and enabling a launch by a Soyuz-Fregat into GTO. Such a sample return mission could be conducted within 1150-1250 days. Concerning the lander, a separate launch in combination with electric propulsion leads to a significant reduction of launch vehicle requirements, but also requires a large number of engines and a correspondingly large power system. Therefore, a lander performing a separate chemical transfer could possibly be more advantageous. Alternatively, a second possible mission architecture has been developed, requiring only one heavy launch vehicle (e.g., Proton); in that case the lander is transported piggyback by the electrically propelled orbiter.
Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin
2017-08-17
A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel-group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion has lacked sufficient consideration of the true virtues of the delayed-start design and of its implications in terms of required sample size, overall information, and interpretation of the estimate in the context of small populations. Our aim was to evaluate whether the delayed-start design offers real advantages, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel-group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios for the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs whose effects develop over time. In addition, a reduced time on placebo results in a decreased observed treatment effect and therefore always increases the required sample size. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel-group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements than expected under a standard parallel-group design. This also impacts the benefit-risk assessment.
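The sample-size inflation described above can be illustrated with the standard normal-approximation formula for a two-arm comparison of means: shrinking the standardized effect size directly inflates the per-arm n. A sketch in Python (the effect sizes are illustrative, not taken from the paper's example):

```python
from statistics import NormalDist
import math

def n_per_arm(effect, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-arm comparison of means,
    normal approximation, two-sided alpha:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / effect^2."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect ** 2)

# If a shortened placebo phase shrinks the standardized effect from 0.5 to 0.4,
# the required per-arm sample size grows accordingly:
n_full, n_shrunk = n_per_arm(0.5), n_per_arm(0.4)
```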
Rapid determination of 228Ra in environmental samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxwell, Sherrod L.; Culligan, Brian K.
A new rapid method for the determination of 228Ra in natural water samples has been developed at the SRNL/EBL (Savannah River National Laboratory/Environmental Bioassay Laboratory) that can be used for emergency response or routine samples. While gamma spectrometry can be employed with sufficient detection limits to determine 228Ra in solid samples (via 228Ac), radiochemical methods that employ gas flow proportional counting techniques typically provide lower MDA (Minimal Detectable Activity) levels for the determination of 228Ra in water samples. Most radiochemical methods for 228Ra collect and purify 228Ra and allow for 228Ac daughter ingrowth for ~36 hours. In this new SRNL/EBL approach, 228Ac is collected and purified from the water sample without waiting, eliminating this delay. The sample preparation requires only about 4 hours, so that 228Ra assay results on water samples can be achieved in < 6 hours. The method uses a rapid calcium carbonate precipitation, with a small amount of phosphate added to enhance chemical yields (typically >90%), followed by rapid cation exchange removal of calcium. Lead, bismuth, uranium, thorium and protactinium isotopes are also removed by the cation exchange separation. 228Ac is eluted from the cation resin directly onto a DGA Resin cartridge attached to the bottom of the cation column to purify the 228Ac. DGA Resin also removes lead and bismuth isotopes, along with Sr isotopes and 90Y. La is used to determine the 228Ac chemical yield via ICP-MS, but 133Ba can be used instead if ICP-MS assay is not available. Unlike some older methods, no lead or strontium holdback carriers or continual readjustment of sample pH is required.
Chatterjee, Anirban; Mirer, Paul L; Zaldivar Santamaria, Elvira; Klapperich, Catherine; Sharon, Andre; Sauer-Budge, Alexis F
2010-06-01
The life science and healthcare communities have been redefining the importance of ribonucleic acid (RNA) through the study of small molecule RNA (in RNAi/siRNA technologies), micro RNA (in cancer research and stem cell research), and mRNA (gene expression analysis for biologic drug targets). Research in this field increasingly requires efficient and high-throughput isolation techniques for RNA. Currently, several commercial kits are available for isolating RNA from cells. Although the quality and quantity of RNA yielded from these kits are sufficiently good for many purposes, limitations exist in terms of extraction efficiency from small cell populations and the ability to automate the extraction process. Traditionally, automating a process decreases the cost and personnel time while simultaneously increasing the throughput and reproducibility. As the RNA field matures, new methods for automating its extraction, especially from low cell numbers and in high throughput, are needed to achieve these improvements. The technology presented in this article is a step toward this goal. The method is based on a solid-phase extraction technology using a porous polymer monolith (PPM). A novel cell lysis approach and a larger binding surface throughout the PPM extraction column ensure a high yield from small starting samples, increasing sensitivity and reducing indirect costs in cell culture and sample storage. The method ensures a fast and simple procedure for RNA isolation from eukaryotic cells, with a high yield both in terms of quality and quantity. The technique is amenable to automation and streamlined workflow integration, with possible miniaturization of the sample handling process making it suitable for high-throughput applications.
Calcium kinetics with microgram stable isotope doses and saliva sampling
NASA Technical Reports Server (NTRS)
Smith, S. M.; Wastney, M. E.; Nyquist, L. E.; Shih, C. Y.; Wiesmann, H.; Nillen, J. L.; Lane, H. W.
1996-01-01
Studies of calcium kinetics require administration of tracer doses of calcium and subsequent repeated sampling of biological fluids. This study was designed to develop techniques that would allow estimation of calcium kinetics by using small (microgram) doses of isotopes instead of the more common large (mg) doses, to minimize tracer perturbation of the system and reduce cost, and to explore the use of saliva sampling as an alternative to blood sampling. Subjects received an oral dose (133 micrograms) of 43Ca and an i.v. dose (7.7 micrograms) of 46Ca. Isotopic enrichment in blood, urine, saliva and feces was well above thermal ionization mass spectrometry measurement precision up to 170 h after dosing. Fractional calcium absorptions determined from isotopic ratios in blood, urine and saliva were similar. Compartmental modeling revealed that kinetic parameters determined from serum or saliva data were similar, decreasing the necessity for blood samples. It is concluded from these results that calcium kinetics can be assessed with microgram doses of stable isotopes, thereby reducing tracer costs, and with saliva samples, thereby reducing the amount of blood needed.
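Dual-tracer fractional absorption is commonly estimated from the ratio of recovered oral to intravenous tracer, corrected for the administered doses. A simplified Python sketch of that idea, using the study's dose sizes but hypothetical enrichment values (this is a generic textbook simplification, not the paper's compartmental model):

```python
def fractional_absorption(oral_enrichment, iv_enrichment, oral_dose_ug, iv_dose_ug):
    """Dual-tracer fractional absorption: ratio of oral to i.v. tracer
    enrichment in the sampled fluid, corrected for the administered doses."""
    return (oral_enrichment / iv_enrichment) * (iv_dose_ug / oral_dose_ug)

# Doses from the study (133 ug oral 43Ca, 7.7 ug i.v. 46Ca);
# the enrichment values are hypothetical:
fa = fractional_absorption(0.52, 0.11, 133.0, 7.7)
```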
NASA Astrophysics Data System (ADS)
Burritt, Rosemary; Francois, Elizabeth; Windler, Gary; Chavez, David
2017-06-01
Diaminoazoxyfurazan (DAAF) has many of the safety characteristics of an insensitive high explosive (IHE): it is extremely insensitive to impact and friction and is comparable to triaminotrinitrobenzene (TATB) in this way. Conversely, it demonstrates many performance characteristics of a conventional high explosive (CHE). DAAF has a small failure diameter of about 1.25 mm and can be sensitive to shock under the right conditions. Large-particle DAAF will not initiate in a typical exploding foil initiator (EFI) configuration, but smaller particle sizes will. Large-particle DAAF, of 40 μm, was crash precipitated and ball milled into six distinct samples and pressed into pellets with a density of 1.60 g/cc (91% TMD). To investigate the effect of particle size and surface area on the direct initiation of DAAF, multiple threshold tests were performed on each sample of DAAF in different EFI configurations, which varied in flyer thickness and/or bridge size. Comparative tests were performed examining threshold voltage and correlated to Photon Doppler Velocimetry (PDV) results. The samples with larger particle sizes and surface areas required more energy to initiate, while the smaller particle sizes required less energy and could be initiated with smaller diameter flyers.
Cybersonics: Tapping into Technology
NASA Technical Reports Server (NTRS)
2001-01-01
With the assistance of Small Business Innovation Research (SBIR) funding from NASA's Jet Propulsion Laboratory, Cybersonics, Inc., developed an ultrasonic drill with applications ranging from the medical industry to space exploration. The drill, which has the ability to take a core sample of the hardest granite or perform the most delicate diagnostic medical procedure, is a lightweight, ultrasonic device made to fit in the palm of the hand. Piezoelectric actuators, which have only two moving parts and no gears or motors, drive the components of the device, enabling it to operate in a wide range of temperatures. The most remarkable aspect of the drill is its ability to penetrate even the hardest rock with minimal force application. The ultrasonic device requires 20 to 30 times less force than standard rotating drills, allowing it to be safely guided by hand during operation. Also, the drill is operable at a level as low as three watts of power, where conventional drills require more than three times this level. Potential future applications for the ultrasonic drill include rock and soil sampling, medical procedures that involve core sampling or probing, landmine detection, building and construction, and space exploration.
Time-resolved optical absorption microspectroscopy of magnetic field sensitive flavin photochemistry
NASA Astrophysics Data System (ADS)
Antill, Lewis M.; Beardmore, Joshua P.; Woodward, Jonathan R.
2018-02-01
The photochemical reactions of blue-light receptor proteins have received much attention due to their very important biological functions. In addition, there is growing evidence that one particular class of such proteins, the cryptochromes, may be associated with not only a biological photo-response but also a magneto-response, which may be responsible for the mechanism by which many animals can respond to the weak geomagnetic field. There is therefore an important scientific question of whether it is possible to directly observe such photochemical processes, and indeed the effects of weak magnetic fields thereon, taking place both in purified protein samples in vitro and in actual biochemical cells and tissues. For the former samples, the key lies in being able to make sensitive spectroscopic measurements on very small volumes of sample at potentially low protein concentrations, while the latter requires, in addition, spatially resolved measurements on length scales smaller than typical cellular components, i.e., sub-micron resolution. In this work, we discuss a two- and three-color confocal pump-probe microscopic approach that satisfies these requirements and is thus useful for experimental measurements in both cases.
Rodriguez-Valadez, Francisco; Ortiz-Exiga, Carlos; Ibanez, Jorge G; Alatorre-Ordaz, Alejandro; Gutierrez-Granados, Silvia
2005-03-15
The reduction of Cr(VI) to Cr(III) is achieved in a flow-by, parallel-plate reactor equipped with reticulated vitreous carbon (RVC) electrodes; this reduction can be accomplished by the application of relatively small potentials. Treatment of synthetic samples and field samples (from an electrodeposition plant) results in final Cr(VI) concentrations of 0.1 mg/L (i.e., the detection limit of the UV-vis characterization technique used here) in 25 and 43 min, respectively. Such concentrations comply with typical environmental legislation for wastewaters that regulates industrial effluents (at present, 0.5 mg/L for discharges). The results show the influence of the applied potential, pH, electrode porosity, volumetric flow, and solution concentration on the Cr(VI) reduction percentage and on the required electrolysis time. Values for the mass transfer coefficient and current efficiencies are also obtained. Although current efficiencies are not high, the fast kinetics observed make the proposed treatment an appealing alternative. The lower current efficiency obtained in the case of a field sample is attributed to electrochemical activation of impurities. The required times for the reduction of Cr(VI) are significantly lower than those reported elsewhere.
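In a mass-transfer-limited batch recirculation system of this kind, the concentration decays approximately first-order, C(t) = C0 * exp(-k_m * a * t), so the required electrolysis time follows directly from the mass transfer coefficient k_m and the specific electrode area a. An illustrative Python sketch (the rate parameters below are hypothetical, not the paper's measured values):

```python
import math

def electrolysis_time(c0, c_target, km, specific_area):
    """Time (s) to reach c_target under mass-transfer-limited first-order
    decay C(t) = c0 * exp(-km * a * t), with a = electrode area / volume."""
    return math.log(c0 / c_target) / (km * specific_area)

# Hypothetical values: 50 mg/L Cr(VI) down to the 0.5 mg/L discharge limit,
# km = 1e-5 m/s, a = 80 m^2/m^3:
t_s = electrolysis_time(50.0, 0.5, 1e-5, 80.0)
print(f"required time ~ {t_s / 60:.1f} min")
```

The logarithmic dependence on c0/c_target means each additional decade of removal costs the same fixed amount of time.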
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-11
..., the Small Business Act and SBA's regulation do not contain performance requirements applicable to.... 121.406 that the nonmanufacturer rule applies to women-owned small business (WOSB) and economically disadvantaged women-owned small business (EDWOSB) requirements for supplies. Again, Sec. 127.505 of SBA's...
Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W
2017-08-28
The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how, or with how many implants, an implant should be statistically compared with a benchmark to assess whether it is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking. Simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess whether a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining if an implant meets the required performance defined by an external benchmark. Current contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. It is clear that, when benchmarking implant performance, net failure estimated using 1-KM is preferable to crude failure estimated by competing risks models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved.
No commercial use is permitted unless otherwise expressly granted.
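The one-sample non-inferiority framing above can be illustrated with the usual normal-approximation sample-size formula for a failure proportion tested against an external benchmark, assuming the device truly performs at the benchmark rate. A Python sketch (the benchmark rate and margin below are illustrative, not the paper's scenario):

```python
from statistics import NormalDist
import math

def n_noninferiority(p_benchmark, margin, alpha=0.05, power=0.80):
    """One-sample non-inferiority sample size for a failure proportion:
    n = (z_{1-alpha} + z_{power})^2 * p * (1 - p) / margin^2,
    normal approximation, one-sided alpha, true rate at the benchmark."""
    z = NormalDist().inv_cdf
    p = p_benchmark
    return math.ceil((z(1 - alpha) + z(power)) ** 2 * p * (1 - p) / margin ** 2)

# e.g. a 5% failure benchmark with a 2% non-inferiority margin:
n = n_noninferiority(0.05, 0.02)
```

Halving the margin quadruples the required number of procedures, which is consistent with the abstract's point that tight margins demand very large registries.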
78 FR 21002 - Reporting and Recordkeeping Requirements Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-08
... SMALL BUSINESS ADMINISTRATION Reporting and Recordkeeping Requirements Under OMB Review AGENCY: Small Business Administration. ACTION: Notice of 30 day Reporting Requirements Submitted for OMB Review... notice to: Agency Clearance Officer, Curtis Rich, Small Business Administration, 409 3rd Street SW., 5th...
78 FR 76378 - Reporting and Recordkeeping Requirements Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-17
... SMALL BUSINESS ADMINISTRATION Reporting and Recordkeeping Requirements Under OMB Review AGENCY: Small Business Administration. ACTION: Notice of 30 day reporting requirements submitted for OMB review... notice to: Agency Clearance Officer, Curtis Rich, Small Business Administration, 409 3rd Street SW., 5th...
78 FR 76886 - Reporting and Recordkeeping Requirements Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-19
... SMALL BUSINESS ADMINISTRATION Reporting and Recordkeeping Requirements Under OMB Review AGENCY: Small Business Administration. ACTION: Notice of 30 day reporting requirements submitted for OMB review... notice to: Agency Clearance Officer, Curtis Rich, Small Business Administration, 409 3rd Street SW., 5th...
14 CFR 1251.108 - Administrative requirements for small recipients.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Administrative requirements for small recipients. 1251.108 Section 1251.108 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION NONDISCRIMINATION ON BASIS OF HANDICAP General Provisions § 1251.108 Administrative requirements for small...
14 CFR 1251.108 - Administrative requirements for small recipients.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Administrative requirements for small recipients. 1251.108 Section 1251.108 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION NONDISCRIMINATION ON BASIS OF HANDICAP General Provisions § 1251.108 Administrative requirements for small...
14 CFR 1251.108 - Administrative requirements for small recipients.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Administrative requirements for small recipients. 1251.108 Section 1251.108 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION NONDISCRIMINATION ON BASIS OF HANDICAP General Provisions § 1251.108 Administrative requirements for small...
14 CFR 1251.108 - Administrative requirements for small recipients.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Administrative requirements for small recipients. 1251.108 Section 1251.108 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION NONDISCRIMINATION ON BASIS OF HANDICAP General Provisions § 1251.108 Administrative requirements for small...
78 FR 45592 - Reporting and Recordkeeping Requirements Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-29
... SMALL BUSINESS ADMINISTRATION Reporting and Recordkeeping Requirements Under OMB Review AGENCY: Small Business Administration. ACTION: Notice of 30 day reporting requirements submitted for OMB review... notice to: Agency Clearance Officer, Curtis Rich, Small Business Administration, 409 3rd Street SW., 5th...
78 FR 65744 - Reporting and Recordkeeping Requirements Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-01
... SMALL BUSINESS ADMINISTRATION Reporting and Recordkeeping Requirements Under OMB Review AGENCY: Small Business Administration. ACTION: Notice of 30 day Reporting Requirements Submitted for OMB Review... notice to: Agency Clearance Officer, Curtis Rich, Small Business Administration, 409 3rd Street SW., 5th...
78 FR 39820 - Reporting and Recordkeeping Requirements Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-02
... SMALL BUSINESS ADMINISTRATION Reporting and Recordkeeping Requirements Under OMB Review AGENCY: Small Business Administration. ACTION: Notice of 30 day Reporting Requirements Submitted for OMB Review... notice to: Agency Clearance Officer, Curtis Rich, Small Business Administration, 409 3rd Street SW., 5th...
Simplifications in analyzing positron emission tomography data: effects on outcome measures.
Logan, Jean; Alexoff, David; Kriplani, Aarti
2007-10-01
Initial validation studies of new radiotracers generally involve kinetic models that require a measured arterial input function. This allows for the separation of tissue binding from delivery and blood flow effects. However, when using a tracer in a clinical setting, it is necessary to eliminate arterial blood sampling due to its invasiveness and the extra burden of counting and analyzing the blood samples for metabolites. In some cases, it may also be necessary to replace dynamic scanning with a shortened scanning period some time after tracer injection, as is done with FDG (F-18 fluorodeoxyglucose). These approximations represent loss of information. In this work, we considered several questions related to this: (1) Do differences in experimental conditions (drug treatments) or populations affect the input function, and what effect, if any, does this have on the final outcome measure? (2) How do errors in metabolite measurements enter into results? (3) What errors are incurred if the uptake ratio is used in place of the distribution volume ratio? (4) Is one- or two-point blood sampling any better for FDG data than the standardized uptake value? and (5) If blood sampling is necessary, what alternatives are there to arterial blood sampling? The first three questions were considered in terms of data from human dynamic positron emission tomography (PET) studies under conditions of baseline and drug pretreatment. Data from [11C]raclopride studies and those from the norepinephrine transporter tracer (S,S)-[11C]O-methyl reboxetine were used. Calculation of a metabolic rate for FDG using the operational equation requires a measured input function. We tested a procedure based on two blood samples to estimate the plasma integral and convolution that occur in the operational equation. There are some tracers for which blood sampling is necessary. 
For brain studies, one strategy to eliminate arterial sampling is to estimate the input function from radioactivity in the internal carotids, after correcting for partial volume and spillover. Some venous blood samples are still required for metabolite measurements. The ultimate solution to the problem of arterial sampling may be a wrist scanner, which acts as a small PET camera for imaging the arteries in the wrist; this is currently under development.
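As an illustrative aside (not code from the paper), the standardized uptake value (SUV) that the study benchmarks against blood-sampling-based measures is a simple normalization of tissue activity by injected dose per unit body weight. A minimal sketch, assuming typical clinical units and a tissue density of 1 g/mL:

```python
def suv(tissue_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Standardized uptake value: tissue activity divided by injected
    dose per gram of body weight (assumes tissue density of 1 g/mL)."""
    # dose per gram of body weight, converted to kBq/g
    dose_per_g_kbq = injected_dose_mbq * 1000.0 / (body_weight_kg * 1000.0)
    return tissue_kbq_per_ml / dose_per_g_kbq

# A 74 kg patient injected with 370 MBq of FDG, with a tissue
# concentration of 5 kBq/mL, gives an SUV of 1.0.
print(suv(5.0, 370.0, 74.0))
```

The appeal of the SUV, as the abstract notes, is that it needs no blood sampling at all, which is why it serves as the baseline against which one- and two-point sampling schemes are compared.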
Huth, John K.; Silvis, Alexander; Moosman, Paul R.; Ford, W. Mark; Sweeten, Sara E.
2015-01-01
Many aspects of foraging and roosting habitat of Myotis leibii (Eastern Small-Footed Bat), an emergent-rock roosting obligate, are poorly described. Previous comparisons of the effectiveness of acoustic sampling and mist-net captures have not included the Eastern Small-Footed Bat. Habitat requirements of this species differ from congeners in the region, and it is unclear whether survey protocols developed for other species are applicable. Using data from three overlapping studies at two sampling sites in western Virginia’s central Appalachian Mountains, detection probabilities were examined for three survey methods (acoustic surveys with automated identification of calls, visual searches of rock crevices, and mist-netting) for use in the development of “best practices” for future surveys and monitoring. Observer effects were investigated using an expanded version of the visual search data. Results suggested that acoustic surveys with automated call identification are not effective for documenting presence of Eastern Small-Footed Bats on talus slopes (basal detection rate of 0%) even when the species is known to be present. The broadband, high-frequency echolocation calls emitted by the Eastern Small-Footed Bat are prone to atmospheric attenuation, and this, along with signal reflection over talus slopes, lower echolocation rates, or possible misidentification as other bat species, may have contributed to poor acoustic survey success. Visual searches and mist-netting of emergent rock had basal detection probabilities of 91% and 75%, respectively. Success of visual searches varied among observers, but detection probability improved with practice. Additionally, visual searches were considerably more economical than mist-netting.
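The basal (single-survey) detection probabilities reported above combine across repeated visits in the usual way; a minimal sketch (an illustration of the arithmetic, not the authors' occupancy analysis):

```python
def cumulative_detection(p_single, n_visits):
    """Probability of at least one detection in n independent surveys,
    each with single-survey detection probability p_single."""
    return 1.0 - (1.0 - p_single) ** n_visits

# Two mist-netting visits at the reported p = 0.75 per visit:
print(cumulative_detection(0.75, 2))  # 0.9375
# Acoustic surveys at the reported 0% basal rate never accumulate:
print(cumulative_detection(0.0, 5))   # 0.0
```

This assumes independent visits with constant detectability, which the observer and practice effects noted in the abstract would violate in practice.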
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leishear, R.; Thaxton, D.; Minichan, R.
A sampling tool was required to evaluate residual activity (μCi per square foot) on the inner wall surfaces of underground nuclear waste storage tanks. The tool was required to collect a small sample from the 3/8 inch thick tank walls. This paper documents the design, testing, and deployment of the remotely operated sampling device. The sampler provides material from a known surface area to estimate the overall surface contamination in the tank prior to closure. The system consisted of a mast assembly, a control system, and the sampler, or end effector, which is defined as the operating component of a robotic arm. The mast assembly consisted of a vertical 30 foot long, 3 inch by 3 inch steel mast and a cantilevered arm hinged at the bottom of the mast and lowered by cable to align the attached sampler to the wall. The sampler and mast assembly were raised and lowered through an opening in the tank top, called a riser. The sampler is constructed of a mounting plate, a drill, springs to provide a drive force to the drill, a removable sampler head to collect the sample, a vacuum pump to draw the sample from the drill to a filter, and controls to operate the system. Once the sampler was positioned near the wall, electromagnets attached it to the wall, and the control system was operated to turn on the drill and vacuum to remove and collect a sample from the wall. Samples were collected on filters in removable sampler heads, which were readily transported for further laboratory testing.
Holzner, Gregor; Kriel, Frederik Hermanus; Priest, Craig
2015-05-05
The goal of most analytical techniques is to reduce the lower limit of detection; however, it is sometimes necessary to do the opposite. High sample concentrations or samples with high molar absorptivity (e.g., dyes and metal complexes) often require multiple dilution steps or laborious sample preparation prior to spectroscopic analysis. Here, we demonstrate dilution-free, one-step UV-vis spectroscopic analysis of high concentrations of platinum(IV) hexachloride in a micropillar array, that is, "pillar cuvette". The cuvette is spontaneously filled by wicking of the liquid sample into the micropillar array. The pillar height (thus, the film thickness) defines the optical path length, which was reduced to between 10 and 20 μm in this study (3 orders of magnitude smaller than in a typical cuvette). Only one small droplet (∼2 μL) of sample is required, and the dispensed volume need not be precise or even known to the analyst for accurate spectroscopy measurements. For opaque pillars, we show that absorbance is linearly related to platinum concentration (the Beer-Lambert Law). For fully transparent or semitransparent pillars, the measured absorbance was successfully corrected for the fractional surface coverage of the pillars and the transmittance of the pillars and reference. Thus, both opaque and transparent pillars can be applied to absorbance spectroscopy of high absorptivity, microliter samples. It is also shown here that the pillar array has a useful secondary function as an integrated (in-cuvette) filter for particulates. For pillar cuvette measurements of platinum solutions spiked with 6 μm diameter polystyrene spheres, filtered and unfiltered samples gave identical spectra.
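The path-length reduction described above is what keeps high-absorptivity samples on scale; a minimal Beer-Lambert sketch (illustrative only: the molar absorptivity below is a placeholder, and the opaque-pillar coverage correction is a plausible reading of the abstract, not the paper's exact formula):

```python
import math

def concentration_from_absorbance(absorbance, epsilon_l_mol_cm, path_um):
    """Beer-Lambert law A = eps * c * l, solved for concentration c,
    with the path length given in micrometres as in a pillar cuvette."""
    path_cm = path_um * 1e-4  # micrometres -> centimetres
    return absorbance / (epsilon_l_mol_cm * path_cm)

def sample_absorbance_opaque_pillars(measured_a, pillar_coverage):
    """For opaque pillars covering area fraction f, light passes only
    through the (1 - f) liquid fraction: T_meas = (1 - f) * T_sample,
    so A_sample = A_meas + log10(1 - f)."""
    return measured_a + math.log10(1.0 - pillar_coverage)

# With a hypothetical epsilon of 24500 L/(mol*cm) and a 15 um path,
# an absorbance of 0.5 corresponds to roughly 13.6 mmol/L:
c = concentration_from_absorbance(0.5, 24500.0, 15.0)
```

The micrometre-scale path is what makes concentrations three orders of magnitude higher than a standard 1 cm cuvette could handle still fall in the measurable absorbance range.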
Orth, Patrick; Zurakowski, David; Alini, Mauro; Cucchiarini, Magali
2013-01-01
Advanced tissue engineering approaches for articular cartilage repair in the knee joint rely on translational animal models. In these investigations, cartilage defects may be established either in one joint (unilateral design) or in both joints of the same animal (bilateral design). We hypothesized that a lower intraindividual variability following the bilateral strategy would reduce the number of required joints. Standardized osteochondral defects were created in the trochlear groove of 18 rabbits. In 12 animals, defects were produced unilaterally (unilateral design; n=12 defects), while defects were created bilaterally in 6 animals (bilateral design; n=12 defects). After 3 weeks, osteochondral repair was evaluated histologically applying an established grading system. Based on intra- and interindividual variabilities, required sample sizes for the detection of discrete differences in the histological score were determined for both study designs (α=0.05, β=0.20). Coefficients of variation (%CV) of the total histological score values were 1.9-fold increased following the unilateral design when compared with the bilateral approach (26 versus 14%CV). The resulting numbers of joints needed to treat were always higher for the unilateral design, resulting in an up to 3.9-fold increase in the required number of experimental animals. This effect was most pronounced for the detection of small-effect sizes and estimating large standard deviations. The data underline the possible benefit of bilateral study designs for the decrease of sample size requirements for certain investigations in articular cartilage research. These findings might also be transferred to other scoring systems, defect types, or translational animal models in the field of cartilage tissue engineering. PMID:23510128
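The sample-size reasoning above can be sketched with the standard normal-approximation formula for comparing two means, n = 2((z_{1-a/2} + z_{1-b})·sigma/delta)^2, using the %CV values reported in the abstract (the 20% detectable difference below is an illustrative choice, not a value from the paper, and the paper's own method may differ):

```python
import math

def sample_size_per_group(cv_percent, effect_percent,
                          z_alpha=1.96, z_beta=0.8416):
    """Normal-approximation sample size per group for a two-sample
    comparison of means (defaults: alpha = 0.05 two-sided, power = 0.80),
    with the standard deviation (CV) and detectable difference both
    expressed as percentages of the mean."""
    n = 2.0 * ((z_alpha + z_beta) * cv_percent / effect_percent) ** 2
    return math.ceil(n)

# Unilateral design (26 %CV) vs. bilateral design (14 %CV),
# detecting a 20% difference in the mean histological score:
print(sample_size_per_group(26.0, 20.0))  # 27 joints per group
print(sample_size_per_group(14.0, 20.0))  # 8 joints per group
```

Under these assumptions the unilateral design needs roughly 3.4 times as many joints, in line with the abstract's observation that the required number of animals can increase severalfold.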
Effect of Common Cryoprotectants on Critical Warming Rates and Ice Formation in Aqueous Solutions
Hopkins, Jesse B.; Badeau, Ryan; Warkentin, Matthew; Thorne, Robert E.
2012-01-01
Ice formation on warming is of comparable or greater importance to ice formation on cooling in determining survival of cryopreserved samples. Critical warming rates required for ice-free warming of vitrified aqueous solutions of glycerol, dimethyl sulfoxide, ethylene glycol, polyethylene glycol 200 and sucrose have been measured for warming rates of order 10 to 10^4 K/s. Critical warming rates are typically one to three orders of magnitude larger than critical cooling rates. Warming rates vary strongly with cooling rates, perhaps due to the presence of small ice fractions in nominally vitrified samples. Critical warming and cooling rate data spanning orders of magnitude in rates provide rigorous tests of ice nucleation and growth models and their assumed input parameters. Current models with current best estimates for input parameters provide a reasonable account of critical warming rates for glycerol solutions at high concentrations/low rates, but overestimate both critical warming and cooling rates by orders of magnitude at lower concentrations and larger rates. In vitrification protocols, minimizing concentrations of potentially damaging cryoprotectants while minimizing ice formation will require ultrafast warming rates, as well as fast cooling rates to minimize the required warming rates. PMID:22728046
Minimal requirements for the molecular testing of lung cancer.
Popper, Helmut H; Tímár, József; Ryska, Ales; Olszewski, Wlodzimierz
2014-10-01
From the perspective of contemporary pathologic diagnostics of lung cancer, the tissue obtained is a key issue, since small biopsies and cytology still play a major role. In the non-small cell lung cancer era, cytology was considered equal to biopsy. In recent years, however, it is often unable to provide a quality diagnosis and must be replaced by biopsy. Various molecular techniques can handle different tissue samples, which must be considered during molecular pathology diagnosis. In addition, the tumor cell-to-normal cell ratio in the obtained tissue, as well as the absolute tumor cell number, have great significance, and this information must be provided in the primary lung cancer diagnosis. Last but not least, continuous, sustainable molecular diagnostics of lung cancer requires rational algorithms, affordable technology, and appropriate reimbursement.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 26 Internal Revenue 18 2012-04-01 2012-04-01 false Required use of magnetic media for returns of... Information and Returns Returns and Records § 301.6037-2 Required use of magnetic media for returns of... magnetic media. An electing small business corporation required to file an electing small business return...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 18 2010-04-01 2010-04-01 false Required use of magnetic media for returns of... Information and Returns Returns and Records § 301.6037-2 Required use of magnetic media for returns of... magnetic media. An electing small business corporation required to file an electing small business return...