Electrofishing Effort Required to Estimate Biotic Condition in Southern Idaho Rivers
An important issue surrounding biomonitoring in large rivers is the minimum sampling effort required to collect an adequate number of fish for accurate and precise determinations of biotic condition. During the summer of 2002, we sampled 15 randomly selected large-river sites in...
Electrofishing effort requirements for estimating species richness in the Kootenai River, Idaho
Watkins, Carson J.; Quist, Michael C.; Shepard, Bradley B.; Ireland, Susan C.
2016-01-01
This study was conducted on the Kootenai River, Idaho to provide insight on sampling requirements to optimize future monitoring effort associated with the response of fish assemblages to habitat rehabilitation. Our objective was to define the electrofishing effort (m) needed to have a 95% probability of sampling 50, 75, and 100% of the observed species richness and to evaluate the relative influence of depth, velocity, and instream woody cover on sample size requirements. Sidechannel habitats required more sampling effort to achieve 75 and 100% of the total species richness than main-channel habitats. The sampling effort required to have a 95% probability of sampling 100% of the species richness was 1100 m for main-channel sites and 1400 m for side-channel sites. We hypothesized that the difference in sampling requirements between main- and side-channel habitats was largely due to differences in habitat characteristics and species richness between main- and side-channel habitats. In general, main-channel habitats had lower species richness than side-channel habitats. Habitat characteristics (i.e., depth, current velocity, and woody instream cover) were not related to sample size requirements. Our guidelines will improve sampling efficiency during monitoring effort in the Kootenai River and provide insight on sampling designs for other large western river systems where electrofishing is used to assess fish assemblages.
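A relation of this kind (sampling effort versus the probability of capturing a given share of species richness) can be approximated by resampling per-increment catch records, as in the minimal Python sketch below. The catch lists, species names, 100-m increment size, and 95% probability target are illustrative assumptions, not data or code from the study.

```python
import random

# Hypothetical catch records: each 100-m electrofishing increment lists the
# species encountered in that increment (illustrative data, not Kootenai data).
increments = [
    ["sucker", "whitefish"], ["sucker"], ["whitefish", "trout"],
    ["sucker", "peamouth"], ["trout"], ["sucker", "whitefish", "burbot"],
    ["peamouth"], ["whitefish"], ["sucker", "trout"], ["burbot"],
]
total_richness = len({s for inc in increments for s in inc})
rng = random.Random(1)

def effort_for_target(target_fraction, prob=0.95, n_boot=2000):
    """Smallest effort (in 100-m increments) whose resampled cumulative species
    richness reaches target_fraction of the total with probability >= prob."""
    target = target_fraction * total_richness
    for n_inc in range(1, len(increments) + 1):
        hits = 0
        for _ in range(n_boot):
            subset = rng.sample(increments, n_inc)  # draw increments without replacement
            if len({s for inc in subset for s in inc}) >= target:
                hits += 1
        if hits / n_boot >= prob:
            return n_inc * 100  # metres of river margin electrofished
    return None  # target not reachable with the available increments

for frac in (0.50, 0.75, 1.00):
    print(f"{int(frac * 100)}% of observed richness: {effort_for_target(frac)} m")
```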
HIGH VOLUME INJECTION FOR GCMS ANALYSIS OF PARTICULATE ORGANIC SPECIES IN AMBIENT AIR
Detection of organic species in ambient particulate matter typically requires large air sample volumes, frequently achieved by grouping samples into monthly composites. Decreasing the volume of air sample required would allow shorter collection times and more convenient sample c...
ERIC Educational Resources Information Center
Wolf, Erika J.; Harrington, Kelly M.; Clark, Shaunna L.; Miller, Mark W.
2013-01-01
Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb.…
Approximate sample sizes required to estimate length distributions
Miranda, L.E.
2007-01-01
The sample sizes required to estimate fish length were determined by bootstrapping from reference length distributions. Depending on population characteristics and species-specific maximum lengths, 1-cm length-frequency histograms required 375-1,200 fish to estimate within 10% with 80% confidence, 2.5-cm histograms required 150-425 fish, proportional stock density required 75-140 fish, and mean length required 75-160 fish. In general, smaller species, smaller populations, populations with higher mortality, and simpler length statistics required fewer samples. Indices that require low sample sizes may be suitable for monitoring population status, and when large changes in length are evident, additional sampling effort may be allocated to more precisely define length status with more informative estimators. © Copyright by the American Fisheries Society 2007.
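A bootstrap of the kind described can be sketched as follows. The 10% accuracy criterion on mean length and the 80% confidence level come from the abstract, but the lognormal reference population is a placeholder; the study resampled from observed reference length distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical reference length distribution (mm); the study resampled from
# observed length-frequency data rather than a parametric model like this one.
reference = rng.lognormal(mean=np.log(250), sigma=0.35, size=5000)

def min_n_for_mean(reference, rel_error=0.10, confidence=0.80, n_boot=1000):
    """Smallest sample size whose bootstrapped mean length falls within
    rel_error of the reference mean in at least `confidence` of resamples."""
    true_mean = reference.mean()
    for n in range(25, 1001, 25):
        means = np.array([rng.choice(reference, size=n, replace=True).mean()
                          for _ in range(n_boot)])
        if np.mean(np.abs(means - true_mean) / true_mean <= rel_error) >= confidence:
            return n
    return None

print("fish required for mean length within 10% (80% confidence):",
      min_n_for_mean(reference))
```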
Low-cost floating emergence net and bottle trap: Comparison of two designs
Cadmus, Pete; Pomeranz, Justin; Kraus, Johanna M.
2016-01-01
Sampling emergent aquatic insects is of interest to many freshwater ecologists. Many quantitative emergence traps require the use of aspiration for collection. However, aspiration is infeasible in studies with large amounts of replication that is often required in large biomonitoring projects. We designed an economic, collapsible pyramid-shaped floating emergence trap with an external collection bottle that avoids the need for aspiration. This design was compared experimentally to a design of similar dimensions that relied on aspiration to ensure comparable results. The pyramid-shaped design captured twice as many total emerging insects. When a preservative was used in bottle collectors, >95% of the emergent abundance was collected in the bottle. When no preservative was used, >81% of the total insects were collected from the bottle. In addition to capturing fewer emergent insects, the traps that required aspiration took significantly longer to sample. Large studies and studies sampling remote locations could benefit from the economical construction, speed of sampling, and capture efficiency.
USE OF DISPOSABLE DIAPERS TO COLLECT URINE IN EXPOSURE STUDIES
Large studies of children's health as it relates to exposures to chemicals in the environment often require measurements of biomarkers of chemical exposures or effects in urine samples. But collection of urine samples from infants and toddlers is difficult. For large exposure s...
Dahling, Daniel R
2002-01-01
Large-scale virus studies of groundwater systems require practical and sensitive procedures for both sample processing and viral assay. Filter adsorption-elution procedures have traditionally been used to process large-volume water samples for viruses. In this study, five filter elution procedures using cartridge filters were evaluated for their effectiveness in processing samples. Of the five procedures tested, the third method, which incorporated two separate beef extract elutions (one being an overnight filter immersion in beef extract), recovered 95% of seeded poliovirus compared with recoveries of 36 to 70% for the other methods. For viral enumeration, an expanded roller bottle quantal assay was evaluated using seeded poliovirus. This cytopathic-based method was considerably more sensitive than the standard plaque assay method. The roller bottle system was more economical than the plaque assay for the evaluation of comparable samples. Using roller bottles required less time and manipulation than the plaque procedure and greatly facilitated the examination of large numbers of samples. The combination of the improved filter elution procedure and the roller bottle assay for viral analysis makes large-scale virus studies of groundwater systems practical. This procedure was subsequently field tested during a groundwater study in which large-volume samples (exceeding 800 L) were processed through the filters.
Estimation of the rain signal in the presence of large surface clutter
NASA Technical Reports Server (NTRS)
Ahamad, Atiq; Moore, Richard K.
1994-01-01
The principal limitation for the use of a spaceborne imaging SAR as a rain radar is the surface-clutter problem. Signals may be estimated in the presence of noise by averaging large numbers of independent samples. This method was applied to obtain an estimate of the rain echo by averaging a set of N_c samples of the clutter in a separate measurement and subtracting the clutter estimate from the combined estimate. The number of samples required for successful estimation (within 10-20%) for off-vertical angles of incidence appears to be prohibitively large. However, by appropriately degrading the resolution in both range and azimuth, the required number of samples can be obtained. For vertical incidence, the number of samples required for successful estimation is reasonable. In estimating the clutter it was assumed that the surface echo is the same outside the rain volume as it is within the rain volume. This may be true for the forest echo, but for convective storms over the ocean the surface echo outside the rain volume is very different from that within. It is suggested that the experiment be performed with vertical incidence over forest to overcome this limitation.
Hakjun Rhee; Randy B. Foltz; James L. Fridley; Finn Krogstad; Deborah S. Page-Dumroese
2014-01-01
Measurement of particle-size distribution (PSD) of soil with large-sized particles (e.g., 25.4 mm diameter) requires a large sample and numerous particle-size analyses (PSAs). A new method is needed that would reduce time, effort, and cost for PSAs of the soil and aggregate material with large-sized particles. We evaluated a nested method for sampling and PSA by...
Novikov, I; Fund, N; Freedman, L S
2010-01-15
Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
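The simulation check described above can be sketched roughly as below. The event probability at the covariate mean, the odds ratio per standard deviation, and the candidate sample size are placeholders, and statsmodels is used only as a convenient fitting routine; this is not the authors' code.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(42)

def simulated_power(n, p_at_mean=0.2, odds_ratio_per_sd=1.5, alpha=0.05, n_sim=500):
    """Empirical power of the Wald test for one N(0,1) covariate in simple
    logistic regression. Note: p_at_mean is the event probability when the
    covariate equals its mean (x = 0), the parameter some sample-size methods
    require instead of the overall population prevalence."""
    beta0 = np.log(p_at_mean / (1 - p_at_mean))
    beta1 = np.log(odds_ratio_per_sd)
    z_crit = norm.ppf(1 - alpha / 2)
    rejections = 0
    for _ in range(n_sim):
        x = rng.standard_normal(n)
        p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
        y = rng.binomial(1, p)
        fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        rejections += abs(fit.params[1] / fit.bse[1]) > z_crit
    return rejections / n_sim

print("empirical power at n = 200:", simulated_power(200))
```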
How Big Is Big Enough? Sample Size Requirements for CAST Item Parameter Estimation
ERIC Educational Resources Information Center
Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard
2006-01-01
Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…
Procedures and equipment for staining large numbers of plant root samples for endomycorrhizal assay.
Kormanik, P P; Bryan, W C; Schultz, R C
1980-04-01
A simplified method of clearing and staining large numbers of plant roots for vesicular-arbuscular (VA) mycorrhizal assay is presented. Equipment needed for handling multiple samples is described, and two formulations for the different chemical solutions are presented. Because one formulation contains phenol, its use should be limited to basic studies for which adequate laboratory exhaust hoods are available and great clarity of fungal structures is required. The second staining formulation, utilizing lactic acid instead of phenol, is less toxic, requires less elaborate laboratory facilities, and has proven to be completely satisfactory for VA assays.
ERIC Educational Resources Information Center
Fiedler, Klaus; Kareev, Yaakov
2006-01-01
Adaptive decision making requires that contingencies between decision options and their relative assets be assessed accurately and quickly. The present research addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a seemingly paradoxical effect…
Design of Phase II Non-inferiority Trials.
Jung, Sin-Ho
2017-09-01
With the development of inexpensive treatment regimens and less invasive surgical procedures, we are confronted with non-inferiority study objectives. A non-inferiority phase III trial requires a roughly four times larger sample size than that of a similar standard superiority trial. Because of the large required sample size, we often face feasibility issues in opening a non-inferiority trial. Furthermore, due to the lack of phase II non-inferiority trial design methods, we do not have an opportunity to investigate the efficacy of the experimental therapy through a phase II trial. As a result, we often fail to open a non-inferiority phase III trial, and a large number of non-inferiority clinical questions remain unanswered. In this paper, we develop designs for non-inferiority randomized phase II trials with feasible sample sizes. First, we review a design method for non-inferiority phase III trials. Subsequently, we propose three different designs for non-inferiority phase II trials that can be used under different settings. Each method is demonstrated with examples. Each of the proposed design methods is shown to require a reasonable sample size for non-inferiority phase II trials. The three non-inferiority phase II trial designs are used under different settings but require similar sample sizes that are typical for phase II trials.
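One common way to see the roughly fourfold inflation is the standard normal-approximation sample-size formula for comparing two proportions, sketched below with placeholder response rates and a margin set to half of the superiority effect. This is a generic illustration under those assumptions, not the phase II design method proposed in the paper.

```python
from math import ceil
from scipy.stats import norm

def per_arm_n(p_control, p_experimental, margin, alpha=0.025, power=0.80):
    """Per-arm sample size for a non-inferiority comparison of two proportions
    (higher response is better; H0: p_e - p_c <= -margin), using the usual
    normal approximation with unpooled variances. margin=0 gives the
    corresponding superiority calculation."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    variance = p_control * (1 - p_control) + p_experimental * (1 - p_experimental)
    effect = (p_experimental - p_control) + margin
    return ceil((z_a + z_b) ** 2 * variance / effect ** 2)

# Superiority trial powered for a 10-point improvement over a 60% control rate.
print("superiority (10-point effect), per arm:  ", per_arm_n(0.60, 0.70, 0.0))
# Non-inferiority trial with equally effective arms and a margin of half that
# effect (a common choice); n scales like 1/margin^2, hence roughly 4x larger.
print("non-inferiority (5-point margin), per arm:", per_arm_n(0.60, 0.60, 0.05))
```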
Accuracy assessment with complex sampling designs
Raymond L. Czaplewski
2010-01-01
A reliable accuracy assessment of remotely sensed geospatial data requires a sufficiently large probability sample of expensive reference data. Complex sampling designs reduce cost or increase precision, especially with regional, continental and global projects. The General Restriction (GR) Estimator and the Recursive Restriction (RR) Estimator separate a complex...
Goeman, Valerie R; Tinkler, Stacy H; Hammac, G Kenitra; Ruple, Audrey
2018-04-01
Environmental surveillance for Salmonella enterica can be used for early detection of contamination; thus routine sampling is an integral component of infection control programs in hospital environments. At the Purdue University Veterinary Teaching Hospital (PUVTH), the technique regularly employed in the large animal hospital for sample collection uses sterile gauze sponges for environmental sampling, which has proven labor-intensive and time-consuming. Alternative sampling methods use Swiffer brand electrostatic wipes for environmental sample collection, which are reportedly effective and efficient. It was hypothesized that use of Swiffer wipes for sample collection would be more efficient and less costly than the use of gauze sponges. A head-to-head comparison between the 2 sampling methods was conducted in the PUVTH large animal hospital and relative agreement, cost-effectiveness, and sampling efficiency were compared. There was fair agreement in culture results between the 2 sampling methods, but Swiffer wipes required less time and less physical effort to collect samples and were more cost-effective.
Otero, Jorge; Guerrero, Hector; Gonzalez, Laura; Puig-Vidal, Manel
2012-01-01
The time required to image large samples is an important limiting factor in SPM-based systems. In multiprobe setups, especially when working with biological samples, this drawback can make it impossible to conduct certain experiments. In this work, we present a feedforward controller based on bang-bang and adaptive controls. The controls exploit the difference between the maximum speeds that can be used for imaging, which depend on the flatness of the sample zone. Topographic images of Escherichia coli bacteria samples were acquired using the implemented controllers. Results show that scanning faster in the flat zones, rather than using a constant scanning speed for the whole image, speeds up the imaging of large samples by up to a factor of 4. PMID:22368491
Classical boson sampling algorithms with superior performance to near-term experiments
NASA Astrophysics Data System (ADS)
Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony
2017-12-01
It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
New biosensors for food safety screening solutions
NASA Astrophysics Data System (ADS)
Dyer, Maureen A.; Oberholtzer, Jennifer A.; Mulligan, David C.; Hanson, William P.
2009-05-01
Hanson Technologies has developed the automated OmniFresh 1000 system to sample large volumes of produce wash water, collect the pathogens, and detect their presence. By collecting a continuous sidestream of wash water, the OmniFresh uses a sample that represents the entire lot of produce being washed. The OmniFresh does not require bacterial culture or enrichment, and it detects both live and dead bacteria in the collected sample using an in-line sensor. Detection occurs in an array biosensor capable of handling large samples with complex matrices. Additionally, samples can be sent for traditional confirmatory tests after the screening performed by the OmniFresh.
Measuring discharge with ADCPs: Inferences from synthetic velocity profiles
Rehmann, C.R.; Mueller, D.S.; Oberg, K.A.
2009-01-01
Synthetic velocity profiles are used to determine guidelines for sampling discharge with acoustic Doppler current profilers (ADCPs). The analysis allows the effects of instrument characteristics, sampling parameters, and properties of the flow to be studied systematically. For mid-section measurements, the averaging time required for a single profile measurement always exceeded the 40 s usually recommended for velocity measurements, and it increased with increasing sample interval and increasing time scale of the large eddies. Similarly, simulations of transect measurements show that discharge error decreases as the number of large eddies sampled increases. The simulations allow sampling criteria that account for the physics of the flow to be developed. © 2009 ASCE.
A standard sampling protocol to assess the fish assemblages and abundances in large, coldwater rivers is most accurate and precise if consistent gears and levels of effort are used at each site. This requires thorough crew training, quality control audits, and replicate sampling...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dragone, A.; Pratte, J. F.
An ASIC for the readout of signals from X-ray Active Matrix Pixel Sensor (XAMPS) detectors to be used at the Linac Coherent Light Source (LCLS) is presented. The X-ray Pump Probe (XPP) instrument, for which the ASIC has been designed, requires a large input dynamic range on the order of 10⁴ photons at 8 keV with a resolution of half a photon FWHM. Due to the size of the pixel and the length of the readout line, a large input capacitance is expected, leading to stringent requirements on noise optimization. Furthermore, the large number of pixels needed for good position resolution and the fixed LCLS beam period impose limitations on the time available for single-pixel readout. Considering the periodic nature of the LCLS beam, the ASIC developed for this application is a time-variant system providing low-noise charge integration, filtering, and correlated double sampling. In order to cope with the large input dynamic range, a charge pump scheme implementing a zero-balance measurement method has been introduced. It provides an on-chip 3-bit coarse digital conversion of the integrated charge. The residual charge is sampled using correlated double sampling into analog memory and measured with the required resolution. The first 64-channel prototype of the ASIC has been fabricated in TSMC 0.25 µm CMOS technology. In this paper, the ASIC architecture and performance are presented.
Applying Active Learning to Assertion Classification of Concepts in Clinical Text
Chen, Yukun; Mani, Subramani; Xu, Hua
2012-01-01
Supervised machine learning methods for clinical natural language processing (NLP) research require a large number of annotated samples, which are very expensive to build because of the involvement of physicians. Active learning, an approach that actively samples from a large pool, provides an alternative solution. Its major goal in classification is to reduce the annotation effort while maintaining the quality of the predictive model. However, few studies have investigated its uses in clinical NLP. This paper reports an application of active learning to a clinical text classification task: to determine the assertion status of clinical concepts. The annotated corpus for the assertion classification task in the 2010 i2b2/VA Clinical NLP Challenge was used in this study. We implemented several existing and newly developed active learning algorithms and assessed their uses. The outcome is reported in the global ALC score, based on the Area under the average Learning Curve of the AUC (Area Under the Curve) score. Results showed that when the same number of annotated samples was used, active learning strategies could generate better classification models (best ALC = 0.7715) than the passive learning method (random sampling) (ALC = 0.7411). Moreover, to achieve the same classification performance, active learning strategies required fewer samples than the random sampling method. For example, to achieve an AUC of 0.79, the random sampling method used 32 samples, while our best active learning algorithm required only 12 samples, a reduction of 62.5% in manual annotation effort. PMID:22127105
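A minimal pool-based uncertainty-sampling loop of the kind compared against random sampling might look like the sketch below. The synthetic data, the logistic-regression learner, and the one-example-per-step query budget are stand-ins for the i2b2/VA corpus and the algorithms actually evaluated in the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=50, n_informative=10,
                           random_state=0)
X_pool, y_pool = X[:1500], y[:1500]   # pool whose labels are revealed on request
X_test, y_test = X[1500:], y[1500:]   # held-out evaluation set

# Seed set: a few labeled examples from each class.
labeled = list(np.where(y_pool == 0)[0][:5]) + list(np.where(y_pool == 1)[0][:5])

for step in range(20):
    model = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"labeled examples: {len(labeled):3d}   test AUC: {auc:.3f}")
    # Uncertainty sampling: query the still-unlabeled example whose predicted
    # probability is closest to 0.5, then add it to the labeled set.
    unlabeled = np.setdiff1d(np.arange(len(X_pool)), labeled)
    probs = model.predict_proba(X_pool[unlabeled])[:, 1]
    labeled.append(int(unlabeled[np.argmin(np.abs(probs - 0.5))]))
```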
IT Infrastructure Components for Biobanking
Prokosch, H.U.; Beck, A.; Ganslandt, T.; Hummel, M.; Kiehntopf, M.; Sax, U.; Ückert, F.; Semler, S.
2010-01-01
Objective: Within translational research projects in recent years, large biobanks have been established, mostly supported by homegrown, proprietary software solutions. No general requirements for biobanking IT infrastructures have been published yet. This paper presents an exemplary biobanking IT architecture, a requirements specification for a biorepository management tool and exemplary illustrations of three major types of requirements. Methods: We have pursued a comprehensive literature review for biobanking IT solutions and established an interdisciplinary expert panel for creating the requirements specification. The exemplary illustrations were derived from a requirements analysis within two university hospitals. Results: The requirements specification comprises a catalog with more than 130 detailed requirements grouped into 3 major categories and 20 subcategories. Special attention is given to multitenancy capabilities in order to support the project-specific definition of varying research and biobanking contexts, the definition of workflows to track sample processing, sample transportation and sample storage, and the automated integration of preanalytic handling and storage robots. Conclusion: IT support for biobanking projects can be based on a federated architectural framework comprising primary data sources for clinical annotations, a pseudonymization service, a clinical data warehouse with a flexible and user-friendly query interface and a biorepository management system. Flexibility and scalability of all such components are vital since large medical facilities such as university hospitals will have to support biobanking for varying monocentric and multicentric research scenarios and multiple medical clients. PMID:23616851
Foster, G.D.; Foreman, W.T.; Gates, Paul M.
1991-01-01
The reliability of the Goulden large-sample extractor in preconcentrating pesticides from water was evaluated from the recoveries of 35 pesticides amended to filtered stream waters. Recoveries greater than 90% were observed for many of the pesticides in each major chemical class, but recoveries for some of the individual pesticides varied in seemingly unpredictable ways. Corrections cannot yet be factored into liquid-liquid extraction theory to account for matrix effects, which were apparent between the two stream waters tested. The Goulden large-sample extractor appears to be well suited for rapid chemical screening applications, with quantitative analysis requiring special quality control considerations. © 1991 American Chemical Society.
Evaluation of the Biological Sampling Kit (BiSKit) for Large-Area Surface Sampling
Buttner, Mark P.; Cruz, Patricia; Stetzenbach, Linda D.; Klima-Comba, Amy K.; Stevens, Vanessa L.; Emanuel, Peter A.
2004-01-01
Current surface sampling methods for microbial contaminants are designed to sample small areas and utilize culture analysis. The total number of microbes recovered is low because a small area is sampled, making detection of a potential pathogen more difficult. Furthermore, sampling of small areas requires a greater number of samples to be collected, which delays the reporting of results, taxes laboratory resources and staffing, and increases analysis costs. A new biological surface sampling method, the Biological Sampling Kit (BiSKit), designed to sample large areas and to be compatible with testing with a variety of technologies, including PCR and immunoassay, was evaluated and compared to other surface sampling strategies. In experimental room trials, wood laminate and metal surfaces were contaminated by aerosolization of Bacillus atrophaeus spores, a simulant for Bacillus anthracis, into the room, followed by settling of the spores onto the test surfaces. The surfaces were sampled with the BiSKit, a cotton-based swab, and a foam-based swab. Samples were analyzed by culturing, quantitative PCR, and immunological assays. The results showed that the large surface area (1 m²) sampled with the BiSKit resulted in concentrations of B. atrophaeus in samples that were up to 10-fold higher than the concentrations obtained with the other methods tested. A comparison of wet and dry sampling with the BiSKit indicated that dry sampling was more efficient (efficiency, 18.4%) than wet sampling (efficiency, 11.3%). The sensitivities of detection of B. atrophaeus on metal surfaces were 42 ± 5.8 CFU/m² for wet sampling and 100.5 ± 10.2 CFU/m² for dry sampling. These results demonstrate that the use of a sampling device capable of sampling larger areas results in higher sensitivity than that obtained with currently available methods and has the advantage of sampling larger areas, thus requiring collection of fewer samples per site. PMID:15574898
Importance sampling large deviations in nonequilibrium steady states. I.
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T
2018-03-28
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
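The exponential difficulty with unguided sampling can be seen even for the simplest estimator of a scaled cumulant generating function (SCGF). The sketch below applies direct Monte Carlo to a drifted Brownian walker, for which the exact SCGF is known; the drift, diffusion, trajectory length, and bias values are chosen purely for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, T, dt, n_traj = 0.5, 1.0, 10.0, 0.01, 5000
steps = int(T / dt)

# Time-integrated observable A_T = x(T) for independent drifted Brownian walkers.
increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_traj, steps))
A = increments.sum(axis=1)

for lam in (0.2, 1.0, 3.0):
    w = np.exp(lam * A)                                  # exponential tilting weights
    scgf_estimate = np.log(w.mean()) / T                 # naive SCGF estimator
    scgf_exact = lam * mu + 0.5 * sigma ** 2 * lam ** 2  # exact result for this model
    ess = w.sum() ** 2 / (w ** 2).sum()                  # effective sample size
    print(f"lambda={lam:3.1f}  estimate={scgf_estimate:6.3f}  "
          f"exact={scgf_exact:6.3f}  effective samples={ess:8.1f}")
```

As the bias parameter grows, the effective sample size collapses and the estimate falls well below the exact value, which is the degeneracy that guiding functions or importance sampling are meant to cure.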
Optimizing liquid effluent monitoring at a large nuclear complex.
Chou, Charissa J; Barnett, D Brent; Johnson, Vernon G; Olson, Phil M
2003-12-01
Effluent monitoring typically requires a large number of analytes and samples during the initial or startup phase of a facility. Once a baseline is established, the analyte list and sampling frequency may be reduced. Although there is a large body of literature relevant to the initial design, few, if any, published papers exist on updating established effluent monitoring programs. This paper statistically evaluates four years of baseline data to optimize the liquid effluent monitoring efficiency of a centralized waste treatment and disposal facility at a large defense nuclear complex. Specific objectives were to: (1) assess temporal variability in analyte concentrations, (2) determine operational factors contributing to waste stream variability, (3) assess the probability of exceeding permit limits, and (4) streamline the sampling and analysis regime. Results indicated that the probability of exceeding permit limits was one in a million under normal facility operating conditions, sampling frequency could be reduced, and several analytes could be eliminated. Furthermore, indicators such as gross alpha and gross beta measurements could be used in lieu of more expensive specific isotopic analyses (radium, cesium-137, and strontium-90) for routine monitoring. Study results were used by the state regulatory agency to modify monitoring requirements for a new discharge permit, resulting in an annual cost savings of US$223,000. This case study demonstrates that statistical evaluation of effluent contaminant variability coupled with process knowledge can help plant managers and regulators streamline analyte lists and sampling frequencies based on detection history and environmental risk.
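One hedged way to reproduce this kind of exceedance screening is to fit a distribution to baseline concentrations and evaluate its upper tail against the permit limit, as sketched below. The lognormal model, the synthetic baseline values, and the limit are placeholders rather than data from the facility.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Placeholder baseline data: four years of monthly results for one analyte.
baseline = rng.lognormal(mean=np.log(2.0), sigma=0.6, size=48)
permit_limit = 60.0

# Fit a lognormal (location fixed at zero) and estimate the probability that a
# future routine sample exceeds the permit limit.
shape, loc, scale = stats.lognorm.fit(baseline, floc=0)
p_exceed = stats.lognorm.sf(permit_limit, shape, loc=loc, scale=scale)
print(f"estimated exceedance probability per sample: {p_exceed:.2e}")
# A very small probability is the kind of evidence used to justify reducing
# sampling frequency or dropping an analyte, subject to regulatory approval.
```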
EVALUATION OF SOLID ADSORBENTS FOR THE COLLECTION AND ANALYSES OF AMBIENT BIOGENIC VOLATILE ORGANICS
Micrometeorological flux measurements of biogenic volatile organic compounds (BVOCs) usually require that large volumes of air be collected (whole air samples) or focused during the sampling process (cryogenic trapping or gas-solid partitioning on adsorbents) in order to achiev...
Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.
2016-01-01
The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of areas under ROC/EROC curve were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment. PMID:27493982
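For reference, once the image data have been channelized, the core CHO computation reduces to a Hotelling template and a detectability index, as in the sketch below with synthetic channel outputs. The channel count, sample sizes, and Gaussian data are assumptions for illustration, and the study's shuffle-based resampling is not reproduced.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n_channels, n_train = 10, 200

# Synthetic channelized data: rows are channel-output vectors for signal-absent
# and signal-present realizations (stand-ins for channelized CT images).
mean_signal = np.linspace(0.5, 0.0, n_channels)       # signal response per channel
cov = 0.3 * np.eye(n_channels) + 0.05                 # correlated channel noise
absent = rng.multivariate_normal(np.zeros(n_channels), cov, size=n_train)
present = rng.multivariate_normal(mean_signal, cov, size=n_train)

# Channelized Hotelling observer: template w = S^-1 (mean difference), with S
# the average of the two class covariance estimates.
S = 0.5 * (np.cov(absent, rowvar=False) + np.cov(present, rowvar=False))
delta = present.mean(axis=0) - absent.mean(axis=0)
w = np.linalg.solve(S, delta)

snr = np.sqrt(delta @ w)                              # Hotelling detectability index
auc = norm.cdf(snr / np.sqrt(2.0))                    # Gaussian-equivalent AUC
print(f"CHO SNR = {snr:.2f},  approximate AUC = {auc:.3f}")
```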
A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece
2017-05-12
Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and evaluate in user studies. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are two-fold: a simple example reduces unnecessary complexity because we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results produced through sampling.
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
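The combination strategy can be illustrated outside the rendering context with the standard balance-heuristic multiple importance sampling estimator on a one-dimensional integral. The integrand and the two sampling densities below are arbitrary stand-ins for the stratified and importance strategies used in the SSAO setting.

```python
import numpy as np
from scipy.stats import truncnorm
from scipy.integrate import quad

rng = np.random.default_rng(5)

# Target: F = integral of f over [0, 1], with a peaked integrand standing in
# for a bright region that one strategy samples well and the other does not.
def f(x):
    return np.exp(-50.0 * (x - 0.3) ** 2) + 0.1

# Strategy 1: uniform density on [0, 1] (pdf = 1).
# Strategy 2: importance density concentrated near the peak (truncated normal).
tn = truncnorm((0.0 - 0.3) / 0.1, (1.0 - 0.3) / 0.1, loc=0.3, scale=0.1)
n1 = n2 = 500
x1 = rng.uniform(0.0, 1.0, n1)
x2 = tn.rvs(n2, random_state=rng)

def balance_weight(x, strategy):
    """Balance heuristic: w_i(x) = n_i p_i(x) / sum_j n_j p_j(x)."""
    d1, d2 = n1 * 1.0, n2 * tn.pdf(x)
    return d1 / (d1 + d2) if strategy == 1 else d2 / (d1 + d2)

estimate = (np.sum(balance_weight(x1, 1) * f(x1) / 1.0) / n1
            + np.sum(balance_weight(x2, 2) * f(x2) / tn.pdf(x2)) / n2)

reference, _ = quad(f, 0.0, 1.0)
print(f"MIS estimate = {estimate:.5f}   quadrature reference = {reference:.5f}")
```

The balance heuristic keeps the combined estimator unbiased while damping the variance that either strategy alone would contribute in regions where its density is small, which is the property the abstract relies on to reduce the sample count.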
Molecular epidemiology biomarkers-Sample collection and processing considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, Nina T.; Pfleger, Laura; Berger, Eileen
2005-08-07
Biomarker studies require processing and storage of numerous biological samples with the goals of obtaining a large amount of information and minimizing future research costs. An efficient study design includes provisions for processing of the original samples, such as cryopreservation, DNA isolation, and preparation of specimens for exposure assessment. Use of standard, two-dimensional and nanobarcodes and customized electronic databases assure efficient management of large sample collections and tracking results of data analyses. Standard operating procedures and quality control plans help to protect sample quality and to assure validity of the biomarker data. Specific state, federal and international regulations are in place regarding research with human samples, governing areas including custody, safety of handling, and transport of human samples. Appropriate informed consent must be obtained from the study subjects prior to sample collection and confidentiality of results maintained. Finally, examples of three biorepositories of different scale (European Cancer Study, National Cancer Institute and School of Public Health Biorepository, University of California, Berkeley) are used to illustrate challenges faced by investigators and the ways to overcome them. New software and biorepository technologies are being developed by many companies that will help to bring biological banking to a new level required by molecular epidemiology of the 21st century.
Sample sizes to control error estimates in determining soil bulk density in California forest soils
Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber
2016-01-01
Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...
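A common first-pass answer to the "how many samples" question is the classical formula for estimating a mean within an allowable relative error, sketched below. The coefficient of variation and error tolerance are placeholders, not values from the study.

```python
from math import ceil
from scipy.stats import norm

def n_for_mean(cv, rel_error, confidence=0.95):
    """Approximate sample size so the sample mean falls within rel_error of the
    true mean with the stated confidence: n = (z * CV / E)^2."""
    z = norm.ppf(0.5 + confidence / 2.0)
    return ceil((z * cv / rel_error) ** 2)

# Placeholder: bulk density with a 25% coefficient of variation, estimated to
# within 10% of the mean at 95% confidence.
print("cores required:", n_for_mean(cv=0.25, rel_error=0.10))
```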
Determination of ²⁴¹Am in soil using an automated nuclear radiation measurement laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engstrom, D.E.; White, M.G.; Dunaway, P.B.
The recent completion of REECo's Automated Laboratory and associated software systems has provided a significant increase in capability while reducing manpower requirements. The system is designed to perform gamma spectrum analyses on the large numbers of samples required by the current Nevada Applied Ecology Group (NAEG) and Plutonium Distribution Inventory Program (PDIP) soil sampling programs while maintaining sufficient sensitivities as defined by earlier investigations of the same type. The hardware and systems are generally described in this paper, with emphasis being placed on spectrum reduction and the calibration procedures used for soil samples.
Nutrients and suspended sediments in streams and large rivers are two major issues facing state and federal agencies. Accurate estimates of nutrient and sediment loads are needed to assess a variety of important water-quality issues including total maximum daily loads, aquatic ec...
Electrofishing effort required to estimate biotic condition in southern Idaho Rivers
Maret, Terry R.; Ott, Douglas S.; Herlihy, Alan T.
2007-01-01
An important issue surrounding biomonitoring in large rivers is the minimum sampling effort required to collect an adequate number of fish for accurate and precise determinations of biotic condition. During the summer of 2002, we sampled 15 randomly selected large-river sites in southern Idaho to evaluate the effects of sampling effort on an index of biotic integrity (IBI). Boat electrofishing was used to collect sample populations of fish in river reaches representing 40 and 100 times the mean channel width (MCW; wetted channel) at base flow. Minimum sampling effort was assessed by comparing the relation between reach length sampled and change in IBI score. Thirty-two species of fish in the families Catostomidae, Centrarchidae, Cottidae, Cyprinidae, Ictaluridae, Percidae, and Salmonidae were collected. Of these, 12 alien species were collected at 80% (12 of 15) of the sample sites; alien species represented about 38% of all species (N = 32) collected during the study. A total of 60% (9 of 15) of the sample sites had poor IBI scores. A minimum reach length of about 36 times MCW was determined to be sufficient for collecting an adequate number of fish for estimating biotic condition based on an IBI score. For most sites, this equates to collecting 275 fish at a site. Results may be applicable to other semiarid, fifth-order through seventh-order rivers sampled during summer low-flow conditions.
Evaluation of respondent-driven sampling.
McCreesh, Nicky; Frost, Simon D W; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda N; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G
2012-01-01
Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total population data. Total population data on age, tribe, religion, socioeconomic status, sexual activity, and HIV status were available on a population of 2402 male household heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, using current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). We recruited 927 household heads. Full and small RDS samples were largely representative of the total population, but both samples underrepresented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven sampling statistical inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven sampling bootstrap 95% confidence intervals included the population proportion. Respondent-driven sampling produced a generally representative sample of this well-connected nonhidden population. However, current respondent-driven sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling method, and caution is required when interpreting findings based on the sampling method.
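For context, the widely used RDS-II (Volz-Heckathorn) estimator that such inference methods build on weights each recruit by the inverse of their reported network degree. The sketch below applies it to made-up recruit data and is not necessarily the estimator variant evaluated in the study.

```python
def rds_ii_proportion(traits, degrees):
    """RDS-II estimate of a trait proportion: inverse-degree-weighted mean."""
    weights = [1.0 / d for d in degrees]
    return sum(w for w, t in zip(weights, traits) if t) / sum(weights)

# Made-up recruits: trait indicator (e.g., HIV-positive) and reported degree.
traits = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
degrees = [3, 10, 8, 2, 15, 6, 9, 4, 12, 7]

print(f"raw sample proportion: {sum(traits) / len(traits):.2f}")
print(f"RDS-II estimate:       {rds_ii_proportion(traits, degrees):.2f}")
```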
A self-sampling method to obtain large volumes of undiluted cervicovaginal secretions.
Boskey, Elizabeth R; Moench, Thomas R; Hees, Paul S; Cone, Richard A
2003-02-01
Studies of vaginal physiology and pathophysiology sometimes require larger volumes of undiluted cervicovaginal secretions than can be obtained by current methods. A convenient method for self-sampling these secretions outside a clinical setting can facilitate such studies of reproductive health. The goal was to develop a vaginal self-sampling method for collecting large volumes of undiluted cervicovaginal secretions. A menstrual collection device (the Instead cup) was inserted briefly into the vagina to collect secretions that were then retrieved from the cup by centrifugation in a 50-ml conical tube. All 16 women asked to perform this procedure found it feasible and acceptable. Among 27 samples, an average of 0.5 g of secretions (range, 0.1-1.5 g) was collected. This is a rapid and convenient self-sampling method for obtaining relatively large volumes of undiluted cervicovaginal secretions. It should prove suitable for a wide range of assays, including those involving sexually transmitted diseases, microbicides, vaginal physiology, immunology, and pathophysiology.
Assays for the activities of polyamine biosynthetic enzymes using intact tissues
Rakesh Minocha; Stephanie Long; Hisae Maki; Subhash C. Minocha
1999-01-01
Traditionally, most enzyme assays utilize homogenized cell extracts with or without dialysis. Homogenization and centrifugation of large numbers of samples for screening of mutants and transgenic cell lines are quite cumbersome and generally require sufficiently large amounts (hundreds of milligrams) of tissue. However, in situations where the tissue is available in...
Exploratory Factor Analysis with Small Sample Sizes
ERIC Educational Resources Information Center
de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.
2009-01-01
Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…
Sampling procedures for throughfall monitoring: A simulation study
NASA Astrophysics Data System (ADS)
Zimmermann, Beate; Zimmermann, Alexander; Lark, Richard Murray; Elsenbeer, Helmut
2010-01-01
What is the most appropriate sampling scheme to estimate event-based average throughfall? A satisfactory answer to this seemingly simple question has yet to be found, a failure which we attribute to previous efforts' dependence on empirical studies. Here we try to answer this question by simulating stochastic throughfall fields based on parameters for statistical models of large monitoring data sets. We subsequently sampled these fields with different sampling designs and variable sample supports. We evaluated the performance of a particular sampling scheme with respect to the uncertainty of possible estimated means of throughfall volumes. Even for a relative error limit of 20%, an impractically large number of small, funnel-type collectors would be required to estimate mean throughfall, particularly for small events. While stratification of the target area is not superior to simple random sampling, cluster random sampling involves the risk of being less efficient. A larger sample support, e.g., the use of trough-type collectors, considerably reduces the necessary sample sizes and eliminates the sensitivity of the mean to outliers. Since the gain in time associated with the manual handling of troughs versus funnels depends on the local precipitation regime, the employment of automatically recording clusters of long troughs emerges as the most promising sampling scheme. Even so, a relative error of less than 5% appears out of reach for throughfall under heterogeneous canopies. We therefore suspect a considerable uncertainty of input parameters for interception models derived from measured throughfall, in particular, for those requiring data of small throughfall events.
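The question of how many collectors are needed for a target relative error can be explored with a simplified simulation in which the throughfall field is drawn from a fitted distribution and sampled with increasing numbers of funnels, as below. The lognormal model and the coefficients of variation are placeholders, and spatial correlation (which matters for the cluster designs discussed above) is ignored.

```python
import numpy as np

rng = np.random.default_rng(11)

def collectors_needed(cv, rel_error=0.20, confidence=0.95, n_sim=2000):
    """Smallest number of randomly placed funnel collectors whose sample mean
    falls within rel_error of the field mean with the stated confidence. The
    field is modeled as i.i.d. lognormal with unit mean and the given CV."""
    sigma2 = np.log(1.0 + cv ** 2)      # lognormal parameters for mean 1, CV cv
    mu = -0.5 * sigma2
    for n in range(2, 501):
        means = rng.lognormal(mu, np.sqrt(sigma2), size=(n_sim, n)).mean(axis=1)
        if np.mean(np.abs(means - 1.0) <= rel_error) >= confidence:
            return n
    return None

# Small events tend to show higher spatial variability than large events
# (placeholder CV values, not fitted parameters from the monitoring data).
for label, cv in [("small event, CV = 1.0", 1.0), ("large event, CV = 0.4", 0.4)]:
    print(label, "->", collectors_needed(cv), "collectors")
```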
Interpolation Approach To Computer-Generated Holograms
NASA Astrophysics Data System (ADS)
Yatagai, Toyohiko
1983-10-01
A computer-generated hologram (CGH) for reconstructing independent N × N resolution points would actually require a hologram made up of N × N sampling cells. For dependent sampling points of Fourier transform CGHs, the required memory size for computation can be reduced by using an interpolation technique for the reconstructed image points. We have made a mosaic hologram which consists of K × K subholograms with N × N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK × NK resolution points. The main advantage of the present algorithm is that a sufficiently large hologram of NK × NK sample points is synthesized from K × K subholograms which are successively calculated from the data of N × N sample points and successively plotted.
Efficient Bayesian mixed model analysis increases association power in large cohorts
Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L
2014-01-01
Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN²) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633
LACIE large area acreage estimation. [United States of America
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Feiveson, A. H. (Principal Investigator)
1979-01-01
A sample-based wheat acreage estimate for a large area is obtained by multiplying its small-grains acreage estimate, as computed by the classification and mensuration subsystem, by the best available ratio of wheat to small-grains acreage obtained from historical data. In the United States, as in other countries with detailed historical data, an additional level of aggregation was required because sample allocation was made at the substratum level. The essential features of the estimation procedure for LACIE countries are included, along with procedures for estimating wheat acreage in the United States.
Quantitative assessment of anthrax vaccine immunogenicity using the dried blood spot matrix.
Schiffer, Jarad M; Maniatis, Panagiotis; Garza, Ilana; Steward-Clark, Evelene; Korman, Lawrence T; Pittman, Phillip R; Mei, Joanne V; Quinn, Conrad P
2013-03-01
The collection, processing and transportation to a testing laboratory of large numbers of clinical samples during an emergency response situation present significant cost and logistical issues. Blood and serum are common clinical samples for diagnosis of disease. Serum preparation requires significant on-site equipment and facilities for immediate processing and cold storage, and significant costs for cold-chain transport to testing facilities. The dried blood spot (DBS) matrix offers an alternative to serum for rapid and efficient sample collection with fewer on-site equipment requirements and considerably lower storage and transport costs. We have developed and validated assay methods for using DBS in the quantitative anti-protective antigen IgG enzyme-linked immunosorbent assay (ELISA), one of the primary assays for assessing immunogenicity of anthrax vaccine and for confirmatory diagnosis of Bacillus anthracis infection in humans. We have also developed and validated high-throughput data analysis software to facilitate data handling for large clinical trials and emergency response. Published by Elsevier Ltd.
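Quantitation in assays of this type conventionally relies on fitting a four-parameter logistic standard curve and inverting it for unknowns. The sketch below shows that generic calculation on made-up calibrator data; it is not the validated high-throughput software described in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic response curve (hill < 0 gives a rising curve)."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

# Made-up calibrator concentrations (ug/mL) and optical densities.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
od = np.array([0.08, 0.15, 0.40, 1.05, 1.90, 2.40, 2.55])

params, _ = curve_fit(four_pl, conc, od, p0=[0.05, 2.6, 3.0, -1.0],
                      bounds=([0.0, 1.0, 1e-3, -10.0], [0.5, 5.0, 1e3, -0.1]))
bottom, top, ec50, hill = params

def od_to_conc(y):
    """Invert the fitted curve to report a concentration for a sample OD."""
    return ec50 * ((top - bottom) / (y - bottom) - 1.0) ** (1.0 / hill)

print(f"fitted EC50: {ec50:.2f} ug/mL")
print(f"sample OD 1.20 -> {od_to_conc(1.20):.2f} ug/mL (interpolated)")
```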
Small-scale dynamic confinement gap test
NASA Astrophysics Data System (ADS)
Cook, Malcolm
2011-06-01
Gap tests are routinely used to ascertain the shock sensitiveness of new explosive formulations. The tests are popular since they are easy and relatively cheap to perform. However, with modern insensitive formulations that have large critical diameters, large test samples are required. This can make testing and screening of new formulations expensive since large quantities of test material are required. Thus a new test that uses significantly smaller sample quantities would be very beneficial. In this paper we describe a new small-scale test that has been designed using our CHARM ignition and growth routine in the DYNA2D hydrocode. The new test is a modified gap test and uses detonating nitromethane to provide dynamic confinement (instead of a thick metal case) whilst exposing the sample to a long duration shock wave. The long duration shock wave allows less reactive materials that are below their critical diameter more time to react. We present details on the modelling of the test together with some preliminary experiments to demonstrate the potential of the new test method.
Laboratory Spectrometer for Wear Metal Analysis of Engine Lubricants.
1986-04-01
analysis, the acid digestion technique for sample pretreatment is the best approach available to date because of its relatively large sample size (1000...microliters or more). However, this technique has two major shortcomings limiting its application: (1) it requires the use of hydrofluoric acid (a...accuracy. Sample preparation including filtration or acid digestion may increase analysis times by 20 minutes or more. b. Repeatability In the analysis
The Marshall Islands Data Management Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoker, A.C.; Conrado, C.L.
1995-09-01
This report is a resource document of the methods and procedures used currently in the Data Management Program of the Marshall Islands Dose Assessment and Radioecology Project. Since 1973, over 60,000 environmental samples have been collected. Our program includes relational database design, programming and maintenance; sample and information management; sample tracking; quality control; and data entry, evaluation and reduction. The usefulness of scientific databases involves careful planning in order to fulfill the requirements of any large research program. Compilation of scientific results requires consolidation of information from several databases, and incorporation of new information as it is generated. The success in combining and organizing all radionuclide analysis, sample information and statistical results into a readily accessible form is critical to our project.
Study of sample drilling techniques for Mars sample return missions
NASA Technical Reports Server (NTRS)
Mitchell, D. C.; Harris, P. T.
1980-01-01
To demonstrate the feasibility of acquiring various surface samples for a Mars sample return mission, the following tasks were performed: (1) design of a Mars rover-mounted drill system capable of acquiring crystalline rock cores; prediction of performance, mass, and power requirements for various size systems, and the generation of engineering drawings; (2) performance of simulated permafrost coring tests using a residual Apollo lunar surface drill; (3) design of a rock breaker system which can be used to produce small samples of rock chips from rocks which are too large to return to Earth, but too small to be cored with the Rover-mounted drill; (4) design of sample containers for the selected regolith cores, rock cores, and small particulate or rock samples; and (5) design of sample handling and transfer techniques which will be required through all phases of sample acquisition, processing, and stowage on-board the Earth return vehicle. A preliminary design of a light-weight Rover-mounted sampling scoop was also developed.
A prototype splitter apparatus for dividing large catches of small fish
Stapanian, Martin A.; Edwards, William H.
2012-01-01
Due to financial and time constraints, it is often necessary in fisheries studies to divide large samples of fish and estimate total catch from the subsample. The subsampling procedure may involve potential human biases or may be difficult to perform in rough conditions. We present a prototype gravity-fed splitter apparatus for dividing large samples of small fish (30–100 mm TL). The apparatus features a tapered hopper with a sliding and removable shutter. The apparatus provides a comparatively stable platform for objectively obtaining subsamples, and it can be modified to accommodate different sizes of fish and different sample volumes. The apparatus is easy to build, inexpensive, and convenient to use in the field. To illustrate the performance of the apparatus, we divided three samples (total N = 2,000 fish) composed of four fish species. Our results indicated no significant bias in estimating either the number or proportion of each species from the subsample. Use of this apparatus or a similar apparatus can help to standardize subsampling procedures in large surveys of fish. The apparatus could be used for other applications that require dividing a large amount of material into one or more smaller subsamples.
Turkish Version of Students' Ideas about Nature of Science Questionnaire: A Validation Study
ERIC Educational Resources Information Center
Cansiz, Mustafa; Cansiz, Nurcan; Tas, Yasemin; Yerdelen, Sundus
2017-01-01
Mass assessment of large samples' nature of science views has been one of the core concerns in science education research. Due to the impracticality of using open-ended questionnaires or conducting interviews with large groups, another line of research has been required for meaningful mass assessment of pupils' nature of science conceptions.…
Campagnolo, E.R.; Johnson, K.R.; Karpati, A.; Rubin, C.S.; Kolpin, D.W.; Meyer, M.T.; Esteban, J. Emilio; Currier, R.W.; Smith, K.; Thu, K.M.; McGeehin, M.
2002-01-01
Expansion and intensification of large-scale animal feeding operations (AFOs) in the United States has resulted in concern about environmental contamination and its potential public health impacts. The objective of this investigation was to obtain background data on a broad profile of antimicrobial residues in animal wastes and surface water and groundwater proximal to large-scale swine and poultry operations. The samples were measured for antimicrobial compounds using both radioimmunoassay and liquid chromatography/electrospray ionization-mass spectrometry (LC/ESI-MS) techniques. Multiple classes of antimicrobial compounds (commonly at concentrations of >100 μg/l) were detected in swine waste storage lagoons. In addition, multiple classes of antimicrobial compounds were detected in surface and groundwater samples collected proximal to the swine and poultry farms. This information indicates that animal waste used as fertilizer for crops may serve as a source of antimicrobial residues for the environment. Further research is required to determine if the levels of antimicrobials detected in this study are of consequence to human and/or environmental ecosystems. A comparison of the radioimmunoassay and LC/ESI-MS analytical methods documented that radioimmunoassay techniques were only appropriate for measuring residues in animal waste samples likely to contain high levels of antimicrobials. More sensitive LC/ESI-MS techniques are required in environmental samples, where low levels of antimicrobial residues are more likely.
Sample-Clock Phase-Control Feedback
NASA Technical Reports Server (NTRS)
Quirk, Kevin J.; Gin, Jonathan W.; Nguyen, Danh H.; Nguyen, Huy
2012-01-01
To demodulate a communication signal, a receiver must recover and synchronize to the symbol timing of a received waveform. In a system that utilizes digital sampling, the fidelity of synchronization is limited by the time between the symbol boundary and the closest sample time location. To reduce this error, one typically uses a sample clock in excess of the symbol rate in order to provide multiple samples per symbol, thereby lowering the error limit to a fraction of a symbol time. For systems with a large modulation bandwidth, the required sample clock rate is prohibitive due to current technological barriers and processing complexity. With precise control of the phase of the sample clock, one can sample the received signal at times arbitrarily close to the symbol boundary, thus obviating the need, from a synchronization perspective, for multiple samples per symbol. Sample-clock phase-control feedback was developed for use in the demodulation of an optical communication signal, where multi-GHz modulation bandwidths would require prohibitively large sample clock frequencies for rates in excess of the symbol rate. A custom mixed-signal (RF/digital) offset phase-locked loop circuit was developed to control the phase of the 6.4-GHz clock that samples the photon-counting detector output. The offset phase-locked loop is driven by a feedback mechanism that continuously corrects for variation in the symbol time due to motion between the transmitter and receiver as well as oscillator instability. This innovation will allow significant improvements in receiver throughput; for example, the throughput of a pulse-position modulation (PPM) with 16 slots can increase from 188 Mb/s to 1.5 Gb/s.
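A quick way to see why oversampling was traditionally needed is to compare the worst-case timing error against the oversampling factor. The sketch below uses an assumed 1 GHz symbol rate purely for illustration; it is not a model of the flight system described above.

```python
# Illustrative arithmetic only, for an assumed 1 GHz symbol rate (not the
# parameters of the system described above): worst-case offset between a
# symbol boundary and the nearest sample, versus oversampling factor.
symbol_rate_hz = 1.0e9
symbol_period_s = 1.0 / symbol_rate_hz

for samples_per_symbol in (1, 2, 4, 8):
    # With a free-running clock, the symbol edge can fall anywhere between
    # two samples, so the worst-case error is half the sample spacing.
    worst_case_s = symbol_period_s / (2 * samples_per_symbol)
    print(f"{samples_per_symbol} sample(s)/symbol -> "
          f"{worst_case_s * 1e12:.0f} ps worst-case timing error")

# With phase control of the sample clock, one sample per symbol can be placed
# arbitrarily close to the boundary, so the residual error is set by the
# phase-locked loop resolution rather than by the oversampling factor.
```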
Whale sharks target dense prey patches of sergestid shrimp off Tanzania
Rohner, Christoph A.; Armstrong, Amelia J.; Pierce, Simon J.; Prebble, Clare E. M.; Cagua, E. Fernando; Cochran, Jesse E. M.; Berumen, Michael L.; Richardson, Anthony J.
2015-01-01
Large planktivores require high-density prey patches to make feeding energetically viable. This is a major challenge for species living in tropical and subtropical seas, such as whale sharks Rhincodon typus. Here, we characterize zooplankton biomass, size structure and taxonomic composition from whale shark feeding events and background samples at Mafia Island, Tanzania. The majority of whale sharks were feeding (73%, 380 of 524 observations), with the most common behaviour being active surface feeding (87%). We used 20 samples collected from immediately adjacent to feeding sharks and an additional 202 background samples for comparison to show that plankton biomass was ∼10 times higher in patches where whale sharks were feeding (25 vs. 2.6 mg m⁻³). Taxonomic analyses of samples showed that the large sergestid Lucifer hanseni (∼10 mm) dominated while sharks were feeding, accounting for ∼50% of identified items, while copepods (<2 mm) dominated background samples. The size structure was skewed towards larger animals representative of L. hanseni in feeding samples. Thus, whale sharks at Mafia Island target patches of dense, large zooplankton dominated by sergestids. Large planktivores, such as whale sharks, which generally inhabit warm oligotrophic waters, aggregate in areas where they can feed on dense prey to obtain sufficient energy. PMID:25814777
NMR methods for metabolomics of mammalian cell culture bioreactors.
Aranibar, Nelly; Reily, Michael D
2014-01-01
Metabolomics has become an important tool for measuring pools of small molecules in mammalian cell cultures expressing therapeutic proteins. NMR spectroscopy has played an important role, largely because it requires minimal sample preparation, does not require chromatographic separation, and is quantitative. The concentrations of large numbers of small molecules in the extracellular media or within the cells themselves can be measured directly on the culture supernatant and on the supernatant of the lysed cells, respectively, and correlated with endpoints such as titer, cell viability, or glycosylation patterns. The observed changes can be used to generate hypotheses by which these parameters can be optimized. This chapter focuses on the sample preparation, data acquisition, and analysis to get the most out of NMR metabolomics data from CHO cell cultures but could easily be extended to other in vitro culture systems.
NASA Astrophysics Data System (ADS)
Sargent, S.; Somers, J. M.
2015-12-01
Trace-gas eddy covariance flux measurement can be made with open-path or closed-path analyzers. Traditional closed-path trace-gas analyzers use multipass absorption cells that behave as mixing volumes, requiring high sample flow rates to achieve useful frequency response. The high sample flow rate and the need to keep the multipass cell extremely clean dictates the use of a fine-pore filter that may clog quickly. A large-capacity filter cannot be used because it would degrade the EC system frequency response. The high flow rate also requires a powerful vacuum pump, which will typically consume on the order of 1000 W. The analyzer must measure water vapor for spectroscopic and dilution corrections. Open-path analyzers are available for methane, but not for nitrous oxide. The currently available methane analyzers have low power consumption, but are very large. Their large size degrades frequency response and disturbs the air flow near the sonic anemometer. They require significant maintenance to keep the exposed multipass optical surfaces clean. Water vapor measurements for dilution and spectroscopic corrections require a separate water vapor analyzer. A new closed-path eddy covariance system for measuring nitrous oxide or methane fluxes provides an elegant solution. The analyzer (TGA200A, Campbell Scientific, Inc.) uses a thermoelectrically-cooled interband cascade laser. Its small sample-cell volume and unique sample-cell configuration (200 ml, 1.5 m single pass) provide excellent frequency response with a low-power scroll pump (240 W). A new single-tube Nafion® dryer removes most of the water vapor, and attenuates fluctuations in the residual water vapor. Finally, a vortex intake assembly eliminates the need for an intake filter without adding volume that would degrade system frequency response. Laboratory testing shows the system attenuates the water vapor dilution term by more than 99% and achieves a half-power band width of 3.5 Hz.
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
2017-10-26
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen–Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
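For readers unfamiliar with the baseline the abstract contrasts against, the sketch below shows classical KL sampling of a Gaussian field on a small 1-D grid with an assumed exponential covariance; the dense eigendecomposition is exactly the step that becomes infeasible at large scale. The paper's multilevel SPDE sampler is not reproduced here.

```python
import numpy as np

# Minimal sketch of classical Karhunen-Loeve sampling on a small 1-D grid.
# The covariance model and its parameters are assumptions for illustration.
n = 200                                    # grid points (kept small on purpose)
x = np.linspace(0.0, 1.0, n)
corr_len = 0.1                             # assumed correlation length
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

eigvals, eigvecs = np.linalg.eigh(cov)     # dense eigenproblem, roughly O(n^3)
eigvals = np.clip(eigvals, 0.0, None)      # guard against round-off negatives

rng = np.random.default_rng(0)
xi = rng.standard_normal(n)
sample = eigvecs @ (np.sqrt(eigvals) * xi)  # one realization of the field
print(sample[:5])
```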
Empirical Tests of Acceptance Sampling Plans
NASA Technical Reports Server (NTRS)
White, K. Preston, Jr.; Johnson, Kenneth L.
2012-01-01
Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that these provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
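As a concrete illustration of how an attributes (binomial) single-sampling plan is chosen, the sketch below searches for the smallest (n, c) meeting assumed producer's and consumer's risk points. The risk points are hypothetical; this is not one of the published plans tested in the paper.

```python
from scipy.stats import binom

def attributes_plan(p1, alpha, p2, beta, n_max=2000):
    """Smallest single-sampling plan (n, c) with
    P(accept | p1) >= 1 - alpha and P(accept | p2) <= beta."""
    for n in range(1, n_max + 1):
        for c in range(0, n + 1):
            accept_at_p1 = binom.cdf(c, n, p1)   # acceptance prob. at good quality
            accept_at_p2 = binom.cdf(c, n, p2)   # acceptance prob. at poor quality
            if accept_at_p1 >= 1 - alpha and accept_at_p2 <= beta:
                return n, c
    return None

# Assumed example risk points: AQL 1% with 5% producer's risk,
# LTPD 8% with 10% consumer's risk.
print(attributes_plan(p1=0.01, alpha=0.05, p2=0.08, beta=0.10))
```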
Sample size for post-marketing safety studies based on historical controls.
Wu, Yu-te; Makuch, Robert W
2010-08-01
As part of a drug's entire life cycle, post-marketing studies play an important part in the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is our outcome of interest. Performance of the exact method is compared to its approximate large-sample theory counterpart. The proposed hybrid design requires a smaller sample size compared to the standard, two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design satisfies the advantages and rationale of the two-group design with smaller sample sizes generally required. 2010 John Wiley & Sons, Ltd.
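The sketch below conveys the general flavour of an exact Poisson sample-size calculation for a rare event: search for the smallest exposure giving adequate power for a one-sample exact test against an assumed background rate. It is not the authors' hybrid-design formula, and the rates are illustrative.

```python
from scipy.stats import poisson

def required_exposure(rate0, rate1, alpha=0.05, power=0.80,
                      step=50.0, max_exposure=1e6):
    """Smallest exposure (e.g., person-years) so a one-sided exact Poisson
    test of H0: rate = rate0 vs H1: rate = rate1 (> rate0) reaches the
    target power. Simplified illustration; not the paper's hybrid design."""
    exposure = step
    while exposure <= max_exposure:
        mu0, mu1 = rate0 * exposure, rate1 * exposure
        # Smallest rejection threshold k with P(X >= k | mu0) <= alpha.
        k = poisson.ppf(1 - alpha, mu0) + 1
        size_ok = poisson.sf(k - 1, mu0) <= alpha
        if size_ok and poisson.sf(k - 1, mu1) >= power:
            return exposure, int(k)
        exposure += step
    return None

# Assumed rates for illustration: background 1 event per 1000 person-years,
# threefold elevation under the alternative.
print(required_exposure(rate0=0.001, rate1=0.003))
```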
ERIC Educational Resources Information Center
Gatti, Mario; Mereu, Maria Grazia; Tagliaferro, Claudio; Markowitsch, Jorg; Neuberger, Robert
Requirements for vocational skills in the engineering industry in Modena, Italy, and Vienna, Austria, were studied. In Modena, employees of a representative sample of 90 small, medium, and large firms in the mechanical processing, agricultural machinery, and sports car manufacturing sectors were interviewed. In Vienna, data were collected through…
Analysis of peptides using an integrated microchip HPLC-MS/MS system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirby, Brian J.; Chirica, Gabriela S.; Reichmuth, David S.
Hyphenated LC-MS techniques are quickly becoming the standard tool for proteomic analyses. For large homogeneous samples, bulk processing methods and capillary injection and separation techniques are suitable. However, for analysis of small or heterogeneous samples, techniques that can manipulate picoliter samples without dilution are required or samples will be lost or corrupted; further, static nanospray-type flowrates are required to maximize SNR. Microchip-level integration of sample injection with separation and mass spectrometry allows small-volume analytes to be processed on chip and immediately injected without dilution for analysis. An on-chip HPLC was fabricated using in situ polymerization of both fixed and mobile polymer monoliths. Integration of the chip with a nanospray MS emitter enables identification of peptides by the use of tandem MS. The chip is capable of analyzing very small sample volumes (< 200 pl) in short times (< 3 min).
Sample size requirements for the design of reliability studies: precision consideration.
Shieh, Gwowen
2014-09-01
In multilevel modeling, the intraclass correlation coefficient based on the one-way random-effects model is routinely employed to measure the reliability or degree of resemblance among group members. To facilitate the advocated practice of reporting confidence intervals in future reliability studies, this article presents exact sample size procedures for precise interval estimation of the intraclass correlation coefficient under various allocation and cost structures. Although the suggested approaches do not admit explicit sample size formulas and require special algorithms for carrying out iterative computations, they are more accurate than the closed-form formulas constructed from large-sample approximations with respect to the expected width and assurance probability criteria. This investigation notes the deficiency of existing methods and expands the sample size methodology for the design of reliability studies that have not previously been discussed in the literature.
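For context, the sketch below computes the standard exact F-based confidence interval for the one-way random-effects intraclass correlation from a balanced dataset; a width-targeted sample-size search could be built on such an interval. This is a generic illustration, not the authors' specific procedure, and the data are simulated.

```python
import numpy as np
from scipy.stats import f as f_dist

def icc_oneway_ci(groups, alpha=0.05):
    """Exact F-based confidence interval for the one-way random-effects ICC
    (balanced design: a list of equal-sized groups of ratings)."""
    k = len(groups)                       # number of groups (e.g., subjects)
    n = len(groups[0])                    # ratings per group
    data = np.asarray(groups, dtype=float)
    grand = data.mean()
    msb = n * ((data.mean(axis=1) - grand) ** 2).sum() / (k - 1)
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    F = msb / msw
    icc = (msb - msw) / (msb + (n - 1) * msw)
    fl = F / f_dist.ppf(1 - alpha / 2, k - 1, k * (n - 1))
    fu = F * f_dist.ppf(1 - alpha / 2, k * (n - 1), k - 1)
    return icc, ((fl - 1) / (fl + n - 1), (fu - 1) / (fu + n - 1))

# Toy balanced data: 8 subjects rated by 3 raters (values are simulated).
rng = np.random.default_rng(1)
ratings = rng.normal(50, 10, size=(8, 1)) + rng.normal(0, 5, size=(8, 3))
print(icc_oneway_ci(ratings.tolist()))
```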
Bionimbus: a cloud for managing, analyzing and sharing large genomics datasets
Heath, Allison P; Greenway, Matthew; Powell, Raymond; Spring, Jonathan; Suarez, Rafael; Hanley, David; Bandlamudi, Chai; McNerney, Megan E; White, Kevin P; Grossman, Robert L
2014-01-01
Background As large genomics and phenotypic datasets are becoming more common, it is increasingly difficult for most researchers to access, manage, and analyze them. One possible approach is to provide the research community with several petabyte-scale cloud-based computing platforms containing these data, along with tools and resources to analyze it. Methods Bionimbus is an open source cloud-computing platform that is based primarily upon OpenStack, which manages on-demand virtual machines that provide the required computational resources, and GlusterFS, which is a high-performance clustered file system. Bionimbus also includes Tukey, which is a portal, and associated middleware that provides a single entry point and a single sign on for the various Bionimbus resources; and Yates, which automates the installation, configuration, and maintenance of the software infrastructure required. Results Bionimbus is used by a variety of projects to process genomics and phenotypic data. For example, it is used by an acute myeloid leukemia resequencing project at the University of Chicago. The project requires several computational pipelines, including pipelines for quality control, alignment, variant calling, and annotation. For each sample, the alignment step requires eight CPUs for about 12 h. BAM file sizes ranged from 5 GB to 10 GB for each sample. Conclusions Most members of the research community have difficulty downloading large genomics datasets and obtaining sufficient storage and computer resources to manage and analyze the data. Cloud computing platforms, such as Bionimbus, with data commons that contain large genomics datasets, are one choice for broadening access to research data in genomics. PMID:24464852
Accelerating root system phenotyping of seedlings through a computer-assisted processing pipeline.
Dupuy, Lionel X; Wright, Gladys; Thompson, Jacqueline A; Taylor, Anna; Dekeyser, Sebastien; White, Christopher P; Thomas, William T B; Nightingale, Mark; Hammond, John P; Graham, Neil S; Thomas, Catherine L; Broadley, Martin R; White, Philip J
2017-01-01
There are numerous systems and techniques to measure the growth of plant roots. However, phenotyping large numbers of plant roots for breeding and genetic analyses remains challenging. One major difficulty is to achieve high throughput and resolution at a reasonable cost per plant sample. Here we describe a cost-effective root phenotyping pipeline, on which we perform time and accuracy benchmarking to identify bottlenecks in such pipelines and strategies for their acceleration. Our root phenotyping pipeline was assembled with custom software and low-cost materials and equipment. Results show that sample preparation and handling of samples during screening are the most time-consuming tasks in root phenotyping. Algorithms can be used to speed up the extraction of root traits from image data, but when applied to large numbers of images, there is a trade-off between time of processing the data and errors contained in the database. Scaling-up root phenotyping to large numbers of genotypes will require not only automation of sample preparation and sample handling, but also efficient algorithms for error detection for more reliable replacement of manual interventions.
Crock, J.G.; Severson, R.C.; Gough, L.P.
1992-01-01
Recent investigations on the Kenai Peninsula had two major objectives: (1) to establish elemental baseline concentrations ranges for native vegetation and soils; and, (2) to determine the sampling density required for preparing stable regional geochemical maps for various elements in native plants and soils. These objectives were accomplished using an unbalanced, nested analysis-of-variance (ANOVA) barbell sampling design. Hylocomium splendens (Hedw.) BSG (feather moss, whole plant), Picea glauca (Moench) Voss (white spruce, twigs and needles), and soil horizons (02 and C) were collected and analyzed for major and trace total element concentrations. Using geometric means and geometric deviations, expected baseline ranges for elements were calculated. Results of the ANOVA show that intensive soil or plant sampling is needed to reliably map the geochemistry of the area, due to large local variability. For example, producing reliable element maps of feather moss using a 50 km cell (at 95% probability) would require sampling densities of from 4 samples per cell for Al, Co, Fe, La, Li, and V, to more than 15 samples per cell for Cu, Pb, Se, and Zn.
NASA Astrophysics Data System (ADS)
Lateh, Masitah Abdul; Kamilah Muda, Azah; Yusof, Zeratul Izzah Mohd; Azilah Muda, Noor; Sanusi Azmi, Mohd
2017-09-01
The emerging era of big data over the past few years has produced large and complex data sets that demand faster and better decision making. However, small dataset problems still arise in certain areas, making analysis and decision making difficult. To build a prediction model, a large sample is required for training; a small dataset is insufficient to produce an accurate prediction model. This paper reviews artificial data generation approaches as one solution to the small dataset problem.
Gonzalez, Susana; Yu, Woojin M.; Smith, Michael S.; Slack, Kristen N.; Rotterdam, Heidrun; Abrams, Julian A.; Lightdale, Charles J.
2011-01-01
Background Several types of forceps are available for use in sampling Barrett’s esophagus (BE). Few data exist with regard to biopsy quality for histologic assessment. Objective To evaluate sampling quality of 3 different forceps in patients with BE. Design Single-center, randomized clinical trial. Patients Consecutive patients with BE undergoing upper endoscopy. Interventions Patients randomized to have biopsy specimens taken with 1 of 3 types of forceps: standard, large capacity, or jumbo. Main Outcome Measurements Specimen adequacy was defined a priori as a well-oriented biopsy sample 2 mm or greater in diameter and with at least muscularis mucosa present. Results A total of 65 patients were enrolled and analyzed (standard forceps, n = 21; large-capacity forceps, n = 21; jumbo forceps, n = 23). Compared with jumbo forceps, a significantly higher proportion of biopsy samples with large-capacity forceps were adequate (37.8% vs 25.2%, P = .002). Of the standard forceps biopsy samples, 31.9% were adequate, which was not significantly different from specimens taken with large-capacity (P = .20) or jumbo (P = .09) forceps. Biopsy specimens taken with jumbo forceps had the largest diameter (median, 3.0 mm vs 2.5 mm [standard] vs 2.8 mm [large capacity]; P = .0001). However, jumbo forceps had the lowest proportion of specimens that were well oriented (overall P = .001). Limitations Heterogeneous patient population precluded dysplasia detection analyses. Conclusions Our results challenge the requirement of jumbo forceps and therapeutic endoscopes to properly perform the Seattle protocol. We found that standard and large-capacity forceps used with standard upper endoscopes produced biopsy samples at least as adequate as those obtained with jumbo forceps and therapeutic endoscopes in patients with BE. PMID:21034895
Phenotypic Association Analyses With Copy Number Variation in Recurrent Depressive Disorder.
Rucker, James J H; Tansey, Katherine E; Rivera, Margarita; Pinto, Dalila; Cohen-Woods, Sarah; Uher, Rudolf; Aitchison, Katherine J; Craddock, Nick; Owen, Michael J; Jones, Lisa; Jones, Ian; Korszun, Ania; Barnes, Michael R; Preisig, Martin; Mors, Ole; Maier, Wolfgang; Rice, John; Rietschel, Marcella; Holsboer, Florian; Farmer, Anne E; Craig, Ian W; Scherer, Stephen W; McGuffin, Peter; Breen, Gerome
2016-02-15
Defining the molecular genomic basis of the likelihood of developing depressive disorder is a considerable challenge. We previously associated rare, exonic deletion copy number variants (CNV) with recurrent depressive disorder (RDD). Sex chromosome abnormalities also have been observed to co-occur with RDD. In this reanalysis of our RDD dataset (N = 3106 cases; 459 screened control samples and 2699 population control samples), we further investigated the role of larger CNVs and chromosomal abnormalities in RDD and performed association analyses with clinical data derived from this dataset. We found an enrichment of Turner's syndrome among cases of depression compared with the frequency observed in a large population sample (N = 34,910) of live-born infants collected in Denmark (two-sided p = .023, odds ratio = 7.76 [95% confidence interval = 1.79-33.6]), a case of diploid/triploid mosaicism, and several cases of uniparental isodisomy. In contrast to our previous analysis, large deletion CNVs were no more frequent in cases than control samples, although deletion CNVs in cases contained more genes than control samples (two-sided p = .0002). After statistical correction for multiple comparisons, our data do not support a substantial role for CNVs in RDD, although (as has been observed in similar samples) occasional cases may harbor large variants with etiological significance. Genetic pleiotropy and sample heterogeneity suggest that very large sample sizes are required to study conclusively the role of genetic variation in mood disorders. Copyright © 2016 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Evaluation of Respondent-Driven Sampling
McCreesh, Nicky; Frost, Simon; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda Ndagire; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G
2012-01-01
Background Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex-workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total-population data. Methods Total-population data on age, tribe, religion, socioeconomic status, sexual activity and HIV status were available on a population of 2402 male household-heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, employing current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). Results We recruited 927 household-heads. Full and small RDS samples were largely representative of the total population, but both samples under-represented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven-sampling statistical-inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven-sampling bootstrap 95% confidence intervals included the population proportion. Conclusions Respondent-driven sampling produced a generally representative sample of this well-connected non-hidden population. However, current respondent-driven-sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience-sampling method, and caution is required when interpreting findings based on the sampling method. PMID:22157309
Preparation of water samples for carbon-14 dating
Feltz, H.R.; Hanshaw, Bruce B.
1963-01-01
For most natural water, a large sample is required to provide the 3 grams of carbon needed for a carbon-14 determination. A field procedure for isolating total dissolved-carbonate species is described. Carbon dioxide gas is evolved by adding sulfuric acid to the water sample; the gas is then collected in a sodium hydroxide trap by recycling in a closed system. The trap is then transported to the dating laboratory where the carbon-14 is counted.
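A back-of-the-envelope calculation shows why a large water sample is needed. The bicarbonate concentrations below are assumed illustrative values, not figures from the report.

```python
# Rough illustration (assumed numbers): litres of water needed to recover the
# 3 g of carbon required for a carbon-14 determination, as a function of the
# dissolved bicarbonate concentration of the sampled water.
carbon_needed_g = 3.0
molar_mass_hco3 = 61.0          # g/mol, bicarbonate
molar_mass_c = 12.0             # g/mol, carbon

for hco3_mg_per_l in (50, 200, 500):      # assumed concentrations
    carbon_g_per_l = hco3_mg_per_l / 1000.0 * (molar_mass_c / molar_mass_hco3)
    litres = carbon_needed_g / carbon_g_per_l
    print(f"{hco3_mg_per_l} mg/L HCO3-  ->  ~{litres:.0f} L of water")
```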
Switch Hands! Mapping Proactive and Reactive Cognitive Control across the Life Span
ERIC Educational Resources Information Center
Van Gerven, Pascal W. M.; Hurks, Petra P. M.; Bovend'Eerdt, Thamar J. H.; Adam, Jos J.
2016-01-01
We investigated the effects of age on proactive and reactive cognitive control in a large population sample of 809 individuals, ranging in age between 5 and 97 years. For that purpose, we used an anticue paradigm, which required a consistent remapping of cue location and response hand: Left-sided cues required right-hand responses and vice versa.…
Effect of initial planting spacing on wood properties of unthinned loblolly pine at age 21
Alexander III Clark; Lewis Jordan; Laurie Schimleck; Richard F. Daniels
2008-01-01
Young, fast growing, intensively managed plantation loblolly pine (Pinus taeda L.) contains a large proportion of juvenile wood that may not have the stiffness required to meet the design requirements for southern pine dimension lumber. An unthinned loblolly pine spacing study was sampled to determine the effect of initial spacing on wood stiffness,...
Pair-barcode high-throughput sequencing for large-scale multiplexed sample analysis
2012-01-01
Background Multiplexing has become the major limitation of next-generation sequencing (NGS) in application to low-complexity samples. Physical space segregation allows limited multiplexing, while the existing barcode approach only permits simultaneous analysis of up to several dozen samples. Results Here we introduce pair-barcode sequencing (PBS), an economic and flexible barcoding technique that permits parallel analysis of large-scale multiplexed samples. In two pilot runs using a SOLiD sequencer (Applied Biosystems Inc.), 32 independent pair-barcoded miRNA libraries were simultaneously discovered by the combination of 4 unique forward barcodes and 8 unique reverse barcodes. Over 174,000,000 reads were generated and about 64% of them are assigned to both of the barcodes. After mapping all reads to pre-miRNAs in miRBase, different miRNA expression patterns are captured from the two clinical groups. The strong correlation using different barcode pairs and the high consistency of miRNA expression in two independent runs demonstrate that the PBS approach is valid. Conclusions By employing the PBS approach in NGS, large-scale multiplexed pooled samples could be practically analyzed in parallel so that high-throughput sequencing economically meets the requirements of samples with low sequencing-throughput demand. PMID:22276739
Pair-barcode high-throughput sequencing for large-scale multiplexed sample analysis.
Tu, Jing; Ge, Qinyu; Wang, Shengqin; Wang, Lei; Sun, Beili; Yang, Qi; Bai, Yunfei; Lu, Zuhong
2012-01-25
Multiplexing has become the major limitation of next-generation sequencing (NGS) in application to low-complexity samples. Physical space segregation allows limited multiplexing, while the existing barcode approach only permits simultaneous analysis of up to several dozen samples. Here we introduce pair-barcode sequencing (PBS), an economic and flexible barcoding technique that permits parallel analysis of large-scale multiplexed samples. In two pilot runs using a SOLiD sequencer (Applied Biosystems Inc.), 32 independent pair-barcoded miRNA libraries were simultaneously discovered by the combination of 4 unique forward barcodes and 8 unique reverse barcodes. Over 174,000,000 reads were generated and about 64% of them are assigned to both of the barcodes. After mapping all reads to pre-miRNAs in miRBase, different miRNA expression patterns are captured from the two clinical groups. The strong correlation using different barcode pairs and the high consistency of miRNA expression in two independent runs demonstrate that the PBS approach is valid. By employing the PBS approach in NGS, large-scale multiplexed pooled samples could be practically analyzed in parallel so that high-throughput sequencing economically meets the requirements of samples with low sequencing-throughput demand.
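The combinatorial idea behind pair-barcoding can be sketched in a few lines: 4 forward and 8 reverse barcodes index 4 × 8 = 32 libraries, and each read is assigned by its barcode pair. The barcode sequences below are made up for illustration; they are not the tags used in the study.

```python
from itertools import product

# Hypothetical barcode sequences purely for illustration of the combinatorics.
forward = ["ACGT", "TGCA", "GATC", "CTAG"]
reverse = ["AAAA", "CCCC", "GGGG", "TTTT", "ACAC", "GTGT", "AGAG", "TCTC"]

# 4 forward x 8 reverse barcodes index 32 pair-barcoded libraries.
libraries = {pair: idx for idx, pair in enumerate(product(forward, reverse))}
print(len(libraries))   # 32

def assign_read(fwd_tag, rev_tag):
    """Return the library index for a read, or None if either tag is unknown."""
    return libraries.get((fwd_tag, rev_tag))

print(assign_read("ACGT", "GTGT"))   # assigned to one of the 32 libraries
print(assign_read("ACGT", "NNNN"))   # unassigned read -> None
```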
Choosing the Most Effective Pattern Classification Model under Learning-Time Constraint.
Saito, Priscila T M; Nakamura, Rodrigo Y M; Amorim, Willian P; Papa, João P; de Rezende, Pedro J; Falcão, Alexandre X
2015-01-01
Nowadays, large datasets are common and demand faster and more effective pattern analysis techniques. However, methodologies to compare classifiers usually do not take into account the learning-time constraints required by applications. This work presents a methodology to compare classifiers with respect to their ability to learn from classification errors on a large learning set, within a given time limit. Faster techniques may acquire more training samples, but only when they are more effective will they achieve higher performance on unseen testing sets. We demonstrate this result using several techniques, multiple datasets, and typical learning-time limits required by applications.
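The sketch below conveys the general idea of a learning-time-constrained comparison, not the paper's specific methodology: each classifier is given the same wall-clock budget, faster learners consume more training samples, and accuracy is then compared on a held-out set. Data and budget are synthetic assumptions.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Synthetic data; the learning-time budget below is an arbitrary illustration.
X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

budget_s = 0.5      # assumed learning-time limit per classifier
batch = 1000        # training samples added per round

for clf in (GaussianNB(), LogisticRegression(max_iter=200),
            RandomForestClassifier(n_estimators=50)):
    used, start = 0, time.perf_counter()
    # Keep enlarging the training set until the time budget is spent.
    while used + batch <= len(X_train) and time.perf_counter() - start < budget_s:
        used += batch
        clf.fit(X_train[:used], y_train[:used])
    acc = clf.score(X_test, y_test)
    print(f"{type(clf).__name__}: {used} samples used, accuracy {acc:.3f}")
```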
Novel diamond cells for neutron diffraction using multi-carat CVD anvils.
Boehler, R; Molaison, J J; Haberl, B
2017-08-01
Traditionally, neutron diffraction at high pressure has been severely limited in pressure because low neutron flux required large sample volumes and therefore large volume presses. At the high-flux Spallation Neutron Source at the Oak Ridge National Laboratory, we have developed new, large-volume diamond anvil cells for neutron diffraction. The main features of these cells are multi-carat, single crystal chemical vapor deposition diamonds, very large diffraction apertures, and gas membranes to accommodate pressure stability, especially upon cooling. A new cell has been tested for diffraction up to 40 GPa with an unprecedented sample volume of ∼0.15 mm³. High quality spectra were obtained in 1 h for crystalline Ni and in ∼8 h for disordered glassy carbon. These new techniques will open the way for routine megabar neutron diffraction experiments.
Fatigue Crack Propagation in Rail Steels
DOT National Transportation Integrated Search
1977-06-01
In order to establish safe inspection periods of railroad rails, information on fatigue crack growth rates is required. These data should come from a sufficiently large sample of rails presently in service. The reported research consisted of the gene...
Cheow, Lih Feng; Viswanathan, Ramya; Chin, Chee-Sing; Jennifer, Nancy; Jones, Robert C; Guccione, Ernesto; Quake, Stephen R; Burkholder, William F
2014-10-07
Homogeneous assay platforms for measuring protein-ligand interactions are highly valued due to their potential for high-throughput screening. However, the implementation of these multiplexed assays in conventional microplate formats is considerably expensive due to the large amounts of reagents required and the need for automation. We implemented a homogeneous fluorescence anisotropy-based binding assay in an automated microfluidic chip to simultaneously interrogate >2300 pairwise interactions. We demonstrated the utility of this platform in determining the binding affinities between chromatin-regulatory proteins and different post-translationally modified histone peptides. The microfluidic chip assay produces comparable results to conventional microtiter plate assays, yet requires 2 orders of magnitude less sample and an order of magnitude fewer pipetting steps. This approach enables one to use small samples for medium-scale screening and could ease the bottleneck of large-scale protein purification.
Automatic measurements and computations for radiochemical analyses
Rosholt, J.N.; Dooley, J.R.
1960-01-01
In natural radioactive sources the most important radioactive daughter products useful for geochemical studies are protactinium-231, the alpha-emitting thorium isotopes, and the radium isotopes. To resolve the abundances of these thorium and radium isotopes by their characteristic decay and growth patterns, a large number of repeated alpha activity measurements on the two chemically separated elements were made over extended periods of time. Alpha scintillation counting with automatic measurements and sample changing is used to obtain the basic count data. Generation of the required theoretical decay and growth functions, varying with time, and the least squares solution of the overdetermined simultaneous count rate equations are done with a digital computer. Examples of the complex count rate equations which may be solved and results of a natural sample containing four ??-emitting isotopes of thorium are illustrated. These methods facilitate the determination of the radioactive sources on the large scale required for many geochemical investigations.
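A simplified version of the computation can be sketched as follows: with known decay constants (and ignoring daughter ingrowth, which the full treatment includes), repeated count-rate measurements form an overdetermined linear system in the initial activities, solved by least squares. The half-lives and activities below are illustrative, not values from the study.

```python
import numpy as np

# Simplified illustration (decay only, no daughter ingrowth): with known decay
# constants, count rates R(t_j) = sum_i A_i * exp(-lambda_i * t_j) give an
# overdetermined linear system in the initial activities A_i.
half_lives_d = np.array([1.9, 18.7, 7340.0])           # assumed, in days
lam = np.log(2) / half_lives_d

t = np.linspace(0, 60, 25)                              # measurement times (days)
design = np.exp(-np.outer(t, lam))                      # 25 x 3 design matrix

true_A = np.array([120.0, 40.0, 15.0])                  # counts/min at t = 0
rng = np.random.default_rng(2)
measured = design @ true_A + rng.normal(0, 1.0, t.size)  # noisy gross count rates

est_A, *_ = np.linalg.lstsq(design, measured, rcond=None)
print(np.round(est_A, 1))                               # recovered initial activities
```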
Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro
2018-05-09
Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.
Trace Gas Analyzer (TGA) program
NASA Technical Reports Server (NTRS)
1977-01-01
The design, fabrication, and test of a breadboard trace gas analyzer (TGA) is documented. The TGA is a gas chromatograph/mass spectrometer system. The gas chromatograph subsystem employs a recirculating hydrogen carrier gas. The recirculation feature minimizes the requirement for transport and storage of large volumes of carrier gas during a mission. The silver-palladium hydrogen separator which permits the removal of the carrier gas and its reuse also decreases vacuum requirements for the mass spectrometer since the mass spectrometer vacuum system need handle only the very low sample pressure, not sample plus carrier. System performance was evaluated with a representative group of compounds.
A High Performance Impedance-based Platform for Evaporation Rate Detection.
Chou, Wei-Lung; Lee, Pee-Yew; Chen, Cheng-You; Lin, Yu-Hsin; Lin, Yung-Sheng
2016-10-17
This paper describes a novel impedance-based platform for detecting evaporation rate. The model compound hyaluronic acid was employed here for demonstration purposes. Multiple evaporation tests on the model compound as a humectant with various concentrations in solutions were conducted for comparison purposes. A conventional weight loss approach is known as the most straightforward, but time-consuming, measurement technique for evaporation rate detection. Yet, a clear disadvantage is that a large volume of sample is required and multiple sample tests cannot be conducted at the same time. For the first time in the literature, an electrical impedance sensing chip is successfully applied to a real-time evaporation investigation in a time sharing, continuous and automatic manner. Moreover, as little as 0.5 ml of test samples is required in this impedance-based apparatus, and a large impedance variation is demonstrated among various dilute solutions. The proposed high-sensitivity and fast-response impedance sensing system is found to outperform a conventional weight loss approach in terms of evaporation rate detection.
A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing
Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian
2016-01-01
Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
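The clustering step can be illustrated with scikit-learn's AffinityPropagation as a stand-in for the AP-Cluster method named above: crowdsourced RSSI fingerprints are clustered and the exemplars kept as representative fingerprints. The data below are synthetic, and this is only a sketch of the idea, not the paper's implementation.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Synthetic crowdsourced fingerprints: one row per observation, one column
# per access point (RSSI in dBm), noisy copies of 5 "true" locations.
rng = np.random.default_rng(3)
true_fingerprints = rng.uniform(-90, -40, size=(5, 6))       # 5 spots, 6 APs
crowd = np.vstack([fp + rng.normal(0, 2.0, size=(40, 6))     # noisy reports
                   for fp in true_fingerprints])

ap = AffinityPropagation(random_state=0).fit(crowd)
representatives = ap.cluster_centers_       # exemplar fingerprints per cluster
print(representatives.shape)                # (n_clusters, 6)
```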
Palmer, W G; Scholz, R C; Moorman, W J
1983-03-01
Sampling of complex mixtures of airborne contaminants for chronic animal toxicity tests often involves numerous sampling devices, requires extensive sampling time, and yields forms of collected materials unsuitable for administration to animals. A method is described which used a high volume, wet venturi scrubber for collection of respirable fractions of emissions from iron foundry casting operations. The construction and operation of the sampler are presented along with collection efficiency data and its application to the preparation of large quantities of samples to be administered to animals by intratracheal instillation.
Spatial considerations during cryopreservation of a large volume sample.
Kilbride, Peter; Lamb, Stephen; Milne, Stuart; Gibbons, Stephanie; Erro, Eloy; Bundy, James; Selden, Clare; Fuller, Barry; Morris, John
2016-08-01
There have been relatively few studies on the implications of the physical conditions experienced by cells during large volume (litres) cryopreservation - most studies have focused on the problem of cryopreservation of smaller volumes, typically up to 2 ml. This study explores the effects of ice growth by progressive solidification, generally seen during larger scale cryopreservation, on encapsulated liver hepatocyte spheroids, and it develops a method to reliably sample different regions across the frozen cores of samples experiencing progressive solidification. These issues are examined in the context of a Bioartificial Liver Device which requires cryopreservation of a 2 L volume in a strict cylindrical geometry for optimal clinical delivery. Progressive solidification cannot be avoided in this arrangement. In such a system optimal cryoprotectant concentrations and cooling rates are known. However, applying these parameters to a large volume is challenging due to the thermal mass and subsequent thermal lag. The specific impact of this on the cryopreservation outcome therefore needs to be determined. Under conditions of progressive solidification, the spatial location of Encapsulated Liver Spheroids had a strong impact on post-thaw recovery. Cells in areas first and last to solidify demonstrated significantly impaired post-thaw function, whereas areas solidifying through the majority of the process exhibited higher post-thaw outcome. It was also found that samples where the ice thawed more rapidly had greater post-thaw viability 24 h post-thaw (75.7 ± 3.9% and 62.0 ± 7.2% respectively). These findings have implications for the cryopreservation of large volumes with a rigid shape and for the cryopreservation of a Bioartificial Liver Device. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Field, M. Paul; Romaniello, Stephen; Gordon, Gwyneth W.; Anbar, Ariel D.; Herrmann, Achim; Martinez-Boti, Miguel A.; Anagnostou, Eleni; Foster, Gavin L.
2014-05-01
MC-ICP-MS has dramatically improved the analytical throughput for high-precision radiogenic and non-traditional isotope ratio measurements, compared to TIMS. The generation of large data sets, however, remains hampered by tedious manual drip chromatography required for sample purification. A new, automated chromatography system reduces the laboratory bottleneck and expands the utility of high-precision isotope analyses in applications where large data sets are required: geochemistry, forensic anthropology, nuclear forensics, medical research and food authentication. We have developed protocols to automate ion exchange purification for several isotopic systems (B, Ca, Fe, Cu, Zn, Sr, Cd, Pb and U) using the new prepFAST-MC™ (ESI, Nebraska, Omaha). The system is not only inert (all-fluoropolymer flow paths), but is also very flexible and can easily facilitate different resins, samples, and reagent types. When programmed, precise and accurate user-defined volumes and flow rates are implemented to automatically load samples, wash the column, condition the column and elute fractions. Unattended, the automated, low-pressure ion exchange chromatography system can process up to 60 samples overnight. Excellent reproducibility, reliability, recovery, with low blank and carryover for samples in a variety of different matrices, have been demonstrated to give accurate and precise isotopic ratios within analytical error for several isotopic systems (B, Ca, Fe, Cu, Zn, Sr, Cd, Pb and U). This illustrates the potential of the new prepFAST-MC™ (ESI, Nebraska, Omaha) as a powerful tool in radiogenic and non-traditional isotope research.
Impact of lunar and planetary missions on the space station: Preliminary STS logistics report
NASA Technical Reports Server (NTRS)
1984-01-01
Space station requirements for lunar and planetary missions are discussed. Specific reference is made to projected Ceres and Kopff missions; Titan probes; Saturn and Mercury orbiters; and a Mars sample return mission. Such requirements as base design; station function; program definition; mission scenarios; uncertainties impact; launch manifest and mission schedule; and shuttle loads are considered. It is concluded that: (1) the impact of the planetary missions on the space station is not large when compared to the lunar base; (2) a quarantine module may be desirable for sample returns; (3) the Ceres and Kopff missions require the ability to stack and checkout two-stage OTVs; and (4) two to seven manweeks of on-orbit work are required of the station crew to launch a mission and, with the exception of the quarantine module, dedicated crew will not be required.
Robust sampling of decision information during perceptual choice
Vandormael, Hildward; Herce Castañón, Santiago; Balaguer, Jan; Li, Vickie; Summerfield, Christopher
2017-01-01
Humans move their eyes to gather information about the visual world. However, saccadic sampling has largely been explored in paradigms that involve searching for a lone target in a cluttered array or natural scene. Here, we investigated the policy that humans use to overtly sample information in a perceptual decision task that required information from across multiple spatial locations to be combined. Participants viewed a spatial array of numbers and judged whether the average was greater or smaller than a reference value. Participants preferentially sampled items that were less diagnostic of the correct answer (“inlying” elements; that is, elements closer to the reference value). This preference to sample inlying items was linked to decisions, enhancing the tendency to give more weight to inlying elements in the final choice (“robust averaging”). These findings contrast with a large body of evidence indicating that gaze is directed preferentially to deviant information during natural scene viewing and visual search, and suggest that humans may sample information “robustly” with their eyes during perceptual decision-making. PMID:28223519
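A toy example of the "robust averaging" described above: elements nearer the reference value are weighted more heavily than outliers, so the weighted average differs from the plain mean. The weighting function is illustrative only, not the model fitted in the study.

```python
import numpy as np

# Toy illustration of robust averaging: down-weight elements far from the
# reference value and compare the result with the plain mean.
reference = 50.0
elements = np.array([42.0, 47.0, 49.0, 51.0, 55.0, 78.0])

weights = 1.0 / (1.0 + np.abs(elements - reference))   # inliers weighted more
robust_avg = np.average(elements, weights=weights)
print(round(elements.mean(), 1), round(robust_avg, 1))  # plain vs robust mean
```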
NASA Astrophysics Data System (ADS)
Ryu, Inkeon; Kim, Daekeun
2018-04-01
A typical selective plane illumination microscopy (SPIM) image size is basically limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount where uncertainties in the translational and rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, based on quantifying the constellations of, and measuring the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.
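One standard way to recover the translation and rotation between tiles from matched bead coordinates is a Kabsch/Procrustes fit, sketched below. It is shown as a plausible approach under the premise above, not the authors' implementation, and the bead coordinates are made up.

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch/Procrustes), given matched bead coordinates in two tiles."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflection
    r = vt.T @ np.diag([1.0, d]) @ u.T
    t = dst.mean(0) - r @ src.mean(0)
    return r, t

# Made-up bead coordinates: the second tile is the first rotated by 3 degrees
# and shifted, as might happen between steps of the sample mount.
rng = np.random.default_rng(4)
beads = rng.uniform(0, 500, size=(6, 2))
theta = np.deg2rad(3.0)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
beads_shifted = beads @ rot.T + np.array([120.0, -15.0])

R, t = rigid_transform_2d(beads, beads_shifted)
print(np.round(np.rad2deg(np.arctan2(R[1, 0], R[0, 0])), 2), np.round(t, 1))
```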
Dainer-Best, Justin; Lee, Hae Yeon; Shumake, Jason D; Yeager, David S; Beevers, Christopher G
2018-06-07
Although the self-referent encoding task (SRET) is commonly used to measure self-referent cognition in depression, many different SRET metrics can be obtained. The current study used best subsets regression with cross-validation and independent test samples to identify the SRET metrics most reliably associated with depression symptoms in three large samples: a college student sample (n = 572), a sample of adults from Amazon Mechanical Turk (n = 293), and an adolescent sample from a school field study (n = 408). Across all 3 samples, SRET metrics associated most strongly with depression severity included number of words endorsed as self-descriptive and rate of accumulation of information required to decide whether adjectives were self-descriptive (i.e., drift rate). These metrics had strong intratask and split-half reliability and high test-retest reliability across a 1-week period. Recall of SRET stimuli and traditional reaction time (RT) metrics were not robustly associated with depression severity. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Estimating accuracy of land-cover composition from two-stage cluster sampling
Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.
2009-01-01
Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. ?? 2009 Elsevier Inc.
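Ignoring the two-stage design weighting that the paper develops, the four accuracy summaries can be illustrated for a single land-cover class with a simple random sample of units; the compositions below are made up.

```python
import numpy as np

# Simple-random-sample illustration of the four accuracy summaries for one
# land-cover class; the design-based weighting for two-stage cluster samples
# developed in the paper is not reproduced here.
map_prop = np.array([0.12, 0.30, 0.05, 0.22, 0.18])   # map composition per unit
ref_prop = np.array([0.10, 0.34, 0.04, 0.25, 0.15])   # reference composition

diff = map_prop - ref_prop
md = diff.mean()                                  # mean deviation (bias)
mad = np.abs(diff).mean()                         # mean absolute deviation
rmse = np.sqrt((diff ** 2).mean())                # root mean square error
corr = np.corrcoef(map_prop, ref_prop)[0, 1]      # correlation
print(round(md, 3), round(mad, 3), round(rmse, 3), round(corr, 3))
```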
Bionimbus: a cloud for managing, analyzing and sharing large genomics datasets.
Heath, Allison P; Greenway, Matthew; Powell, Raymond; Spring, Jonathan; Suarez, Rafael; Hanley, David; Bandlamudi, Chai; McNerney, Megan E; White, Kevin P; Grossman, Robert L
2014-01-01
As large genomics and phenotypic datasets are becoming more common, it is increasingly difficult for most researchers to access, manage, and analyze them. One possible approach is to provide the research community with several petabyte-scale cloud-based computing platforms containing these data, along with tools and resources to analyze them. Bionimbus is an open source cloud-computing platform that is based primarily upon OpenStack, which manages on-demand virtual machines that provide the required computational resources, and GlusterFS, which is a high-performance clustered file system. Bionimbus also includes Tukey, a portal and associated middleware that provides a single entry point and a single sign-on for the various Bionimbus resources, and Yates, which automates the installation, configuration, and maintenance of the required software infrastructure. Bionimbus is used by a variety of projects to process genomics and phenotypic data. For example, it is used by an acute myeloid leukemia resequencing project at the University of Chicago. The project requires several computational pipelines, including pipelines for quality control, alignment, variant calling, and annotation. For each sample, the alignment step requires eight CPUs for about 12 h, and BAM file sizes range from 5 GB to 10 GB per sample. Most members of the research community have difficulty downloading large genomics datasets and obtaining sufficient storage and computer resources to manage and analyze the data. Cloud computing platforms, such as Bionimbus, with data commons that contain large genomics datasets, are one choice for broadening access to research data in genomics. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
C.R. Lane; E. Hobden; L. Laurenson; V.C. Barton; K.J.D. Hughes; H. Swan; N. Boonham; A.J. Inman
2008-01-01
Plant health regulations to prevent the introduction and spread of Phytophthora ramorum and P. kernoviae require rapid, cost effective diagnostic methods for screening large numbers of plant samples at the time of inspection. Current on-site techniques require expensive equipment, considerable expertise and are not suited for plant...
3D-Laser-Scanning Technique Applied to Bulk Density Measurements of Apollo Lunar Samples
NASA Technical Reports Server (NTRS)
Macke, R. J.; Kent, J. J.; Kiefer, W. S.; Britt, D. T.
2015-01-01
In order to better interpret gravimetric data from orbiters such as GRAIL and LRO to understand the subsurface composition and structure of the lunar crust, it is important to have a reliable database of the density and porosity of lunar materials. To this end, we have been surveying these physical properties in both lunar meteorites and Apollo lunar samples. To measure porosity, both grain density and bulk density are required. For bulk density, our group has historically utilized sub-mm bead immersion techniques extensively, though several factors have made this technique problematic for our work with Apollo samples. Samples allocated for measurement are often smaller than optimal for the technique, leading to large error bars. Also, for some samples we were required to use pure alumina beads instead of our usual glass beads. The alumina beads were subject to undesirable static effects, producing unreliable results. Other investigators have tested the use of 3D laser scanners on meteorites for measuring bulk volumes. Early work, though promising, was plagued with difficulties including poor response on dark or reflective surfaces, difficulty reproducing sharp edges, and large processing time for producing shape models. Due to progress in technology, however, laser scanners have improved considerably in recent years. We tested this technique on 27 lunar samples in the Apollo collection using a scanner at NASA Johnson Space Center. We found it to be reliable and more precise than beads, with the added benefit that it involves no direct contact with the sample, enabling the study of particularly friable samples for which bead immersion is not possible.
Decoder calibration with ultra small current sample set for intracortical brain-machine interface
NASA Astrophysics Data System (ADS)
Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping
2018-04-01
Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and each recalibration typically requires a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that can achieve good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on movement and sensory paradigms. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, and made it possible to take advantage of large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated with the PDA method achieved much better and more robust performance in all sessions than the three other calibration methods, in both monkeys. Significance. (1) This study brings transfer learning theory into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement paradigm and the sensory paradigm, indicating viable generalization. By reducing the demand for large current training data, this new method may facilitate the application of intracortical brain-machine interfaces in clinical practice.
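The abstract does not give the details of the PDA algorithm; the sketch below is a generic PCA subspace-alignment baseline (in the style of Fernando et al.) on synthetic neural features, shown only to illustrate the idea of reusing large historical data alongside an ultra-small current set.

```python
# Generic PCA subspace-alignment sketch for reusing a decoder trained on large
# historical data together with a very small current data set. This is NOT the
# paper's PDA algorithm, only a standard domain-adaptation baseline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
d, k = 60, 10                                   # channels, subspace dimension
X_hist = rng.normal(size=(2000, d))             # historical neural features
y_hist = (X_hist[:, 0] + 0.1 * rng.normal(size=2000) > 0).astype(int)
shift = rng.normal(scale=0.5, size=d)           # simulated nonstationarity
X_curr = rng.normal(size=(25, d)) + shift       # "ultra small" current set
y_curr = (X_curr[:, 0] - shift[0] + 0.1 * rng.normal(size=25) > 0).astype(int)

P_hist = PCA(k).fit(X_hist).components_.T       # d x k historical basis
P_curr = PCA(k).fit(X_curr).components_.T       # d x k current basis
M = P_hist.T @ P_curr                           # alignment matrix
Z_hist = (X_hist - X_hist.mean(0)) @ P_hist @ M # historical data mapped toward
Z_curr = (X_curr - X_curr.mean(0)) @ P_curr     # the current subspace

clf = LogisticRegression(max_iter=1000).fit(
    np.vstack([Z_hist, Z_curr]), np.concatenate([y_hist, y_curr]))
print("training accuracy on pooled, aligned data:",
      clf.score(np.vstack([Z_hist, Z_curr]), np.concatenate([y_hist, y_curr])))
```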
Novel diamond cells for neutron diffraction using multi-carat CVD anvils
Boehler, R.; Molaison, J. J.; Haberl, B.
2017-08-17
Traditionally, neutron diffraction at high pressure has been severely limited in pressure because low neutron flux required large sample volumes and therefore large-volume presses. At the high-flux Spallation Neutron Source at the Oak Ridge National Laboratory, we have developed new, large-volume diamond anvil cells for neutron diffraction. The main features of these cells are multi-carat, single-crystal chemical vapor deposition diamonds, very large diffraction apertures, and gas membranes to maintain pressure stability, especially upon cooling. A new cell has been tested for diffraction up to 40 GPa with an unprecedented sample volume of ~0.15 mm³. High-quality spectra were obtained in 1 h for crystalline Ni and in ~8 h for disordered glassy carbon. Finally, these new techniques will open the way for routine megabar neutron diffraction experiments.
A rapid method for the sampling of atmospheric water vapour for isotopic analysis.
Peters, Leon I; Yakir, Dan
2010-01-01
Analysis of the stable isotopic composition of atmospheric moisture is widely applied in the environmental sciences. Traditional methods for obtaining isotopic compositional data from ambient moisture have required complicated sampling procedures, expensive and sophisticated distillation lines, hazardous consumables, and lengthy treatments prior to analysis. Newer laser-based techniques are expensive and usually not suitable for large-scale field campaigns, especially in cases where access to mains power is not feasible or high spatial coverage is required. Here we outline the construction and usage of a novel vapour-sampling system based on a battery-operated Stirling cycle cooler, which is simple to operate, does not require any consumables, or post-collection distillation, and is light-weight and highly portable. We demonstrate the ability of this system to reproduce delta(18)O isotopic compositions of ambient water vapour, with samples taken simultaneously by a traditional cryogenic collection technique. Samples were collected over 1 h directly into autosampler vials and were analysed by mass spectrometry after pyrolysis of 1 microL aliquots to CO. This yielded an average error of < +/-0.5 per thousand, approximately equal to the signal-to-noise ratio of traditional approaches. This new system provides a rapid and reliable alternative to conventional cryogenic techniques, particularly in cases requiring high sample throughput or where access to distillation lines, slurry maintenance or mains power is not feasible. Copyright 2009 John Wiley & Sons, Ltd.
VALIDATION OF STANDARD ANALYTICAL PROTOCOL FOR ...
There is a growing concern with the potential for terrorist use of chemical weapons to cause civilian harm. In the event of an actual or suspected outdoor release of chemically hazardous material in a large area, the extent of contamination must be determined. This requires a system with the ability to prepare and quickly analyze a large number of contaminated samples for the traditional chemical agents, as well as numerous toxic industrial chemicals. Liquid samples (both aqueous and organic), solid samples (e.g., soil), vapor samples (e.g., air) and mixed state samples, all ranging from household items to deceased animals, may require some level of analyses. To meet this challenge, the U.S. Environmental Protection Agency (U.S. EPA) National Homeland Security Research Center, in collaboration with experts from across U.S. EPA and other Federal Agencies, initiated an effort to identify analytical methods for the chemical and biological agents that could be used to respond to a terrorist attack or a homeland security incident. U.S. EPA began development of standard analytical protocols (SAPs) for laboratory identification and measurement of target agents in case of a contamination threat. These methods will be used to help assist in the identification of existing contamination, the effectiveness of decontamination, as well as clearance for the affected population to reoccupy previously contaminated areas. One of the first SAPs developed was for the determin
Survey of Large Methane Emitters in North America
NASA Astrophysics Data System (ADS)
Deiker, S.
2017-12-01
It has been theorized that methane emissions in the oil and gas industry follow log-normal or "fat tail" distributions, with large numbers of small sources for every very large source. Such distributions would have significant policy and operational implications. Unfortunately, by their very nature such distributions require large sample sizes to verify. Until recently, such large-scale studies would have been prohibitively expensive. The largest public study to date sampled 450 wells, an order of magnitude too few to effectively constrain these models. During 2016 and 2017, Kairos Aerospace conducted a series of surveys using the LeakSurveyor imaging spectrometer, mounted on light aircraft. This small, lightweight instrument was designed to rapidly locate large emission sources. The resulting survey covers over three million acres of oil and gas production, including over 100,000 wells, thousands of storage tanks, and over 7,500 miles of gathering lines. This data set now allows us to probe the distribution of large methane emitters. Results of this survey, and implications for methane emission distributions, methane policy, and LDAR, will be discussed.
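As a hedged aside on why fat-tailed emitter distributions demand large surveys, the toy simulation below draws a log-normal emitter population (arbitrary parameters) and shows both the dominance of the largest sources and the instability of small-sample estimates.

```python
# Toy illustration (arbitrary parameters): with a log-normal ("fat tail")
# emitter population, small surveys give unstable totals because a few very
# large sources dominate.
import numpy as np

rng = np.random.default_rng(4)
population = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)  # emission rates
top5_share = np.sort(population)[-5000:].sum() / population.sum()
print(f"share of total emissions from the top 5% of sources: {top5_share:.2%}")

for n in (450, 5000, 50_000):               # 450 ~ largest public study cited
    means = [rng.choice(population, n, replace=False).mean() for _ in range(200)]
    cv = np.std(means) / np.mean(means)
    print(f"n={n:6d}: coefficient of variation of the estimated mean = {cv:.2f}")
```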
Ruple-Czerniak, A; Bolte, D S; Burgess, B A; Morley, P S
2014-07-01
Nosocomial salmonellosis is an important problem in veterinary hospitals that treat horses and other large animals. Detection and mitigation of outbreaks and prevention of healthcare-associated infections often require detection of Salmonella enterica in the hospital environment. The objective was to compare 2 previously published methods for detecting environmental contamination with S. enterica in a large animal veterinary teaching hospital, in a hospital-based comparison of environmental sampling techniques. A total of 100 pairs of environmental samples were collected from stalls used to house large animal cases (horses, cows or New World camelids) that were confirmed to be shedding S. enterica by faecal culture. Stalls were cleaned and disinfected prior to sampling, and the same areas within each stall were sampled for the paired samples. One method of detection used sterile, premoistened sponges that were cultured using thioglycolate enrichment before plating on XLT-4 agar. The other method used electrostatic wipes that were cultured using buffered peptone water, tetrathionate and Rappaport-Vassiliadis R10 broths before plating on XLT-4 agar. Salmonella enterica was recovered from 14% of samples processed using the electrostatic wipe sampling and culture procedure, whereas S. enterica was recovered from only 4% of samples processed using the sponge sampling and culture procedure. There was test agreement for 85 pairs of culture-negative samples and 3 pairs of culture-positive samples. However, the remaining 12 pairs of samples with discordant results created significant disagreement between the 2 detection methods (P<0.01). Persistence of Salmonella in the environment of veterinary hospitals can occur even with rigorous cleaning and disinfection. Use of sensitive methods for detection of environmental contamination is critical when detecting and mitigating this problem in veterinary hospitals. These results suggest that the electrostatic wipe sampling and culture method was more sensitive than the sponge sampling and culture method. © 2013 EVJ Ltd.
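The abstract reports significant disagreement (P<0.01) without naming the test; McNemar's exact test is a standard choice for paired binary detection data, and the sketch below applies it to the counts implied by the abstract (85 both-negative, 3 both-positive, and 12 discordant pairs assumed to split 11 wipe-only vs. 1 sponge-only so that the marginals match 14% and 4%).

```python
# McNemar's exact test on the paired detection counts implied by the abstract.
# The abstract does not state which test was used; this is one standard option.
from statsmodels.stats.contingency_tables import mcnemar

table = [[3, 1],    # row 0: sponge-positive, split by wipe +/-
         [11, 85]]  # row 1: sponge-negative, split by wipe +/-
result = mcnemar(table, exact=True)
print(f"McNemar exact p-value = {result.pvalue:.4f}")   # ~0.006, i.e. P < 0.01
```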
An internal pilot design for prospective cancer screening trials with unknown disease prevalence.
Brinton, John T; Ringham, Brandy M; Glueck, Deborah H
2015-10-13
For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population, and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample size.
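A hedged sketch of the internal-pilot workflow only: the sample size is recomputed once interim estimates of prevalence and variance are available. The formula below is a generic two-sample approximation with placeholder numbers, not the authors' exact calculation for comparing diagnostic accuracy.

```python
# Generic internal-pilot sketch: recompute the required sample size once interim
# estimates of disease prevalence and outcome variance are available. The power
# formula is a textbook two-sample approximation, not the authors' method, and
# all planning numbers are placeholders.
from scipy.stats import norm

def required_n(delta, sigma, prevalence, alpha=0.05, power=0.80):
    """Subjects to screen so the expected number of diseased cases supports a
    two-sided two-sample z-comparison with effect `delta` and SD `sigma`."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_cases = 2 * (z * sigma / delta) ** 2      # diseased subjects per arm
    return int(round(n_cases / prevalence))     # inflate for screening prevalence

# Design-stage guesses vs. interim (internal pilot) estimates.
n_planned = required_n(delta=0.10, sigma=0.25, prevalence=0.05)
n_updated = required_n(delta=0.10, sigma=0.32, prevalence=0.03)  # from pilot data
print("planned N:", n_planned, " re-estimated N:", n_updated)
```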
MicroRNA signatures in B-cell lymphomas
Di Lisio, L; Sánchez-Beato, M; Gómez-López, G; Rodríguez, M E; Montes-Moreno, S; Mollejo, M; Menárguez, J; Martínez, M A; Alves, F J; Pisano, D G; Piris, M A; Martínez, N
2012-01-01
Accurate lymphoma diagnosis, prognosis and therapy still require additional markers. We explore the potential relevance of microRNA (miRNA) expression in a large series that included all major B-cell non-Hodgkin lymphoma (NHL) types. The data generated were also used to identify miRNAs differentially expressed in Burkitt lymphoma (BL) and diffuse large B-cell lymphoma (DLBCL) samples. A series of 147 NHL samples and 15 controls were hybridized on a human miRNA one-color platform containing probes for 470 human miRNAs. Each lymphoma type was compared against the entire set of NHLs. BL was also directly compared with DLBCL, and 43 preselected miRNAs were analyzed in a new series of routinely processed samples of 28 BLs and 43 DLBCLs using quantitative reverse transcription-polymerase chain reaction. A signature of 128 miRNAs enabled the characterization of lymphoma neoplasms, reflecting the lymphoma type, cell of origin and/or discrete oncogene alterations. Comparative analysis of BL and DLBCL yielded 19 differentially expressed miRNAs, which were confirmed in a second confirmation series of 71 paraffin-embedded samples. The set of differentially expressed miRNAs found here expands the range of potential diagnostic markers for lymphoma diagnosis, especially when differential diagnosis of BL and DLBCL is required. PMID:22829247
Mars sample return mission architectures utilizing low thrust propulsion
NASA Astrophysics Data System (ADS)
Derz, Uwe; Seboldt, Wolfgang
2012-08-01
The Mars sample return mission is a flagship mission within ESA's Aurora program and envisioned to take place in the timeframe of 2020-2025. Previous studies developed a mission architecture consisting of two elements, an orbiter and a lander, each utilizing chemical propulsion and a heavy launcher like Ariane 5 ECA. The lander transports an ascent vehicle to the surface of Mars. The orbiter performs a separate impulsive transfer to Mars, conducts a rendezvous in Mars orbit with the sample container, delivered by the ascent vehicle, and returns the samples back to Earth in a small Earth entry capsule. Because the launch of the heavy orbiter by Ariane 5 ECA makes an Earth swing by mandatory for the trans-Mars injection, its total mission time amounts to about 1460 days. The present study takes a fresh look at the subject and conducts a more general mission and system analysis of the space transportation elements including electric propulsion for the transfer. Therefore, detailed spacecraft models for orbiters, landers and ascent vehicles are developed. Based on that, trajectory calculations and optimizations of interplanetary transfers, Mars entries, descents and landings as well as Mars ascents are carried out. The results of the system analysis identified electric propulsion for the orbiter as most beneficial in terms of launch mass, leading to a reduction of launch vehicle requirements and enabling a launch by a Soyuz-Fregat into GTO. Such a sample return mission could be conducted within 1150-1250 days. Concerning the lander, a separate launch in combination with electric propulsion leads to a significant reduction of launch vehicle requirements, but also requires a large number of engines and correspondingly a large power system. Therefore, a lander performing a separate chemical transfer could possibly be more advantageous. Alternatively, a second possible mission architecture has been developed, requiring only one heavy launch vehicle (e.g., Proton). In that case the lander is transported piggyback by the electrically propelled orbiter.
DAQ application of PC oscilloscope for chaos fiber-optic fence system based on LabVIEW
NASA Astrophysics Data System (ADS)
Lu, Manman; Fang, Nian; Wang, Lutang; Huang, Zhaoming; Sun, Xiaofei
2011-12-01
In order to obtain both a high sample rate and a large buffer in data acquisition (DAQ) for a chaos fiber-optic fence system, we developed a double-channel high-speed DAQ application for a PicoScope 5203 digital oscilloscope based on LabVIEW. We accomplished this by creating call library function (CLF) nodes to call the DAQ functions in the two dynamic link libraries (DLLs) PS5000.dll and PS5000wrap.dll provided by Pico Technology Company. The maximum real-time sample rate of the DAQ application can reach 1 GS/s. The sample-time and amplitude resolutions of the application can be controlled by changing their units in the block diagram, as can the start and end times of the sampling operations. The experimental results show that the application provides a sufficiently high sample rate and a large enough buffer to meet the demanding DAQ requirements of the chaos fiber-optic fence system.
Lambertini, Elisabetta; Spencer, Susan K.; Bertz, Phillip D.; Loge, Frank J.; Kieke, Burney A.; Borchardt, Mark A.
2008-01-01
Available filtration methods to concentrate waterborne viruses are either too costly for studies requiring large numbers of samples, limited to small sample volumes, or not very portable for routine field applications. Sodocalcic glass wool filtration is a cost-effective and easy-to-use method to retain viruses, but its efficiency and reliability are not adequately understood. This study evaluated glass wool filter performance to concentrate the four viruses on the U.S. Environmental Protection Agency contaminant candidate list, i.e., coxsackievirus, echovirus, norovirus, and adenovirus, as well as poliovirus. Total virus numbers recovered were measured by quantitative reverse transcription-PCR (qRT-PCR); infectious polioviruses were quantified by integrated cell culture (ICC)-qRT-PCR. Recovery efficiencies averaged 70% for poliovirus, 14% for coxsackievirus B5, 19% for echovirus 18, 21% for adenovirus 41, and 29% for norovirus. Virus strain and water matrix affected recovery, with significant interaction between the two variables. Optimal recovery was obtained at pH 6.5. No evidence was found that water volume, filtration rate, and number of viruses seeded influenced recovery. The method was successful in detecting indigenous viruses in municipal wells in Wisconsin. Long-term continuous filtration retained viruses sufficiently for their detection for up to 16 days after seeding for qRT-PCR and up to 30 days for ICC-qRT-PCR. Glass wool filtration is suitable for large-volume samples (1,000 liters) collected at high filtration rates (4 liters min−1), and its low cost makes it advantageous for studies requiring large numbers of samples. PMID:18359827
Huang, Lei
2015-01-01
To address the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are used as the state variables. Time-varying estimators are used to obtain the unknown mean and variance of the observation noise. Using the robust Kalman filter, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy; thus, the required sample size is reduced. It can be applied to gyro random noise modeling applications in which a fast and accurate ARMA modeling method is required. PMID:26437409
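The robust filter and time-varying noise estimators are the paper's; the sketch below shows only the underlying idea of treating the AR coefficients as the state of a (standard) Kalman filter, which reduces to recursive least squares, on synthetic AR(2) data.

```python
# Minimal sketch: estimate AR model parameters with a Kalman filter whose state
# vector is the parameter vector itself (equivalent to recursive least squares).
# The paper's robust filter and noise-statistics estimation are not implemented;
# the data are synthetic AR(2) "gyro noise".
import numpy as np

rng = np.random.default_rng(5)
a_true = np.array([0.6, -0.3])                 # true AR(2) coefficients
N = 2000
y = np.zeros(N)
for t in range(2, N):
    y[t] = a_true @ y[t-2:t][::-1] + rng.normal(scale=0.1)

x = np.zeros(2)                                # state = [a1, a2] estimate
P = np.eye(2) * 10.0                           # state covariance
R = 0.1 ** 2                                   # observation noise variance
for t in range(2, N):
    H = np.array([y[t-1], y[t-2]])             # observation row
    S = H @ P @ H + R
    K = P @ H / S                              # Kalman gain
    x = x + K * (y[t] - H @ x)
    P = P - np.outer(K, H) @ P

print("estimated AR coefficients:", np.round(x, 3), " true:", a_true)
```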
Advances in spectroscopic methods for quantifying soil carbon
Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco; Hively, W. Dean
2012-01-01
The current gold standard for soil carbon (C) determination is elemental C analysis using dry combustion. However, this method requires expensive consumables, is limited by the number of samples that can be processed (~100/d), and is restricted to the determination of total carbon. With increased interest in soil C sequestration, faster methods of analysis are needed, and there is growing interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared spectral ranges. These spectral methods can decrease analytical requirements and speed sample processing, be applied to large landscape areas using remote sensing imagery, and be used to predict multiple analytes simultaneously. However, the methods require localized calibrations to establish the relationship between spectral data and reference analytical data, and also have additional, specific problems. For example, remote sensing is capable of scanning entire watersheds for soil carbon content but is limited to the surface layer of tilled soils and may require difficult and extensive field sampling to obtain proper localized calibration reference values. The objective of this chapter is to discuss the present state of spectroscopic methods for determination of soil carbon.
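As a minimal illustration of the calibration step described above, the sketch below fits a partial least squares (PLS) regression of a reference carbon value on simulated spectra; all spectra, wavelengths, and carbon values are placeholders.

```python
# Minimal PLS calibration sketch: predict soil carbon from diffuse-reflectance
# spectra. Spectra and reference carbon values are simulated placeholders; a
# real calibration would use locally collected reference samples.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_samples, n_bands = 200, 300
spectra = rng.normal(size=(n_samples, n_bands)).cumsum(axis=1)   # smooth-ish curves
carbon = 0.02 * spectra[:, 120] - 0.015 * spectra[:, 250] + rng.normal(0, 0.1, n_samples)

X_cal, X_val, y_cal, y_val = train_test_split(spectra, carbon, random_state=0)
pls = PLSRegression(n_components=8).fit(X_cal, y_cal)
print("validation R^2 = %.2f" % r2_score(y_val, pls.predict(X_val).ravel()))
```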
NASA Astrophysics Data System (ADS)
Burritt, Rosemary; Francois, Elizabeth; Windler, Gary; Chavez, David
2017-06-01
Diaminoazoxyfurazan (DAAF) has many of the safety characteristics of an insensitive high explosive (IHE): it is extremely insensitive to impact and friction and is comparable to triaminotrinitrobenzene (TATB) in this respect. Conversely, it demonstrates many performance characteristics of a conventional high explosive (CHE). DAAF has a small failure diameter of about 1.25 mm and can be sensitive to shock under the right conditions. Large-particle-size DAAF will not initiate in a typical exploding foil initiator (EFI) configuration, but smaller particle sizes will. Large-particle-size DAAF (40 μm) was crash-precipitated and ball-milled into six distinct samples and pressed into pellets with a density of 1.60 g/cc (91% TMD). To investigate the effect of particle size and surface area on the direct initiation of DAAF, multiple threshold tests were performed on each sample of DAAF in different EFI configurations, which varied in flyer thickness and/or bridge size. Comparative tests were performed examining threshold voltage and correlated to Photon Doppler Velocimetry (PDV) results. The samples with larger particle sizes and surface area required more energy to initiate, while the smaller particle sizes required less energy and could be initiated with smaller-diameter flyers.
Nonionic organic contaminants (NOCs) are difficult to measure in the water column due to their inherent chemical properties resulting in low water solubility and high particle activity. Traditional sampling methods require large quantities of water to be extracted and interferen...
Undergraduate Coursework in Economics: A Survey Perspective
ERIC Educational Resources Information Center
Siegfried, John J.; Walstad, William B.
2014-01-01
Survey results from a large sample of economics departments describe offerings for principles courses, coursework requirements for economics majors, and program augmentations such as capstone courses, senior seminars, and honors programs. Findings are reported for all institutions, and institutions are subdivided into six different categories…
A Numerical Climate Observing Network Design Study
NASA Technical Reports Server (NTRS)
Stammer, Detlef
2003-01-01
This project was concerned with three related questions of an optimal design of a climate observing system: 1. The spatial sampling characteristics required from an ARGO system. 2. The degree to which surface observations from ARGO can be used to calibrate and test satellite remote sensing observations of sea surface salinity (SSS) as it is anticipated now. 3. The more general design of a climate observing system as it is required in the near future for CLIVAR in the Atlantic. An important question in implementing an observing system is that of the sampling density required to observe climate-related variations in the ocean. For that purpose this project was concerned with the sampling requirements for the ARGO float system, but investigated also other elements of a climate observing system. As part of this project we studied the horizontal and vertical sampling characteristics of a global ARGO system which is required to make it fully complementary to altimeter data, with the goal to capture climate-related variations on large spatial scales (less than 1000 km). We addressed this question in the framework of a numerical model study in the North Atlantic with a 1/6-degree horizontal resolution. The advantage of a numerical design study is the knowledge of the full model state. Sampled by a synthetic float array, model results will therefore allow us to test and improve existing deployment strategies with the goal to make the system as optimal and cost-efficient as possible. Attachment: "Optimal observations for variational data assimilation".
Integrated Optical Information Processing
1988-08-01
applications in optical disk memory systems [9]. This device is constructed in a glass/SiO2/Si waveguide. The choice of a Si substrate allows for the...contact mask) were formed in the photoresist deposited on all of the samples, we covered the unwanted gratings on each sample with cover glass slides...processing, let us consider TeO2 (v = 620 m/s) as a potential substrate for applications requiring large time delays. This consideration is despite
NASA Technical Reports Server (NTRS)
Nebenfuhr, A.; Lomax, T. L.
1998-01-01
We have developed an improved method for determination of gene expression levels with RT-PCR. The procedure is rapid and does not require extensive optimization or densitometric analysis. Since the detection of individual transcripts is PCR-based, small amounts of tissue samples are sufficient for the analysis of expression patterns in large gene families. Using this method, we were able to rapidly screen nine members of the Aux/IAA family of auxin-responsive genes and identify those genes which vary in message abundance in a tissue- and light-specific manner. While not offering the accuracy of conventional semi-quantitative or competitive RT-PCR, our method allows quick screening of large numbers of genes in a wide range of RNA samples with just a thermal cycler and standard gel analysis equipment.
NASA Astrophysics Data System (ADS)
Nelson, Johanna; Yang, Yuan; Misra, Sumohan; Andrews, Joy C.; Cui, Yi; Toney, Michael F.
2013-09-01
Radiation damage is a topic typically sidestepped in formal discussions of characterization techniques utilizing ionizing radiation. Nevertheless, such damage is critical to consider when planning and performing experiments requiring large radiation doses or radiation sensitive samples. High resolution, in situ transmission X-ray microscopy of Li-ion batteries involves both large X-ray doses and radiation sensitive samples. To successfully identify changes over time solely due to an applied current, the effects of radiation damage must be identified and avoided. Although radiation damage is often significantly sample and instrument dependent, the general procedure to identify and minimize damage is transferable. Here we outline our method of determining and managing the radiation damage observed in lithium sulfur batteries during in situ X-ray imaging on the transmission X-ray microscope at Stanford Synchrotron Radiation Lightsource.
Satellite orbit and data sampling requirements
NASA Technical Reports Server (NTRS)
Rossow, William
1993-01-01
Climate forcings and feedbacks vary over a wide range of time and space scales. The operation of non-linear feedbacks can couple variations at widely separated time and space scales and cause climatological phenomena to be intermittent. Consequently, monitoring of global, decadal changes in climate requires global observations that cover the whole range of space-time scales and are continuous over several decades. The sampling of smaller space-time scales must have sufficient statistical accuracy to measure the small changes in the forcings and feedbacks anticipated in the next few decades, while continuity of measurements is crucial for unambiguous interpretation of climate change. Shorter records of monthly and regional (500-1000 km) measurements with similar accuracies can also provide valuable information about climate processes, when 'natural experiments' such as large volcanic eruptions or El Ninos occur. In this section existing satellite datasets and climate model simulations are used to test the satellite orbits and sampling required to achieve accurate measurements of changes in forcings and feedbacks at monthly frequency and 1000 km (regional) scale.
Internet cognitive testing of large samples needed in genetic research.
Haworth, Claire M A; Harlaar, Nicole; Kovas, Yulia; Davis, Oliver S P; Oliver, Bonamy R; Hayiou-Thomas, Marianna E; Frances, Jane; Busfield, Patricia; McMillan, Andrew; Dale, Philip S; Plomin, Robert
2007-08-01
Quantitative and molecular genetic research requires large samples to provide adequate statistical power, but it is expensive to test large samples in person, especially when the participants are widely distributed geographically. Increasing access to inexpensive and fast Internet connections makes it possible to test large samples efficiently and economically online. Reliability and validity of Internet testing for cognitive ability have not been previously reported; these issues are especially pertinent for testing children. We developed Internet versions of reading, language, mathematics and general cognitive ability tests and investigated their reliability and validity for 10- and 12-year-old children. We tested online more than 2500 pairs of 10-year-old twins and compared their scores to similar internet-based measures administered online to a subsample of the children when they were 12 years old (> 759 pairs). Within 3 months of the online testing at 12 years, we administered standard paper and pencil versions of the reading and mathematics tests in person to 30 children (15 pairs of twins). Scores on Internet-based measures at 10 and 12 years correlated .63 on average across the two years, suggesting substantial stability and high reliability. Correlations of about .80 between Internet measures and in-person testing suggest excellent validity. In addition, the comparison of the internet-based measures to ratings from teachers based on criteria from the UK National Curriculum suggests good concurrent validity for these tests. We conclude that Internet testing can be reliable and valid for collecting cognitive test data on large samples even for children as young as 10 years.
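The reliability figures above come from the study's data; the block below only sketches how split-half reliability (with Spearman-Brown correction) and test-retest correlation are typically computed, using synthetic item scores.

```python
# Sketch of split-half reliability (Spearman-Brown corrected) and test-retest
# correlation on synthetic item-level scores; not the study's data.
import numpy as np

rng = np.random.default_rng(7)
n_children, n_items = 500, 40
ability = rng.normal(size=n_children)
items_t1 = ability[:, None] + rng.normal(scale=1.0, size=(n_children, n_items))
items_t2 = ability[:, None] + rng.normal(scale=1.0, size=(n_children, n_items))

odd, even = items_t1[:, 0::2].sum(1), items_t1[:, 1::2].sum(1)
r_half = np.corrcoef(odd, even)[0, 1]
split_half = 2 * r_half / (1 + r_half)           # Spearman-Brown correction
test_retest = np.corrcoef(items_t1.sum(1), items_t2.sum(1))[0, 1]
print(f"split-half reliability ~ {split_half:.2f}, test-retest r ~ {test_retest:.2f}")
```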
Bartsch, L.A.; Richardson, W.B.; Naimo, T.J.
1998-01-01
Estimation of benthic macroinvertebrate populations over large spatial scales is difficult due to the high variability in abundance and the cost of sample processing and taxonomic analysis. To determine a cost-effective, statistically powerful sample design, we conducted an exploratory study of the spatial variation of benthic macroinvertebrates in a 37 km reach of the Upper Mississippi River. We sampled benthos at 36 sites within each of two strata, contiguous backwater and channel border. Three standard ponar (525 cm(2)) grab samples were obtained at each site ('Original Design'). Analysis of variance and sampling cost of strata-wide estimates for abundance of Oligochaeta, Chironomidae, and total invertebrates showed that only one ponar sample per site ('Reduced Design') yielded essentially the same abundance estimates as the Original Design, while reducing the overall cost by 63%. A posteriori statistical power analysis (alpha = 0.05, beta = 0.20) on the Reduced Design estimated that at least 18 sites per stratum were needed to detect differences in mean abundance between contiguous backwater and channel border areas for Oligochaeta, Chironomidae, and total invertebrates. Statistical power was nearly identical for the three taxonomic groups. The abundances of several taxa of concern (e.g., Hexagenia mayflies and Musculium fingernail clams) were too spatially variable to estimate power with our method. Resampling simulations indicated that to achieve adequate sampling precision for Oligochaeta, at least 36 sample sites per stratum would be required, whereas a sampling precision of 0.2 would not be attained with any sample size for Hexagenia in channel border areas, or Chironomidae and Musculium in both strata given the variance structure of the original samples. Community-wide diversity indices (Brillouin and 1-Simpsons) increased as sample area per site increased. The backwater area had higher diversity than the channel border area. The number of sampling sites required to sample benthic macroinvertebrates during our sampling period depended on the study objective and ranged from 18 to more than 40 sites per stratum. No single sampling regime would efficiently and adequately sample all components of the macroinvertebrate community.
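A sketch of the kind of a posteriori power calculation described above (two-sample comparison, alpha = 0.05, power = 0.80); the effect size is a placeholder rather than the study's estimated variance structure.

```python
# A posteriori power calculation sketch: sites per stratum needed to detect a
# difference in mean abundance between two strata with a two-sample t-test
# (alpha = 0.05, power = 0.80). The effect size is illustrative only.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.7   # Cohen's d, placeholder
n_per_stratum = TTestIndPower().solve_power(effect_size=effect_size,
                                            alpha=0.05, power=0.80,
                                            alternative="two-sided")
print("sites required per stratum: %.1f" % n_per_stratum)
```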
Low-sample flow secondary electrospray ionization: improving vapor ionization efficiency.
Vidal-de-Miguel, G; Macía, M; Pinacho, P; Blanco, J
2012-10-16
In secondary electrospray ionization (SESI) systems, gaseous analytes exposed to an electrospray plume become ionized after charge is transferred from the charging electrosprayed particles to the sample species. Current SESI systems have shown a certain potential. However, their ionization efficiency is limited by space charge repulsion and by the high sample flows required to prevent vapor dilution. As a result, they have a poor conversion ratio of vapor into ions. We have developed and tested a new SESI configuration, termed low-flow SESI, that permits the reduction of the required sample flows. Although the ion to vapor concentration ratio is limited, the ionic flow to sample vapor flow ratio theoretically is not. The new ionizer is coupled to a planar differential mobility analyzer (DMA) and requires only 0.2 lpm of vapor sample flow to produce 3.5 lpm of ionic flow. The achieved ionization efficiency is 1/700 (one ion for every 700 molecules) for TNT and, thus, compared with previous SESI ionizers coupled with atmospheric pressure ionization-mass spectrometry (API-MS) (Mesonero, E.; Sillero, J. A.; Hernández, M.; Fernandez de la Mora, J. Philadelphia PA, 2009), has been improved by a large factor of at least 50-100 (our measurements indicate 70). The new ionizer coupled with the planar DMA and a triple quadrupole mass spectrometer (ABSciex API5000) requires only 20 fg (50 million molecules) to produce a discernible signal after mobility and MS(2) analysis.
NASA Astrophysics Data System (ADS)
Hurst, A.; Bowden, S. A.; Parnell, J.; Burchell, M. J.; Ball, A. J.
2007-12-01
There are a number of measurements relevant to planetary geology that can only be adequately performed by physically contacting a sample. This necessitates landing on the surface of a moon or planetary body or returning samples to Earth. The need to physically contact a sample is particularly important in the case of measurements that could detect medium to low concentrations of large organic molecules present in surface materials. Large organic molecules, although a trace component of many meteoritic materials and rocks on the surface of Earth, carry crucial information concerning the processing of meteoritic material in the surface and subsurface environments, and can be crucial indicators for the presence of life. Unfortunately, landing on the surface of a small planetary body or moon is complicated, particularly if the surface topography is only poorly characterised and the atmosphere is thin, thus requiring a propulsion system for a soft landing. One alternative to a surface landing may be to use an impactor launched from an orbiting spacecraft to launch material from the planet's surface and shallow subsurface into orbit. Ejected material could then be collected by a follow-up spacecraft and analyzed. The mission scenario considered in the Europa-Ice Clipper mission proposal included both sample return and the analysis of captured particles. Employing such a sampling procedure to analyse large organic molecules is only viable if large organic molecules present in ices survive hypervelocity impacts (HVIs). To investigate the survival of large organic molecules in HVIs with icy bodies, a two-stage light gas gun was used to fire steel projectiles (1-1.5 mm diameter) at samples of water ice containing large organic molecules (amino acids, anthracene, and beta-carotene, a biological pigment) at velocities > 4.8 km/s. UV-VIS spectroscopy of ejected material detected beta-carotene, indicating that large organic molecules can survive hypervelocity impacts. These preliminary results are yet to be scaled up to a point where they can be accurately interpreted in the context of a likely mission scenario. However, they strongly indicate that in a low-mass payload mission scenario where a lander has been considered unfeasible, such a sampling strategy merits further consideration.
Determinations of pesticides in food are often complicated by the presence of fats and require multiple cleanup steps before analysis. Cost-effective analytical methods are needed for conducting large-scale exposure studies. We examined two extraction methods, supercritical flu...
Automated blood-sample handling in the clinical laboratory.
Godolphin, W; Bodtker, K; Uyeno, D; Goh, L O
1990-09-01
The only significant advances in blood-taking in 25 years have been the disposable needle and evacuated blood-drawing tube. With the exception of a few isolated barcode experiments, most sample-tracking is performed through handwritten or computer-printed labels. Attempts to reduce the hazards of centrifugation have resulted in air-tight lids or chambers, the use of which is time-consuming and cumbersome. Most commonly used clinical analyzers require serum or plasma, distributed into specialized containers, unique to that analyzer. Aliquots for different tests are prepared by handpouring or pipetting. Moderate to large clinical laboratories perform so many different tests that even multi-analyzers performing multiple analyses on a single sample may account for only a portion of all tests ordered for a patient. Thus several aliquots of each specimen are usually required. We have developed a proprietary serial centrifuge and blood-collection tube suitable for incorporation into an automated or robotic sample-handling system. The system we propose is (a) safe--avoids or prevents biological danger to the many "handlers" of blood; (b) small--minimizes the amount of sample taken and space required to adapt to the needs of satellite and mobile testing, and direct interfacing with analyzers; (c) serial--permits each sample to be treated according to its own "merits," optimizes throughput, and facilitates flexible automation; and (d) smart--ensures quality results through monitoring and intelligent control of patient identification, sample characteristics, and separation process.
Glass wool filters for concentrating waterborne viruses and agricultural zoonotic pathogens
Millen, Hana T.; Gonnering, Jordan C.; Berg, Ryan K.; Spencer, Susan K.; Jokela, William E.; Pearce, John M.; Borchardt, Jackson S.; Borchardt, Mark A.
2012-01-01
The key first step in evaluating pathogen levels in suspected contaminated water is concentration. Concentration methods tend to be specific for a particular pathogen group, for example US Environmental Protection Agency Method 1623 for Giardia and Cryptosporidium1, which means multiple methods are required if the sampling program is targeting more than one pathogen group. Another drawback of current methods is the equipment can be complicated and expensive, for example the VIRADEL method with the 1MDS cartridge filter for concentrating viruses2. In this article we describe how to construct glass wool filters for concentrating waterborne pathogens. After filter elution, the concentrate is amenable to a second concentration step, such as centrifugation, followed by pathogen detection and enumeration by cultural or molecular methods. The filters have several advantages. Construction is easy and the filters can be built to any size for meeting specific sampling requirements. The filter parts are inexpensive, making it possible to collect a large number of samples without severely impacting a project budget. Large sample volumes (100s to 1,000s L) can be concentrated depending on the rate of clogging from sample turbidity. The filters are highly portable and with minimal equipment, such as a pump and flow meter, they can be implemented in the field for sampling finished drinking water, surface water, groundwater, and agricultural runoff. Lastly, glass wool filtration is effective for concentrating a variety of pathogen types so only one method is necessary. Here we report on filter effectiveness in concentrating waterborne human enterovirus, Salmonella enterica, Cryptosporidium parvum, and avian influenza virus.
Irish study of high-density Schizophrenia families: Field methods and power to detect linkage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kendler, K.S.; Straub, R.E.; MacLean, C.J.
Large samples of multiplex pedigrees will probably be needed to detect susceptibility loci for schizophrenia by linkage analysis. Standardized ascertainment of such pedigrees from culturally and ethnically homogeneous populations may improve the probability of detection and replication of linkage. The Irish Study of High-Density Schizophrenia Families (ISHDSF) was formed from standardized ascertainment of multiplex schizophrenia families in 39 psychiatric facilities covering over 90% of the population in Ireland and Northern Ireland. We here describe a phenotypic sample and a subset thereof, the linkage sample. Individuals were included in the phenotypic sample if adequate diagnostic information, based on personal interview and/or hospital record, was available. Only individuals with available DNA were included in the linkage sample. Inclusion of a pedigree into the phenotypic sample required at least two first, second, or third degree relatives with non-affective psychosis (NAP), one of whom had schizophrenia (S) or poor-outcome schizoaffective disorder (PO-SAD). Entry into the linkage sample required DNA samples on at least two individuals with NAP, of whom at least one had S or PO-SAD. Affection was defined by narrow, intermediate, and broad criteria. 75 refs., 6 tabs.
Maret, Terry R.; Ott, D.S.
2004-01-01
width was determined to be sufficient for collecting an adequate number of fish to estimate species richness and evaluate biotic integrity. At most sites, about 250 fish were needed to effectively represent 95 percent of the species present. Fifty-three percent of the sites assessed, using an IBI developed specifically for large Idaho rivers, received scores of less than 50, indicating poor biotic integrity.
Onda, Yuichi; Kato, Hiroaki; Hoshi, Masaharu; Takahashi, Yoshio; Nguyen, Minh-Long
2015-01-01
The Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident resulted in extensive radioactive contamination of the environment via deposited radionuclides such as radiocesium and (131)I. Evaluating the extent and level of environmental contamination is critical to protecting citizens in affected areas and to planning decontamination efforts. However, a standardized soil sampling protocol is needed in such emergencies to facilitate the collection of large, tractable samples for measuring gamma-emitting radionuclides. In this study, we developed an emergency soil sampling protocol based on preliminary sampling from the FDNPP accident-affected area. We also present the results of a preliminary experiment aimed to evaluate the influence of various procedures (e.g., mixing, number of samples) on measured radioactivity. Results show that sample mixing strongly affects measured radioactivity in soil samples. Furthermore, for homogenization, shaking the plastic sample container at least 150 times or disaggregating soil by hand-rolling in a disposable plastic bag is required. Finally, we determined that five soil samples within a 3 m × 3-m area are the minimum number required for reducing measurement uncertainty in the emergency soil sampling protocol proposed here. Copyright © 2014 Elsevier Ltd. All rights reserved.
Assessment of sampling stability in ecological applications of discriminant analysis
Williams, B.K.; Titus, K.
1988-01-01
A simulation study was undertaken to assess the sampling stability of the variable loadings in linear discriminant function analysis. A factorial design was used for the factors of multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. A review of 60 published studies and 142 individual analyses indicated that sample sizes in ecological studies often have met that requirement. However, individual group sample sizes frequently were very unequal, and checks of assumptions usually were not reported. The authors recommend that ecologists obtain group sample sizes that are at least three times as large as the number of variables measured.
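A small sketch of the loading-stability question the simulation addresses: refit a linear discriminant on bootstrap resamples of a synthetic two-group data set and inspect the spread of the coefficients.

```python
# Sketch: sampling stability of linear discriminant loadings assessed by
# refitting on bootstrap resamples of a synthetic two-group data set.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(8)
p, n_per_group = 4, 60                      # ~3x as many samples as variables
g1 = rng.normal(loc=0.0, size=(n_per_group, p))
g2 = rng.normal(loc=[0.8, 0.4, 0.0, 0.0], size=(n_per_group, p))
X = np.vstack([g1, g2])
y = np.array([0] * n_per_group + [1] * n_per_group)

coefs = []
for _ in range(500):
    idx = rng.integers(0, len(y), len(y))   # bootstrap resample
    if len(np.unique(y[idx])) < 2:
        continue                            # skip degenerate resamples
    lda = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
    coefs.append(lda.coef_.ravel())
coefs = np.array(coefs)
print("loading means:", np.round(coefs.mean(0), 2))
print("loading SDs:  ", np.round(coefs.std(0), 2))
```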
Gaussian process based intelligent sampling for measuring nano-structure surfaces
NASA Astrophysics Data System (ADS)
Sun, L. J.; Ren, M. J.; Yin, Y. H.
2016-09-01
Nanotechnology is the science and engineering of manipulating matter at the nano scale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for the manipulation and quality control of nanotechnology. However, many measuring instruments, for instance scanning probe microscopes, are limited to relatively small areas of hundreds of micrometers and have very low efficiency. Therefore, intelligent sampling strategies are required to improve the scanning efficiency when measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method uses Gaussian process based Bayesian regression as a mathematical foundation to represent the surface geometry, and the posterior estimate of the Gaussian process is computed by combining the prior probability distribution with the maximum likelihood function. Each sampling point is then adaptively selected as the candidate position most likely to lie outside the required tolerance zone, and is inserted to update the model iteratively. Both simulations on the nominal surface and on the manufactured surface have been conducted for nano-structure surfaces to verify the validity of the proposed method. The results imply that the proposed method significantly improves measurement efficiency when measuring large-area structured surfaces.
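A minimal one-dimensional sketch of the selection rule described above, under assumed kernel and tolerance settings: fit a Gaussian process to the points measured so far and next measure the candidate most likely to lie outside the tolerance zone.

```python
# Minimal 1-D sketch of GP-based adaptive sampling: measure next the candidate
# location most likely to fall outside the tolerance zone, given the current GP
# posterior. Surface, tolerance, and kernel settings are illustrative only.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def surface(x):                               # "true" structured surface (toy)
    return 0.05 * np.sin(8 * x) + 0.02 * x

tol = 0.03                                    # +/- tolerance zone about nominal 0
candidates = np.linspace(0, 1, 200)[:, None]
x_meas = np.array([[0.0], [0.5], [1.0]])      # initial measurements
y_meas = surface(x_meas.ravel())

for _ in range(10):                           # ten adaptive measurements
    gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(1e-6),
                                  optimizer=None, normalize_y=True)
    gp.fit(x_meas, y_meas)
    mu, sd = gp.predict(candidates, return_std=True)
    # Probability of lying outside [-tol, +tol] under the GP posterior.
    p_out = norm.cdf(-(tol - mu) / sd) + norm.cdf(-(tol + mu) / sd)
    measured = np.isclose(candidates, x_meas.T).any(axis=1)
    p_out[measured] = -1.0                    # do not re-select measured points
    nxt = candidates[np.argmax(p_out)]
    x_meas = np.vstack([x_meas, nxt])
    y_meas = np.append(y_meas, surface(nxt[0]))

print("sampled locations:", np.round(x_meas.ravel(), 3))
```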
Preservation of samples for dissolved mercury
Hamlin, S.N.
1989-01-01
Water samples for dissolved mercury require special treatment because of the high chemical mobility and volatility of this element. Widespread use of mercury and its compounds has provided many avenues for contamination of water. Two laboratory tests were done to determine the relative permeabilities of glass and plastic sample bottles to mercury vapor. Plastic containers were confirmed to be quite permeable to airborne mercury; glass containers were virtually impermeable. Methods of preservation include the use of various combinations of acids, oxidants, and complexing agents. The combination of nitric acid and potassium dichromate successfully preserved mercury in a large variety of concentrations and dissolved forms. Because this acid-oxidant preservative acts as a sink for airborne mercury and plastic containers are permeable to mercury vapor, glass bottles are preferred for sample collection. To maintain a healthy work environment and minimize the potential for contamination of water samples, mercury and its compounds are isolated from the atmosphere while in storage. Concurrently, a program to monitor environmental levels of mercury vapor in areas of potential contamination is needed to define the extent of mercury contamination and to assess the effectiveness of mercury clean-up procedures.
Water-Oriented Recreational Demand and Projections: Calculations for Western Lake Superior.
1978-06-15
Documented boats: every 36th registration. The stratified sample gives maximum discrimination to boats over 20 feet in length. These are the boats that are...large boats. Because documented boats, by their nature, are rarely trailered, it was believed that less discrimination was required. Therefore, a smaller
Evaluation of PLS, LS-SVM, and LWR for quantitative spectroscopic analysis of soils
USDA-ARS?s Scientific Manuscript database
Soil testing requires the analysis of large numbers of samples in the laboratory, which is often time consuming and expensive. Mid-infrared spectroscopy (mid-IR) and near-infrared spectroscopy (NIRS) are fast, non-destructive, and inexpensive analytical methods that have been used for soil analysis, in l...
USDA-ARS?s Scientific Manuscript database
Viruses are the cause of many waterborne diseases contracted from fecal-contaminated waters. Collection of samples that properly represent virus concentrations throughout relevant hydrologic periods has historically been difficult due to the large water volume collection and filtration required for ...
USDA-ARS?s Scientific Manuscript database
Determining seed quality parameters is an integral part of cultivar improvement and germplasm screening. However, quality tests are often time consuming, seed destructive, and can require large seed samples. This study describes the development of near-infrared spectroscopy (NIRS) calibrations to mea...
USDA-ARS's Scientific Manuscript database
Membrane bioreactors (MBR), used for wastewater treatment in Ohio and elsewhere in the United States, have pore sizes large enough to theoretically reduce concentrations of protozoa and bacteria, but not viruses. Sampling for viruses in wastewater is seldom done and not required. Instead, the bac...
Delaying Developmental Mathematics: The Characteristics and Costs
ERIC Educational Resources Information Center
Johnson, Marianne; Kuennen, Eric
2004-01-01
This paper investigates which students delay taking a required developmental mathematics course and the impact of delay on student performance in introductory microeconomics. Analysis of a sample of 1462 students at a large Midwestern university revealed that, although developmental-level mathematics students did not reach the same level of…
Jensen, Pamela C.; Purcell, Maureen K.; Morado, J. Frank; Eckert, Ginny L.
2012-01-01
The Alaskan red king crab (Paralithodes camtschaticus) fishery was once one of the most economically important single-species fisheries in the world, but is currently depressed. This fishery would benefit from improved stock assessment capabilities. Larval crab distribution is patchy temporally and spatially, requiring extensive sampling efforts to locate and track larval dispersal. Large-scale plankton surveys are generally cost prohibitive because of the effort required for collection and the time and taxonomic expertise required to sort samples to identify plankton individually via light microscopy. Here, we report the development of primers and a dual-labeled probe for use in a DNA-based real-time polymerase chain reaction assay targeting the red king crab mitochondrial gene, cytochrome oxidase I, for the detection of red king crab larval DNA in plankton samples. The assay allows identification of plankton samples containing crab larval DNA and provides an estimate of DNA copy number present in a sample without sorting the plankton sample visually. The assay was tested on DNA extracted from whole red king crab larvae and plankton samples seeded with whole larvae, and it detected DNA copies equivalent to 1/10,000th of a larva and 1 crab larva/5 mL sieved plankton, respectively. The real-time polymerase chain reaction assay can be used to screen plankton samples for larvae in a fraction of the time required for traditional microscopic methods, which offers advantages for stock assessment methodologies for red king crab as well as a rapid and reliable method to assess abundance of red king crab larvae as needed to improve the understanding of life history and population processes, including larval population dynamics.
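As a brief aside on how a real-time PCR readout becomes a copy-number estimate: a standard curve relates the quantification cycle (Cq) to log10 copies. The sketch below uses invented slope and intercept values, not the published assay's calibration.

```python
# Sketch: converting real-time PCR Cq values to DNA copy number via a standard curve.
# Slope and intercept are invented placeholders, not the published assay's calibration.

slope, intercept = -3.32, 38.0        # assumed Cq = slope*log10(copies) + intercept

def copies_from_cq(cq: float) -> float:
    return 10 ** ((cq - intercept) / slope)

efficiency = 10 ** (-1 / slope) - 1   # amplification efficiency implied by the slope
print(f"implied efficiency: {efficiency:.0%}")
for cq in (20.0, 25.0, 30.0):
    print(f"Cq {cq:4.1f} -> ~{copies_from_cq(cq):,.0f} target copies per reaction")
```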
Design of a practical model-observer-based image quality assessment method for CT imaging systems
NASA Astrophysics Data System (ADS)
Tseng, Hsin-Wu; Fan, Jiahua; Cao, Guangzhi; Kupinski, Matthew A.; Sainath, Paavana
2014-03-01
The channelized Hotelling observer (CHO) is a powerful method for quantitative image quality evaluations of CT systems and their image reconstruction algorithms. It has recently been used to validate the dose reduction capability of iterative image-reconstruction algorithms implemented on CT imaging systems. The use of the CHO for routine and frequent system evaluations is desirable both for quality assurance evaluations and for further system optimization. The use of channels substantially reduces the amount of data required to achieve accurate estimates of observer performance. However, the number of scans required is still large even with the use of channels. This work explores different data reduction schemes and designs a new approach that requires only a few CT scans of a phantom. For this work, the leave-one-out likelihood (LOOL) method developed by Hoffbeck and Landgrebe is studied as an efficient method of estimating the covariance matrices needed to compute CHO performance. Three different kinds of approaches are included in the study: a conventional CHO estimation technique with a large sample size, a conventional technique with fewer samples, and the new LOOL-based approach with fewer samples. The mean value and standard deviation of the area under the ROC curve (AUC) were estimated by a shuffle method. Both simulation and real data results indicate that an 80% data reduction can be achieved without loss of accuracy. This data reduction makes the proposed approach a practical tool for routine CT system assessment.
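For readers unfamiliar with the CHO, the core computation is small once images have been reduced to channel outputs: build the Hotelling template from the channel covariance and the class-mean difference, then report a detectability index and, under a Gaussian assumption, the AUC. The sketch below uses synthetic channel outputs and does not reproduce the paper's LOOL covariance estimator or shuffle-based error bars.

```python
# Sketch: channelized Hotelling observer performance from channel outputs.
# Synthetic data replace real CT scans; LOOL and the shuffle method are not shown.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_channels, n_scans = 10, 200
signal = 0.3 * np.ones(n_channels)                 # assumed mean channel response to the signal
absent = rng.normal(size=(n_scans, n_channels))    # signal-absent channel outputs
present = rng.normal(size=(n_scans, n_channels)) + signal

mean_diff = present.mean(axis=0) - absent.mean(axis=0)
pooled_cov = 0.5 * (np.cov(absent, rowvar=False) + np.cov(present, rowvar=False))
template = np.linalg.solve(pooled_cov, mean_diff)  # Hotelling template w = S^-1 (mu1 - mu0)

# Detectability index; under the Gaussian assumption AUC = Phi(d_a / sqrt(2)).
d_a = np.sqrt(mean_diff @ template)
print(f"d_a = {d_a:.2f}, AUC = {norm.cdf(d_a / np.sqrt(2)):.3f}")
```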
Laboratory theory and methods for sediment analysis
Guy, Harold P.
1969-01-01
The diverse character of fluvial sediments makes the choice of laboratory analysis somewhat arbitrary and the processing of sediment samples difficult. This report presents some theories and methods used by the Water Resources Division for analysis of fluvial sediments to determine the concentration of suspended-sediment samples and the particle-size distribution of both suspended-sediment and bed-material samples. Other analyses related to these determinations may include particle shape, mineral content, and specific gravity, the organic matter and dissolved solids of samples, and the specific weight of soils. The merits and techniques of both the evaporation and filtration methods for concentration analysis are discussed. Methods used for particle-size analysis of suspended-sediment samples may include the sieve pipet, the VA tube-pipet, or the BW tube-VA tube depending on the equipment available, the concentration and approximate size of sediment in the sample, and the settling medium used. The choice of method for most bed-material samples is usually limited to procedures suitable for sand or to some type of visual analysis for large sizes. Several tested forms are presented to help insure a well-ordered system in the laboratory to handle the samples, to help determine the kind of analysis required for each, to conduct the required processes, and to assist in the required computations. Use of the manual should further 'standardize' methods of fluvial sediment analysis among the many laboratories and thereby help to achieve uniformity and precision of the data.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, V.
1988-01-01
A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and the design constraints on mean square responses with respect to the control law design variables are presented.
Corstjens, Paul L A M; Hoekstra, Pytsje T; de Dood, Claudia J; van Dam, Govert J
2017-11-01
Methodological applications of the high sensitivity genus-specific Schistosoma CAA strip test, allowing detection of single worm active infections (ultimate sensitivity), are discussed for efficient utilization in sample pooling strategies. Besides relevant cost reduction, pooling of samples rather than individual testing can provide valuable data for large scale mapping, surveillance, and monitoring. The laboratory-based CAA strip test utilizes luminescent quantitative up-converting phosphor (UCP) reporter particles and a rapid user-friendly lateral flow (LF) assay format. The test includes a sample preparation step that permits virtually unlimited sample concentration with urine, reaching ultimate sensitivity (single worm detection) at 100% specificity. This facilitates testing large urine pools from many individuals with minimal loss of sensitivity and specificity. The test determines the average CAA level of the individuals in the pool thus indicating overall worm burden and prevalence. When test results are required at the individual level, smaller pools need to be analysed, with the pool size based on expected prevalence or, when unknown, on the average CAA level of a larger group; CAA-negative pools do not require individual test results and thus reduce the number of tests. Straightforward pooling strategies indicate that at sub-population level the CAA strip test is an efficient assay for general mapping, identification of hotspots, determination of stratified infection levels, and accurate monitoring of mass drug administrations (MDA). At the individual level, the number of tests can be reduced, e.g. in low-endemicity settings, because the pool size can be increased as prevalence decreases. At the sub-population level, average CAA concentrations determined in urine pools can be an appropriate measure indicating worm burden. Pooling strategies allowing this type of large scale testing are feasible with the various CAA strip test formats and do not affect sensitivity and specificity. This allows cost-efficient stratified testing and monitoring of worm burden at the sub-population level, ideally for large-scale surveillance generating hard data for performance of MDA programs and strategic planning when moving towards transmission-stop and elimination.
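The economics behind "pool size can be increased as prevalence decreases" can be illustrated with the classic Dorfman two-stage pooling calculation; this is a generic stand-in for the CAA pooling strategies, with illustrative prevalence values.

```python
# Sketch: expected tests per person under Dorfman two-stage pooling,
# showing that the optimal pool size grows as prevalence falls.
# A generic pooling calculation, not the CAA strip test protocol itself.

def tests_per_person(prevalence: float, pool_size: int) -> float:
    # One pooled test per k people, plus k individual tests if the pool is positive.
    p_pool_positive = 1.0 - (1.0 - prevalence) ** pool_size
    return 1.0 / pool_size + p_pool_positive

for prevalence in (0.20, 0.05, 0.01):
    best_k = min(range(2, 51), key=lambda k: tests_per_person(prevalence, k))
    print(f"prevalence {prevalence:4.0%}: best pool size {best_k:2d}, "
          f"{tests_per_person(prevalence, best_k):.2f} tests/person")
```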
NASA Astrophysics Data System (ADS)
Hanasaki, Itsuo; Kawano, Satoyuki
2013-11-01
Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility.
NASA Technical Reports Server (NTRS)
Pitts, D. E.; Badhwar, G.
1980-01-01
The development of agricultural remote sensing systems requires knowledge of agricultural field size distributions so that the sensors, sampling frames, image interpretation schemes, registration systems, and classification systems can be properly designed. Malila et al. (1976) studied the field size distribution for wheat and all other crops in two Kansas LACIE (Large Area Crop Inventory Experiment) intensive test sites using ground observations of the crops and measurements of their field areas based on current year rectified aerial photomaps. The field area and size distributions reported in the present investigation are derived from a representative subset of a stratified random sample of LACIE sample segments. In contrast to previous work, the obtained results indicate that most field-size distributions are not log-normally distributed. The most common field size observed in this study was 10 acres for most crops studied.
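A small sketch of how one might test the log-normality claim: apply a normality test to log-transformed field areas. The areas below are synthetic (the LACIE segment data are not reproduced), built as a mixture with many fields near 10 acres, which typically departs from a single log-normal.

```python
# Sketch: checking log-normality of field sizes by testing normality of log(area).
# Field areas are synthetic stand-ins for the LACIE segment measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
field_acres = np.concatenate([
    rng.normal(10.0, 1.5, size=300).clip(min=1.0),   # many ~10-acre fields
    rng.lognormal(mean=4.0, sigma=0.5, size=50),     # a tail of larger fields
])

log_area = np.log(field_acres)
stat, p_value = stats.normaltest(log_area)           # D'Agostino-Pearson test on log(area)
verdict = "consistent with" if p_value > 0.05 else "rejects"
print(f"p = {p_value:.4f} -> {verdict} log-normality")
```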
Pore water sampling in acid sulfate soils: a new peeper method.
Johnston, Scott G; Burton, Edward D; Keene, Annabelle F; Bush, Richard T; Sullivan, Leigh A; Isaacson, Lloyd
2009-01-01
This study describes the design, deployment, and application of a modified equilibration dialysis device (peeper) optimized for sampling pore waters in acid sulfate soils (ASS). The modified design overcomes the limitations of traditional-style peepers when sampling firm ASS materials over relatively large depth intervals. The new peeper device uses removable, individual cells of 25 mL volume housed in a 1.5 m long rigid, high-density polyethylene rod. The rigid housing structure allows the device to be inserted directly into relatively firm soils without requiring a supporting frame. The use of removable cells eliminates the need for a large glove-box after peeper retrieval, thus simplifying physical handling. Removable cells are easily maintained in an inert atmosphere during sample processing and the 25-mL sample volume is sufficient for undertaking multiple analyses. A field evaluation of equilibration times indicates that 32 to 38 d of deployment was necessary. Overall, the modified method is simple and effective and well suited to acquisition and processing of redox-sensitive pore water profiles >1 m deep in acid sulfate soil or any other firm wetland soils.
Mixing problems in using indicators for measuring regional blood flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushioda, E.; Nuwayhid, B.; Tabsh, K.
A basic requirement for using indicators for measuring blood flow is adequate mixing of the indicator with blood prior to sampling the site. This requirement has been met by depositing the indicator in the heart and sampling from an artery. Recently, authors have injected microspheres into veins and sampled from venous sites. The present studies were designed to investigate the mixing problems in sheep and rabbits by means of Cardio-Green and labeled microspheres. The indicators were injected at different points in the circulatory system, and blood was sampled at different levels of the venous and arterial systems. Results show the following: (a) When an indicator of small molecular size (Cardio-Green) is allowed to pass through the heart chambers, adequate mixing is achieved, yielding accurate and reproducible results. (b) When any indicator (Cardio-Green or microspheres) is injected into veins, and sampling is done at any point in the venous system, mixing is inadequate, yielding flow results which are inconsistent and erratic. (c) For an indicator of large molecular size (microspheres), injecting into the left side of the heart and sampling from arterial sites yield accurate and reproducible results regardless of whether blood is sampled continuously or intermittently.
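The flow calculation that adequate mixing makes valid is the Stewart-Hamilton relation, Q = dose / ∫C(t)dt. A minimal numeric sketch follows; the concentration-time curve and dose are invented, not the sheep or rabbit data.

```python
# Sketch: Stewart-Hamilton indicator-dilution calculation, Q = dose / integral(C dt).
# The concentration-time curve is synthetic; adequate mixing at the sampling site
# is the assumption that justifies applying this relation.
import numpy as np

dose_mg = 0.5                                         # injected indicator mass (assumed)
t = np.linspace(0.0, 60.0, 601)                       # seconds
concentration = 0.8 * (t / 8.0) * np.exp(-t / 8.0)    # mg/L, gamma-like washout curve

area = np.trapz(concentration, t)                     # mg*s/L
flow_l_per_s = dose_mg / area
print(f"Estimated flow = {flow_l_per_s * 60:.2f} L/min")
```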
NASA Astrophysics Data System (ADS)
OBrien, R. E.; Ridley, K. J.; Canagaratna, M. R.; Croteau, P.; Budisulistiorini, S. H.; Cui, T.; Green, H. S.; Surratt, J. D.; Jayne, J. T.; Kroll, J. H.
2016-12-01
A thorough understanding of the sources, evolution, and budgets of atmospheric organic aerosol requires widespread measurements of the amount and chemical composition of atmospheric organic carbon in the condensed phase (within particles and water droplets). Collecting such datasets requires substantial spatial and temporal (long term) coverage, which can be challenging when relying on online measurements by state-of-the-art research-grade instrumentation (such as those used in atmospheric chemistry field studies). Instead, samples are routinely collected using relatively low-cost techniques, such as aerosol filters, for offline analysis of their chemical composition. However, measurements made by online and offline instruments can be fundamentally different, leading to disparities between data from field studies and those from more routine monitoring. To better connect these two approaches, and take advantage of the benefits of each, we have developed a method to introduce collected samples into online aerosol instruments using nebulization. Because nebulizers typically require tens to hundreds of milliliters of solution, limiting this technique to large samples, we developed a new, ultrasonic micro-nebulizer that requires only small volumes (tens of microliters) of sample for chemical analysis. The nebulized (resuspended) sample is then sent into a high-resolution Aerosol Mass Spectrometer (AMS), a widely-used instrument that provides key information on the chemical composition of aerosol particulate matter (elemental ratios, carbon oxidation state, etc.), measurements that are not typically made for collected atmospheric samples. Here, we compare AMS data collected using standard on-line techniques with our offline analysis, demonstrating the utility of this new technique to aerosol filter samples. We then apply this approach to organic aerosol filter samples collected in remote regions, as well as rainwater samples from across the US. This data provides information on the sample composition and changes in key chemical characteristics across locations and seasons.
A simple microviscometric approach based on Brownian motion tracking.
Hnyluchová, Zuzana; Bjalončíková, Petra; Karas, Pavel; Mravec, Filip; Halasová, Tereza; Pekař, Miloslav; Kubala, Lukáš; Víteček, Jan
2015-02-01
Viscosity-an integral property of a liquid-is traditionally determined by mechanical instruments. The most pronounced disadvantage of such an approach is the requirement of a large sample volume, which poses a serious obstacle, particularly in biology and biophysics when working with limited samples. Scaling down the required volume by means of microviscometry based on tracking the Brownian motion of particles can provide a reasonable alternative. In this paper, we report a simple microviscometric approach which can be conducted with common laboratory equipment. The core of this approach consists in a freely available standalone script to process particle trajectory data based on a Newtonian model. In our study, this setup allowed the sample to be scaled down to 10 μl. The utility of the approach was demonstrated using model solutions of glycerine, hyaluronate, and mouse blood plasma. Therefore, this microviscometric approach based on a newly developed freely available script can be suggested for determination of the viscosity of small biological samples (e.g., body fluids).
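The physics behind such particle-tracking microviscometry is the Stokes-Einstein relation: fit the diffusion coefficient D from the mean squared displacement of 2-D trajectories, then η = k_B T / (6πrD). The sketch below simulates its own trajectories and assumes the particle radius, temperature, and frame rate; it is not the authors' freely available script.

```python
# Sketch: estimate viscosity from 2-D Brownian trajectories via MSD and Stokes-Einstein.
# Trajectories are simulated; particle radius, temperature, and frame rate are assumptions.
import numpy as np

k_B, T = 1.380649e-23, 298.15          # J/K, K
radius = 0.5e-6                        # m, tracer particle radius (assumed)
dt = 0.02                              # s per video frame (assumed)
eta_true = 1.0e-3                      # Pa*s (water-like), used only to simulate data

D_true = k_B * T / (6 * np.pi * eta_true * radius)
rng = np.random.default_rng(3)
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(50, 500, 2))  # 50 particles, 500 frames
paths = np.cumsum(steps, axis=1)

# One-frame MSD; for 2-D diffusion MSD(dt) = 4*D*dt.
msd = np.mean(np.sum(np.diff(paths, axis=1) ** 2, axis=2))
D_est = msd / (4 * dt)
eta_est = k_B * T / (6 * np.pi * radius * D_est)
print(f"estimated viscosity = {eta_est * 1e3:.2f} mPa*s")
```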
Cultural influences on personality.
Triandis, Harry C; Suh, Eunkook M
2002-01-01
Ecologies shape cultures; cultures influence the development of personalities. There are both universal and culture-specific aspects of variation in personality. Some culture-specific aspects correspond to cultural syndromes such as complexity, tightness, individualism, and collectivism. A large body of literature suggests that the Big Five personality factors emerge in various cultures. However, caution is required in arguing for such universality, because most studies have not included emic (culture-specific) traits and have not studied samples that are extremely different in culture from Western samples.
NASA Astrophysics Data System (ADS)
Sun, Chengjun; Jiang, Fenghua; Gao, Wei; Li, Xiaoyun; Yu, Yanzhen; Yin, Xiaofei; Wang, Yong; Ding, Haibing
2017-01-01
Detection of sulfur-oxidizing bacteria has largely been dependent on targeted gene sequencing technology or traditional cell cultivation, which usually takes from days to months to carry out. This clearly does not meet the requirements of analysis for time-sensitive samples and/or complicated environmental samples. Since energy-dispersive X-ray spectrometry (EDS) can be used to simultaneously detect multiple elements in a sample, including sulfur, with minimal sample treatment, this technology was applied to detect sulfur-oxidizing bacteria using their high sulfur content within the cell. This article describes the application of scanning electron microscopy imaging coupled with EDS mapping for quick detection of sulfur oxidizers in contaminated environmental water samples, with minimal sample handling. Scanning electron microscopy imaging revealed the existence of dense granules within the bacterial cells, while EDS identified large amounts of sulfur within them. EDS mapping localized the sulfur to these granules. Subsequent 16S rRNA gene sequencing showed that the bacteria detected in our samples belonged to the genus Chromatium, which are sulfur oxidizers. Thus, EDS mapping made it possible to identify sulfur oxidizers in environmental samples based on localized sulfur within their cells, within a short time (within 24 h of sampling). This technique has wide ranging applications for detection of sulfur bacteria in environmental water samples.
Wei, Binnian; Feng, June; Rehmani, Imran J; Miller, Sharyn; McGuffey, James E; Blount, Benjamin C; Wang, Lanqing
2014-09-25
Most sample preparation methods characteristically involve intensive and repetitive labor, which is inefficient when preparing large numbers of samples from population-scale studies. This study presents a robotic system designed to meet the sampling requirements for large population-scale studies. Using this robotic system, we developed and validated a method to simultaneously measure urinary anatabine, anabasine, nicotine and seven major nicotine metabolites: 4-Hydroxy-4-(3-pyridyl)butanoic acid, cotinine-N-oxide, nicotine-N-oxide, trans-3'-hydroxycotinine, norcotinine, cotinine and nornicotine. We analyzed robotically prepared samples using high-performance liquid chromatography (HPLC) coupled with triple quadrupole mass spectrometry in positive electrospray ionization mode using scheduled multiple reaction monitoring (sMRM) with a total runtime of 8.5 min. The optimized procedure was able to deliver linear analyte responses over a broad range of concentrations. Responses of urine-based calibrators delivered coefficients of determination (R²) of >0.995. Sample preparation recovery was generally higher than 80%. The robotic system was able to prepare four 96-well plates (384 urine samples) per day, and the overall method afforded an accuracy range of 92-115% and an imprecision of <15.0% on average. The validation results demonstrate that the method is accurate, precise, sensitive, robust, and most significantly labor-saving for sample preparation, making it efficient and practical for routine measurements in large population-scale studies such as the National Health and Nutrition Examination Survey (NHANES) and the Population Assessment of Tobacco and Health (PATH) study. Published by Elsevier B.V.
Orton, Dennis J.; Doucette, Alan A.
2013-01-01
Identification of biomarkers capable of differentiating between pathophysiological states of an individual is a laudable goal in the field of proteomics. Protein biomarker discovery generally employs high throughput sample characterization by mass spectrometry (MS), being capable of identifying and quantifying thousands of proteins per sample. While MS-based technologies have rapidly matured, the identification of truly informative biomarkers remains elusive, with only a handful of clinically applicable tests stemming from proteomic workflows. This underlying lack of progress is attributed in large part to erroneous experimental design, biased sample handling, as well as improper statistical analysis of the resulting data. This review will discuss in detail the importance of experimental design and provide some insight into the overall workflow required for biomarker identification experiments. Proper balance between the degree of biological vs. technical replication is required for confident biomarker identification. PMID:28250400
Determination of Aromatic Ring Number Using Multi-Channel Deep UV Native Fluorescence
NASA Technical Reports Server (NTRS)
Bhartia, R.; McDonald, G. D.; Salas, E.; Conrad, P.
2004-01-01
The in situ detection of organic material on an extraterrestrial surface requires both effective means of searching a relatively large surface area or volume for possible organic carbon, and a more specific means of identifying and quantifying compounds in indicated samples. Fluorescence spectroscopy fits the first requirement well, as it can be carried out rapidly, with minimal or no physical contact with the sample, and with sensitivity unmatched by any other organic analytical technique. Aromatic organic compounds with known fluorescence signatures have been identified in several extraterrestrial samples, including carbonaceous chondrites, interplanetary dust particles, and Martian meteorites. The compound distributions vary among these sources, however, with clear differences in relative abundances by number of aromatic rings and by degree of alkylation. This relative abundance information, therefore, can be used to infer the source of organic material detected on a planetary surface.
CVD diamond detectors for ionizing radiation
NASA Astrophysics Data System (ADS)
Friedl, M.; Adam, W.; Bauer, C.; Berdermann, E.; Bergonzo, P.; Bogani, F.; Borchi, E.; Brambilla, A.; Bruzzi, M.; Colledani, C.; Conway, J.; Dabrowski, W.; Delpierre, P.; Deneuville, A.; Dulinski, W.; van Eijk, B.; Fallou, A.; Fizzotti, F.; Foulon, F.; Gan, K. K.; Gheeraert, E.; Grigoriev, E.; Hallewell, G.; Hall-Wilton, R.; Han, S.; Hartjes, F.; Hrubec, J.; Husson, D.; Kagan, H.; Kania, D.; Kaplon, J.; Karl, C.; Kass, R.; Knöpfle, K. T.; Krammer, M.; Logiudice, A.; Lu, R.; Manfredi, P. F.; Manfredotti, C.; Marshall, R. D.; Meier, D.; Mishina, M.; Oh, A.; Pan, L. S.; Palmieri, V. G.; Pernegger, H.; Pernicka, M.; Peitz, A.; Pirollo, S.; Polesello, P.; Pretzl, K.; Re, V.; Riester, J. L.; Roe, S.; Roff, D.; Rudge, A.; Schnetzer, S.; Sciortino, S.; Speziali, V.; Stelzer, H.; Stone, R.; Tapper, R. J.; Tesarek, R.; Thomson, G. B.; Trawick, M.; Trischuk, W.; Vittone, E.; Walsh, A. M.; Wedenig, R.; Weilhammer, P.; Ziock, H.; Zoeller, M.; RD42 Collaboration
1999-10-01
In future HEP accelerators, such as the LHC (CERN), detectors and electronics in the vertex region of the experiments will suffer from extreme radiation. Thus radiation hardness is required for both detectors and electronics to survive in this harsh environment. CVD diamond, which is investigated by the RD42 Collaboration at CERN, can meet these requirements. Samples of up to 2×4 cm² have been grown and refined for better charge collection properties, which are measured with a β source or in a testbeam. A large number of diamond samples has been irradiated with hadrons to fluences of up to 5×10¹⁵ cm⁻² to study the effects of radiation. Both strip and pixel detectors were prepared in various geometries. Samples with strip metallization have been tested with both slow and fast readout electronics, and the first diamond pixel detector proved fully functional with LHC electronics.
A high-throughput core sampling device for the evaluation of maize stalk composition
2012-01-01
Background A major challenge in the identification and development of superior feedstocks for the production of second generation biofuels is the rapid assessment of biomass composition in a large number of samples. Currently, highly accurate and precise robotic analysis systems are available for the evaluation of biomass composition, on a large number of samples, with a variety of pretreatments. However, the lack of an inexpensive and high-throughput process for large scale sampling of biomass resources is still an important limiting factor. Our goal was to develop a simple mechanical maize stalk core sampling device that can be utilized to collect uniform samples of a dimension compatible with robotic processing and analysis, while allowing the collection of hundreds to thousands of samples per day. Results We have developed a core sampling device (CSD) to collect maize stalk samples compatible with robotic processing and analysis. The CSD facilitates the collection of thousands of uniform tissue cores consistent with high-throughput analysis required for breeding, genetics, and production studies. With a single CSD operated by one person with minimal training, more than 1,000 biomass samples were obtained in an eight-hour period. One of the main advantages of using cores is the high level of homogeneity of the samples obtained and the minimal opportunity for sample contamination. In addition, the samples obtained with the CSD can be placed directly into a bath of ice, dry ice, or liquid nitrogen maintaining the composition of the biomass sample for relatively long periods of time. Conclusions The CSD has been demonstrated to successfully produce homogeneous stalk core samples in a repeatable manner with a throughput substantially superior to the currently available sampling methods. Given the variety of maize developmental stages and the diversity of stalk diameter evaluated, it is expected that the CSD will have utility for other bioenergy crops as well. PMID:22548834
Evaluating noninvasive genetic sampling techniques to estimate large carnivore abundance.
Mumma, Matthew A; Zieminski, Chris; Fuller, Todd K; Mahoney, Shane P; Waits, Lisette P
2015-09-01
Monitoring large carnivores is difficult because of intrinsically low densities and can be dangerous if physical capture is required. Noninvasive genetic sampling (NGS) is a safe and cost-effective alternative to physical capture. We evaluated the utility of two NGS methods (scat detection dogs and hair sampling) to obtain genetic samples for abundance estimation of coyotes, black bears and Canada lynx in three areas of Newfoundland, Canada. We calculated abundance estimates using program capwire, compared sampling costs, and the cost/sample for each method relative to species and study site, and performed simulations to determine the sampling intensity necessary to achieve abundance estimates with coefficients of variation (CV) of <10%. Scat sampling was effective for both coyotes and bears and hair snags effectively sampled bears in two of three study sites. Rub pads were ineffective in sampling coyotes and lynx. The precision of abundance estimates was dependent upon the number of captures/individual. Our simulations suggested that ~3.4 captures/individual will result in a < 10% CV for abundance estimates when populations are small (23-39), but fewer captures/individual may be sufficient for larger populations. We found scat sampling was more cost-effective for sampling multiple species, but suggest that hair sampling may be less expensive at study sites with limited road access for bears. Given the dependence of sampling scheme on species and study site, the optimal sampling scheme is likely to be study-specific warranting pilot studies in most circumstances. © 2015 John Wiley & Sons Ltd.
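The link between captures per individual and the precision of an abundance estimate can be illustrated with a simple simulation. The sketch below uses a Chapman (Lincoln-Petersen) estimator on two halves of the sampling occasions as a stand-in for the capwire models used in the paper; population size and detection probability are invented.

```python
# Sketch: relate captures/individual to the CV of an abundance estimate.
# A Chapman two-sample estimator is a simple stand-in for capwire.
import numpy as np

rng = np.random.default_rng(4)
true_n, p_detect, occasions = 30, 0.12, 10   # assumed population size and per-occasion detection

estimates, capture_rates = [], []
for _ in range(2000):
    histories = rng.random((true_n, occasions)) < p_detect
    first, second = histories[:, :5].any(axis=1), histories[:, 5:].any(axis=1)
    n1, n2, m = first.sum(), second.sum(), (first & second).sum()
    estimates.append((n1 + 1) * (n2 + 1) / (m + 1) - 1)           # Chapman estimator
    caught = histories.any(axis=1)
    capture_rates.append(histories.sum() / max(caught.sum(), 1))  # captures per detected individual

estimates = np.array(estimates)
print(f"mean captures/individual = {np.mean(capture_rates):.2f}")
print(f"abundance CV = {estimates.std() / estimates.mean():.2%}")
```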
NASA Astrophysics Data System (ADS)
Hamelin, Elizabeth I.; Blake, Thomas A.; Perez, Jonas W.; Crow, Brian S.; Shaner, Rebecca L.; Coleman, Rebecca M.; Johnson, Rudolph C.
2016-05-01
Public health response to large scale chemical emergencies presents logistical challenges for sample collection, transport, and analysis. Diagnostic methods used to identify and determine exposure to chemical warfare agents, toxins, and poisons traditionally involve blood collection by phlebotomists, cold transport of biomedical samples, and costly sample preparation techniques. Use of dried blood spots, which consist of dried blood on an FDA-approved substrate, can increase analyte stability, decrease infection hazard for those handling samples, greatly reduce the cost of shipping/storing samples by removing the need for refrigeration and cold chain transportation, and be self-prepared by potentially exposed individuals using a simple finger prick and blood spot compatible paper. Our laboratory has developed clinical assays to detect human exposures to nerve agents through the analysis of specific protein adducts and metabolites, for which a simple extraction from a dried blood spot is sufficient for removing matrix interferents and attaining sensitivities on par with traditional sampling methods. The use of dried blood spots can bridge the gap between the laboratory and the field allowing for large scale sample collection with minimal impact on hospital resources while maintaining sensitivity, specificity, traceability, and quality requirements for both clinical and forensic applications.
da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C
2009-05-30
Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
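The back-of-envelope reasoning behind needing on the order of 1000 sampling sites for a rare synapse class is binomial: the relative standard error of an estimated proportion is sqrt((1-p)/(np)). The 0.2% figure echoes the abstract; the target precision below is an illustrative assumption.

```python
# Sketch: sampling effort needed to estimate a rare proportion (e.g., labeled synapses).
# Binomial approximation; the 0.2% figure echoes the abstract, other numbers are illustrative.
import math

def relative_se(p: float, n: int) -> float:
    # Relative standard error of a binomial proportion estimate.
    return math.sqrt((1.0 - p) / (n * p))

p_rare = 0.002                       # ~0.2% of synapses belong to the labeled pathway
for n_counted in (1_000, 10_000, 100_000):
    print(f"n = {n_counted:7d}: relative SE ~ {relative_se(p_rare, n_counted):.0%}")

# Sample size for a target relative SE:
target = 0.20
n_needed = (1.0 - p_rare) / (p_rare * target ** 2)
print(f"~{n_needed:,.0f} synapses must be examined for a {target:.0%} relative SE")
```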
Extreme Quantum Memory Advantage for Rare-Event Sampling
NASA Astrophysics Data System (ADS)
Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.
2018-02-01
We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.
Instrumentation for Measurement of Gas Permeability of Polymeric Membranes
NASA Technical Reports Server (NTRS)
Upchurch, Billy T.; Wood, George M.; Brown, Kenneth G.; Burns, Karen S.
1993-01-01
A mass spectrometric 'Dynamic Delta' method for the measurement of gas permeability of polymeric membranes has been developed. The method is universally applicable for measurement of the permeability of any gas through polymeric membrane materials. The usual large sample size of more than 100 square centimeters required for other methods is not necessary for this new method which requires a size less than one square centimeter. The new method should fulfill requirements and find applicability for industrial materials such as food packaging, contact lenses and other commercial materials where gas permeability or permselectivity properties are important.
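For context on the quantity being measured, a gas permeability coefficient is typically computed from the steady-state permeate flux, membrane thickness, exposed area, and pressure difference. The sketch below uses invented numbers (not the 'Dynamic Delta' method's calibration) and the common "barrer" unit convention.

```python
# Sketch: gas permeability coefficient from steady-state flux through a membrane.
# Geometry, pressures, and flux are invented; units follow the common barrer convention
# (1 barrer = 1e-10 cm^3(STP)*cm / (cm^2 * s * cmHg)).

flux_cm3_stp_per_s = 2.0e-6     # measured permeate flow at STP (assumed)
thickness_cm = 0.010            # membrane thickness (assumed)
area_cm2 = 0.8                  # exposed membrane area, under 1 cm^2 as in the method above
delta_p_cmHg = 76.0             # pressure difference across the membrane (assumed)

permeability = flux_cm3_stp_per_s * thickness_cm / (area_cm2 * delta_p_cmHg)
print(f"permeability ~ {permeability / 1e-10:.1f} barrer")
```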
Cooperative Robotics and the Search for Extraterrestrial Life
NASA Technical Reports Server (NTRS)
Lupisella, M. L.
2000-01-01
If we think tenuous abodes of life may be hiding in remote extraterrestrial environmental niches, and if we want to assess the biological status of a given locale or entire planet before sending humans (perhaps because of contamination concerns or other motivations) then we face the challenge of robotically exploring a large space efficiently and in enough detail to have confidence in our assessment of the biological status of the environment in question. On our present schedule of perhaps two or so missions per opportunity, we will likely need a different exploratory approach than singular stationary landers or singular rover missions or sample return, because there appear to be fundamental limitations in these mission profiles to obtain the many samples we will likely need if we want to have confidence in assessing the biological status of an environment in which life could be hiding in remote environmental niches. Singular rover missions can potentially accommodate sampling over a fairly large area, but are still limited by range and can be a single point of failure. More importantly, such mission profiles have limited payload capabilities which are unlikely to meet the demanding requirements of life-detection. Sample return has the advantage of allowing sophisticated analysis of the sample, but also has the severe limitations associated with only being able to bring back a few samples. This presentation will suggest two cooperative robotic approaches for exploration that have the potential to overcome these difficulties and facilitate efficient and thorough life-detecting exploration of a large space. Given the two premises stated above, it appears at least two fundamental challenges have to be met simultaneously: (1) coverage of a large space and (2) bringing to bear a sophisticated suite of detection and experimental payloads on any specific location in order to address a major challenge in looking for extraterrestrial life: namely, executing a wide variety of detection scenarios and in situ experiments in order to gather the required data for a confident assessment that life has been detected and to, more generally, cover a wide range of extraterrestrial life possibilities. Cooperative robotics lends itself to this kind of problem because cooperation among the combined capabilities of a variety of simple single function agents can give rise to fairly complex task execution such as the search for and detection of extraterrestrial life.
Simulation of Wind Profile Perturbations for Launch Vehicle Design
NASA Technical Reports Server (NTRS)
Adelfang, S. I.
2004-01-01
Ideally, a statistically representative sample of measured high-resolution wind profiles with wavelengths as small as tens of meters is required in design studies to establish aerodynamic load indicator dispersions and vehicle control system capability. At most potential launch sites, high-resolution wind profiles may not exist. Representative samples of Rawinsonde wind profiles to altitudes of 30 km are more likely to be available from the extensive network of measurement sites established for routine sampling in support of weather observing and forecasting activity. Such a sample, large enough to be statistically representative of relatively large wavelength perturbations, would be inadequate for launch vehicle design assessments because the Rawinsonde system accurately measures wind perturbations with wavelengths no smaller than 2000 m (1000 m altitude increment). The Kennedy Space Center (KSC) Jimsphere wind profiles (150/month and seasonal 2 and 3.5-hr pairs) are the only adequate samples of high-resolution profiles (approx. 150 to 300 m effective resolution, but over-sampled at 25 m intervals) that have been used extensively for launch vehicle design assessments. Therefore, a simulation process has been developed for enhancement of measured low-resolution Rawinsonde profiles that would be applicable in preliminary launch vehicle design studies at launch sites other than KSC.
Rohde, Alexander; Hammerl, Jens Andre; Appel, Bernd; Dieckmann, Ralf; Al Dahouk, Sascha
2015-01-01
Efficient preparation of food samples, comprising sampling and homogenization, for microbiological testing is an essential, yet largely neglected, component of foodstuff control. Salmonella enterica spiked chicken breasts were used as a surface contamination model whereas salami and meat paste acted as models of inner-matrix contamination. A systematic comparison of different homogenization approaches, namely, stomaching, sonication, and milling by FastPrep-24 or SpeedMill, revealed that for surface contamination a broad range of sample pretreatment steps is applicable and loss of culturability due to the homogenization procedure is marginal. In contrast, for inner-matrix contamination long treatments up to 8 min are required and only FastPrep-24 as a large-volume milling device produced consistently good recovery rates. In addition, sampling of different regions of the spiked sausages showed that pathogens are not necessarily homogenously distributed throughout the entire matrix. Instead, in meat paste the core region contained considerably more pathogens compared to the rim, whereas in the salamis the distribution was more even with an increased concentration within the intermediate region of the sausages. Our results indicate that sampling and homogenization as integral parts of food microbiology and monitoring deserve more attention to further improve food safety. PMID:26539462
NASA Technical Reports Server (NTRS)
Alexander, D. W.
1992-01-01
The Hubble space telescope (HST) solar array was designed to meet specific output power requirements after 2 years in low-Earth orbit, and to remain operational for 5 years. The array, therefore, had to withstand 30,000 thermal cycles between approximately +100 and -100 C. The ability of the array to meet this requirement was evaluated by thermal cycle testing, in vacuum, two 128-cell solar cell modules that exactly duplicated the flight HST solar array design. Also, the ability of the flight array to survive an emergency deployment during the dark (cold) portion of an orbit was evaluated by performing a cold-roll test using one module.
Toward accelerating landslide mapping with interactive machine learning techniques
NASA Astrophysics Data System (ADS)
Stumpf, André; Lachiche, Nicolas; Malet, Jean-Philippe; Kerle, Norman; Puissant, Anne
2013-04-01
Despite important advances in the development of more automated methods for landslide mapping from optical remote sensing images, the elaboration of inventory maps after major triggering events still remains a tedious task. Image classification with expert-defined rules typically still requires significant manual labour for the elaboration and adaptation of rule sets for each particular case. Machine learning algorithms, on the contrary, have the ability to learn and identify complex image patterns from labelled examples but may require relatively large amounts of training data. In order to reduce the amount of required training data, active learning has evolved as a key concept to guide the sampling for applications such as document classification, genetics and remote sensing. The general underlying idea of most active learning approaches is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and/or the data structure to iteratively select the most valuable samples that should be labelled by the user and added to the training set. With relatively few queries and labelled samples, an active learning strategy should ideally yield at least the same accuracy as an equivalent classifier trained with many randomly selected samples. Our study was dedicated to the development of an active learning approach for landslide mapping from VHR remote sensing images with special consideration of the spatial distribution of the samples. The developed approach is a region-based query heuristic that guides the user's attention towards a few compact spatial batches rather than distributed points, resulting in time savings of 50% and more compared to standard active learning techniques. The approach was tested with multi-temporal and multi-sensor satellite images capturing recent large scale triggering events in Brazil and China and demonstrated balanced user's and producer's accuracies between 74% and 80%. The assessment also included an experimental evaluation of the uncertainties of manual mappings from multiple experts and demonstrated strong relationships between the uncertainty of the experts and the machine learning model.
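The general active-learning loop described above can be sketched in a few lines: train on a small labeled set, score the unlabeled pool by uncertainty, and query the most uncertain sample. This is plain per-sample uncertainty sampling on synthetic data; the paper's region-based batch heuristic would replace the argmax selection step.

```python
# Sketch: pool-based active learning with uncertainty sampling (least confident).
# Synthetic data and a generic classifier; not the study's region-based batch queries.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
labeled = list(rng.choice(len(X), size=20, replace=False))      # small initial training set
pool = [i for i in range(len(X)) if i not in set(labeled)]

model = LogisticRegression(max_iter=1000)
for _ in range(50):                                             # 50 simulated user queries
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)                       # least-confident score
    query = pool.pop(int(np.argmax(uncertainty)))
    labeled.append(query)                                       # "oracle" provides y[query]

print(f"accuracy on remaining pool: {model.score(X[pool], y[pool]):.3f}")
```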
Towards a routine application of Top-Down approaches for label-free discovery workflows.
Schmit, Pierre-Olivier; Vialaret, Jerome; Wessels, Hans J C T; van Gool, Alain J; Lehmann, Sylvain; Gabelle, Audrey; Wood, Jason; Bern, Marshall; Paape, Rainer; Suckau, Detlev; Kruppa, Gary; Hirtz, Christophe
2018-03-20
Thanks to proteomics investigations, our vision of the role of different protein isoforms in the pathophysiology of diseases has largely evolved. The idea that protein biomarkers like tau, amyloid peptides, ApoE, cystatin, or neurogranin are represented in body fluids as single species is obviously over-simplified, as most proteins are present in different isoforms and subjected to numerous processing and post-translational modifications. Measuring the intact mass of proteins by MS has the advantage of providing information on the presence and relative amount of the different proteoforms. Such Top-Down approaches typically require a high degree of sample pre-fractionation to allow the MS system to deliver optimal performance in terms of dynamic range, mass accuracy and resolution. In clinical studies, however, the requirements for pre-analytical robustness and sample sizes large enough for statistical power restrict the routine use of a high degree of sample pre-fractionation. In this study, we have investigated the capacities of current-generation Ultra-High Resolution Q-Tof systems to deal with high complexity intact protein samples and have evaluated the approach on a cohort of patients suffering from neurodegenerative disease. Statistical analysis has shown that several proteoforms can be used to distinguish Alzheimer disease patients from patients suffering from other neurodegenerative diseases. Top-down approaches have an extremely high biological relevance, especially when it comes to biomarker discovery, but the necessary pre-fractionation constraints are not easily compatible with the robustness requirements and the size of clinical sample cohorts. We have demonstrated that intact protein profiling studies could be run on UHR-Q-ToF with limited pre-fractionation. The proteoforms that have been identified as candidate biomarkers in the proof-of-concept study are derived from proteins known to play a role in the pathophysiological process of Alzheimer disease. Copyright © 2017 Elsevier B.V. All rights reserved.
Effect of sampling rate and record length on the determination of stability and control derivatives
NASA Technical Reports Server (NTRS)
Brenner, M. J.; Iliff, K. W.; Whitman, R. K.
1978-01-01
Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there were considerable reductions in sampling rate and/or record length. Small amplitude pulse maneuvers showed greater degradation of the derivative estimates than large amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a method of lessening the total computation time required without greatly degrading the quality of the estimates.
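A toy illustration of the sampling-rate question (not the aircraft maximum likelihood method): fit the parameters of a noisy decaying oscillation at the full rate and at decimated rates, and compare the recovered values. The signal model, noise level, and rates are all invented.

```python
# Sketch: effect of sampling-rate reduction on a fitted dynamic parameter.
# A toy decaying-oscillation fit stands in for the maximum likelihood
# stability-and-control derivative estimation described above.
import numpy as np
from scipy.optimize import curve_fit

def response(t, damping, freq):
    return np.exp(-damping * t) * np.cos(2 * np.pi * freq * t)

rng = np.random.default_rng(6)
t_full = np.arange(0.0, 10.0, 0.02)                 # 50 samples/s
y_full = response(t_full, 0.4, 1.2) + rng.normal(scale=0.05, size=t_full.size)

for decimate in (1, 5, 10):                         # 50, 10, and 5 samples/s
    t, y = t_full[::decimate], y_full[::decimate]
    (damping, freq), _ = curve_fit(response, t, y, p0=(0.3, 1.0))
    print(f"{50 // decimate:2d} samples/s -> damping {damping:.3f}, frequency {freq:.3f} Hz")
```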
Fostering Resilience in Beginning Special Education Teachers
ERIC Educational Resources Information Center
Belknap, Bridget M.
2012-01-01
This qualitative study identified perceptions of risk and resilience in four different teaching roles of first-year, secondary special education teachers in three school districts in a large metropolitan area. The study sample consisted of nine women in their first year of teaching who were also completing the requirements of a master's…
The Flame Spectrometric Determination of Calcium in Fruit Juice by Standard Addition.
ERIC Educational Resources Information Center
Strohl, Arthur N.
1985-01-01
Provides procedures to measure the calcium concentration in fruit juice by atomic absorption. Fruit juice is used because: (1) it is an important consumer product; (2) large samples are available; and (3) calcium exists in fruit juice at concentrations that do not require excessive dilution or preconcentration prior to measurement. (JN)
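The arithmetic of the standard-addition method is worth spelling out: regress the signal on added concentration and take the magnitude of the x-intercept (intercept/slope) as the analyte concentration in the spiked aliquot. The absorbance values below are invented for illustration.

```python
# Sketch: standard-addition calculation for calcium in juice.
# Signal values are invented; the unknown concentration equals the magnitude
# of the x-intercept of the signal vs. added-concentration line.
import numpy as np

added_ppm = np.array([0.0, 2.0, 4.0, 6.0, 8.0])          # Ca added to equal aliquots
absorbance = np.array([0.120, 0.181, 0.243, 0.305, 0.363])

slope, intercept = np.polyfit(added_ppm, absorbance, 1)
concentration = intercept / slope                         # ppm Ca in the diluted aliquot
print(f"Ca in aliquot ~ {concentration:.2f} ppm (multiply by any dilution factor)")
```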
ERIC Educational Resources Information Center
Zablotsky, Benjamin; Colpe, Lisa J.; Pringle, Beverly A.; Kogan, Michael D.; Rice, Catherine; Blumberg, Stephen J.
2017-01-01
Children with autism spectrum disorder (ASD) require substantial support to address the core symptoms of ASD and co-occurring behavioral/developmental conditions. This study explores the early diagnostic experiences of school-aged children with ASD using survey data from a large probability-based national sample. Multivariate linear regressions…
Characterizing dispersal patterns in a threatened seabird with limited genetic structure
Laurie A. Hall; Per J. Palsboll; Steven R. Beissinger; James T. Harvey; Martine Berube; Martin G. Raphael; Kim Nelson; Richard T. Golightly; Laura McFarlane-Tranquilla; Scott H. Newman; M. Zachariah Peery
2009-01-01
Genetic assignment methods provide an appealing approach for characterizing dispersal patterns on ecological time scales, but require sufficient genetic differentiation to accurately identify migrants and a large enough sample size of migrants to, for example, compare dispersal between sexes or age classes. We demonstrate that assignment methods can be rigorously used...
How to Measure the Onset of Babbling Reliably?
ERIC Educational Resources Information Center
Molemans, Inge; van den Berg, Renate; van Severen, Lieve; Gillis, Steven
2012-01-01
Various measures for identifying the onset of babbling have been proposed in the literature, but a formal definition of the exact procedure and a thorough validation of the sample size required for reliably establishing babbling onset is lacking. In this paper the reliability of five commonly used measures is assessed using a large longitudinal…
Student Thoughts and Perceptions on Curriculum Reform
ERIC Educational Resources Information Center
VanderJagt, Douglas D.
2013-01-01
The purpose of this qualitative case study was to examine how students experience and respond to Michigan's increased graduation requirements. The study was conducted in a large, suburban high school that instituted a change to a trimester system in response to the state mandate. A criterion-based sample of 16 students, both college bound and…
A comparison of cosmological models using time delay lenses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Jun-Jie; Wu, Xue-Feng; Melia, Fulvio, E-mail: jjwei@pmo.ac.cn, E-mail: xfwu@pmo.ac.cn, E-mail: fmelia@email.arizona.edu
2014-06-20
The use of time-delay gravitational lenses to examine the cosmological expansion introduces a new standard ruler with which to test theoretical models. The sample suitable for this kind of work now includes 12 lens systems, which have thus far been used solely for optimizing the parameters of ΛCDM. In this paper, we broaden the base of support for this new, important cosmic probe by using these observations to carry out a one-on-one comparison between competing models. The currently available sample indicates a likelihood of ∼70%-80% that the R_h = ct universe is the correct cosmology versus ∼20%-30% for the standard model. This possibly interesting result reinforces the need to greatly expand the sample of time-delay lenses, e.g., with the successful implementation of the Dark Energy Survey, the VST ATLAS survey, and the Large Synoptic Survey Telescope. In anticipation of a greatly expanded catalog of time-delay lenses identified with these surveys, we have produced synthetic samples to estimate how large they would have to be in order to rule out either model at a ∼99.7% confidence level. We find that if the real cosmology is ΛCDM, a sample of ∼150 time-delay lenses would be sufficient to rule out R_h = ct at this level of accuracy, while ∼1000 time-delay lenses would be required to rule out ΛCDM if the real universe is instead R_h = ct. This difference in required sample size reflects the greater number of free parameters available to fit the data with ΛCDM.
Vitamin D receptor gene and osteoporosis - author's response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Looney, J.E.; Yoon, Hyun Koo; Fischer, M.
1996-04-01
We appreciate the comments of Dr. Nguyen et al. about our recent study, but we disagree with their suggestion that the lack of an association between low bone density and the BB VDR genotype, which we reported, is an artifact generated by the small sample size. Furthermore, our results are consistent with similar conclusions reached by a number of other investigators, as recently reported by Peacock. Peacock states "Taken as a whole, the results of studies outlined ... indicate that VDR alleles, cannot account for the major part of the heritable component of bone density as indicated by Morrison et al.". The majority of the 17 studies cited in this editorial could not confirm an association between the VDR genotype and the bone phenotype. Surely one cannot criticize this combined work as representing an artifact because of a too small sample size. We do not dispute the suggestion by Nguyen et al. that large sample sizes are required to analyze small biological effects. This is evident in both Peacock's summary and in their own bone density studies. We did not design our study with a larger sample size because, based on the work of Morrison et al., we had hypothesized a large biological effect; large sample sizes are only needed for small biological effects. 4 refs.
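The point that small biological effects demand large samples follows directly from a standard power calculation. The sketch below uses the usual normal-approximation formula for a two-sample comparison at 80% power and alpha = 0.05; the effect sizes are illustrative, not a re-analysis of the VDR data.

```python
# Sketch: per-group sample size for a two-sample comparison at 80% power, alpha = 0.05.
# Shows why small biological effects (small Cohen's d) demand large samples.
from scipy.stats import norm

alpha, power = 0.05, 0.80
z_alpha, z_beta = norm.ppf(1 - alpha / 2), norm.ppf(power)

for cohens_d in (0.8, 0.5, 0.2, 0.1):
    n_per_group = 2 * ((z_alpha + z_beta) / cohens_d) ** 2
    print(f"effect size d = {cohens_d:3.1f}: ~{n_per_group:6.0f} subjects per group")
```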
A sampling design framework for monitoring secretive marshbirds
Johnson, D.H.; Gibbs, J.P.; Herzog, M.; Lor, S.; Niemuth, N.D.; Ribic, C.A.; Seamans, M.; Shaffer, T.L.; Shriver, W.G.; Stehman, S.V.; Thompson, W.L.
2009-01-01
A framework for a sampling plan for monitoring marshbird populations in the contiguous 48 states is proposed here. The sampling universe is the breeding habitat (i.e. wetlands) potentially used by marshbirds. Selection protocols would be implemented within each of large geographical strata, such as Bird Conservation Regions. Site selection will be done using a two-stage cluster sample. Primary sampling units (PSUs) would be land areas, such as legal townships, and would be selected by a procedure such as systematic sampling. Secondary sampling units (SSUs) will be wetlands or portions of wetlands in the PSUs. SSUs will be selected by a randomized spatially balanced procedure. For analysis, the use of a variety of methods as a means of increasing confidence in conclusions that may be reached is encouraged. Additional effort will be required to work out details and implement the plan.
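The two-stage cluster selection described above can be prototyped in a few lines. The sketch below is only a minimal illustration, not the proposed monitoring protocol: the PSU and SSU identifiers, the use of a random-start systematic draw for PSUs, and simple random selection of SSUs (in place of a spatially balanced design) are all assumptions.

```python
import random

def systematic_sample(units, n):
    """Select n units with a random-start systematic sample."""
    step = len(units) / n
    start = random.uniform(0, step)
    return [units[int(start + i * step)] for i in range(n)]

def two_stage_cluster_sample(psus, ssus_by_psu, n_psu, n_ssu):
    """Stage 1: systematic sample of PSUs (e.g., townships).
    Stage 2: random sample of SSUs (e.g., wetlands) within each selected PSU.
    A production design would replace stage 2 with a spatially balanced draw."""
    selected_psus = systematic_sample(sorted(psus), n_psu)
    sample = {}
    for psu in selected_psus:
        ssus = ssus_by_psu[psu]
        sample[psu] = random.sample(ssus, min(n_ssu, len(ssus)))
    return sample

# Hypothetical example: 200 townships, each containing a handful of wetlands.
psus = [f"T{i:03d}" for i in range(200)]
ssus_by_psu = {p: [f"{p}-W{j}" for j in range(random.randint(2, 8))] for p in psus}
print(two_stage_cluster_sample(psus, ssus_by_psu, n_psu=10, n_ssu=3))
```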
NASA Astrophysics Data System (ADS)
Hviding, Raphael E.; Hickox, Ryan C.; Hainline, Kevin N.; Carroll, Christopher M.; DiPompeo, Michael A.; Yan, Wei; Jones, Mackenzie L.
2018-02-01
We present the results of an optical spectroscopic survey of 46 heavily obscured quasar candidates. Objects are selected using their mid-infrared (mid-IR) colours and magnitudes from the Wide-Field Infrared Survey Explorer (WISE) and their optical magnitudes from the Sloan Digital Sky Survey. Candidate Active Galactic Nuclei (AGNs) are selected to have mid-IR colours indicative of quasar activity and lie in a region of mid-IR colour space outside previously published X-ray based selection regions. We obtain optical spectra for our sample using the Robert Stobie Spectrograph on the Southern African Large Telescope. 30 objects (65 per cent) have identifiable emission lines, allowing for the determination of spectroscopic redshifts. Other than one object at z ˜ 2.6, candidates have moderate redshifts ranging from z = 0.1 to 0.8 with a median of 0.3. 21 (70 per cent) of our objects with identified redshift (46 per cent of the whole sample) are identified as AGNs through common optical diagnostics. We model the spectral energy distributions of our sample and find that all require a strong AGN component, with an average intrinsic AGN fraction at 8 μm of 0.91. Additionally, the fits require large extinction coefficients with an average E(B - V)AGN = 17.8 (average A(V)AGN = 53.4). By focusing on the area outside traditional mid-IR photometric cuts, we are able to capture and characterize a population of deeply buried quasars that were previously unattainable through X-ray surveys alone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurley, D.F.; Whitehouse, J.M.
A dedicated low-flow groundwater sample collection system was designed for implementation in a post-closure ACL monitoring program at the Yaworski Lagoon NPL site in Canterbury, Connecticut. The system includes dedicated bladder pumps with intake ports located in the screened interval of the monitoring wells. This sampling technique was implemented in the spring of 1993. The system was designed to simultaneously obtain samples directly from the screened interval of nested wells in three distinct water bearing zones. Sample collection is begun upon stabilization of field parameters. Other than line volume, no prior purging of the well is required. It was found that dedicated low-flow sampling from the screened interval provides a method of representative sample collection without the bias of suspended solids introduced by traditional techniques of pumping and bailing. Analytical data indicate that measured chemical constituents are representative of groundwater migrating through the screened interval. Upon implementation of the low-flow monitoring system, analytical results exhibited a decrease in concentrations of some organic compounds and metals. The system has also proven to be a cost effective alternative to pumping and bailing, which generate large volumes of purge water requiring containment and disposal.
GenoCore: A simple and fast algorithm for core subset selection from large genotype datasets.
Jeong, Seongmun; Kim, Jae-Yoon; Jeong, Soon-Chun; Kang, Sung-Taeg; Moon, Jung-Kyung; Kim, Namshin
2017-01-01
Selecting core subsets from plant genotype datasets is important for enhancing cost-effectiveness and shortening the time required for analyses of genome-wide association studies (GWAS), genomics-assisted breeding of crop species, etc. Recently, a large number of genetic markers (>100,000 single nucleotide polymorphisms) have been identified from high-density single nucleotide polymorphism (SNP) arrays and next-generation sequencing (NGS) data. However, there is no software available for picking out an efficient and consistent core subset from such a huge dataset. It is necessary to develop software that can extract genetically important samples in a population with coherence. We here present a new program, GenoCore, which can quickly and efficiently find the core subset representing the entire population. We introduce simple measures of coverage and diversity scores, which reflect genotype errors and genetic variations, and can help to select a sample rapidly and accurately for a crop genotype dataset. Comparisons of our method to other core collection software using example datasets are performed to validate the performance according to genetic distance, diversity, coverage, required system resources, and the number of selected samples. GenoCore selects the smallest, most consistent, and most representative core collection from all samples, using less memory with more efficient scores, and shows greater genetic coverage compared to the other software tested. GenoCore was written in the R language, and can be accessed online with an example dataset and test results at https://github.com/lovemun/Genocore.
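GenoCore itself is the R package linked above; purely as a rough illustration of selecting samples by a coverage score, the Python sketch below greedily adds the sample that contributes the most not-yet-represented (marker, genotype-call) combinations until a target coverage is reached. The scoring details are assumptions and differ from GenoCore's published measures.

```python
import numpy as np

def greedy_core_subset(genotypes, target_coverage=0.99):
    """genotypes: (n_samples, n_markers) array of integer SNP calls (e.g., 0/1/2).
    Greedily add the sample contributing the most (marker, call) combinations
    not yet represented in the core, until the target coverage is reached."""
    n_samples, n_markers = genotypes.shape
    universe = {(m, g) for m in range(n_markers) for g in np.unique(genotypes[:, m])}
    covered, core = set(), []
    remaining = set(range(n_samples))
    while remaining and len(covered) / len(universe) < target_coverage:
        best = max(remaining,
                   key=lambda s: len({(m, genotypes[s, m]) for m in range(n_markers)} - covered))
        core.append(best)
        remaining.discard(best)
        covered |= {(m, genotypes[best, m]) for m in range(n_markers)}
    return core, len(covered) / len(universe)

# Hypothetical toy dataset: 50 samples x 200 SNPs.
rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=(50, 200))
core, cov = greedy_core_subset(geno)
print(f"core size: {len(core)}, coverage: {cov:.3f}")
```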
NASA Astrophysics Data System (ADS)
Valdez, T.; Chao, Y.; Davis, R. E.; Jones, J.
2012-12-01
This talk will describe a new self-powered profiling float that can perform fast sampling over the upper ocean for long durations in support of a mesoscale ocean observing system in the Western North Pacific. The current state-of-the-art profiling floats can provide several hundred profiles of the upper ocean every ten days. Quantifying the role of the upper ocean in modulating the development of typhoons requires at least an order-of-magnitude reduction in the sampling interval. With today's profiling float and battery technology, a fast sampling of one day or even a few hours will reduce the typical lifetime of profiling floats from years to months. Interactions between the ocean and typhoons often involve mesoscale eddies and fronts, which require a dense array of floats to reveal the 3-dimensional structure. To measure the mesoscale ocean over a large area like the Western North Pacific therefore requires a new technology that enables fast sampling and long duration at the same time. Harvesting the ocean renewable energy associated with the vertical temperature differentials has the potential to power profiling floats with fast sampling over long durations. Results from the development and deployment of a prototype self-powered profiling float (known as SOLO-TREC) will be presented. With eight-hour sampling in the upper 500 meters, the upper ocean temperature and salinity reveal pronounced high frequency variations. Plans to use the SOLO-TREC technology in support of a dense array of fast sampling profiling floats in the Western North Pacific will be discussed.
The production of multiprotein complexes in insect cells using the baculovirus expression system.
Abdulrahman, Wassim; Radu, Laura; Garzoni, Frederic; Kolesnikova, Olga; Gupta, Kapil; Osz-Papai, Judit; Berger, Imre; Poterszman, Arnaud
2015-01-01
The production of a homogeneous protein sample in sufficient quantities is not only an essential prerequisite for structural investigations but also a rate-limiting step for many functional studies. In the cell, a large fraction of eukaryotic proteins exists as large multicomponent assemblies with many subunits, which act in concert to catalyze specific activities. Many of these complexes cannot be obtained from endogenous source material, so recombinant expression and reconstitution are then required to overcome this bottleneck. This chapter describes current strategies and protocols for the efficient production of multiprotein complexes in large quantities and of high quality, using the baculovirus/insect cell expression system.
Micrometeoroid and Lunar Secondary Ejecta Flux Measurements: Comparison of Three Acoustic Systems
NASA Technical Reports Server (NTRS)
Corsaro, R. D.; Giovane, F.; Liou, Jer-Chyi; Burtchell, M.; Pisacane, V.; Lagakos, N.; Williams, E.; Stansbery, E.
2010-01-01
This report examines the inherent capability of three large-area acoustic sensor systems and their applicability for micrometeoroid (MM) and lunar secondary ejecta (SE) detection and characterization for future lunar exploration activities. Discussion is limited to instruments that can be fabricated and deployed with low resource requirements. Previously deployed impact detection probes typically have instrumented capture areas less than 0.2 square meters. Since the particle flux decreases rapidly with increased particle size, such small-area sensors rarely encounter particles in the size range above 50 microns, and even their sampling of the population above 10 microns is typically limited. Characterizing the sparse dust population in the size range above 50 microns requires a very large-area capture instrument. However, it is also important that such an instrument simultaneously measures the population of the smaller particles, so as to provide a complete instantaneous snapshot of the population. For lunar or planetary surface studies, the system constraints are significant. The instrument must be as large as possible to sample the population of the largest MM. This is needed to reliably assess the particle impact risks and to develop cost-effective shielding designs for habitats, astronauts, and critical instruments. The instrument should also have very high sensitivity to measure the flux of small and slow SE particles, as the SE environment is currently poorly characterized and poses a contamination risk to machinery and personnel involved in exploration. Deployment also requires that the instrument add very little additional mass to the spacecraft. Three acoustic systems are being explored for this application.
Wood, Henry M; Belvedere, Ornella; Conway, Caroline; Daly, Catherine; Chalkley, Rebecca; Bickerdike, Melissa; McKinley, Claire; Egan, Phil; Ross, Lisa; Hayward, Bruce; Morgan, Joanne; Davidson, Leslie; MacLennan, Ken; Ong, Thian K; Papagiannopoulos, Kostas; Cook, Ian; Adams, David J; Taylor, Graham R; Rabbitts, Pamela
2010-08-01
The use of next-generation sequencing technologies to produce genomic copy number data has recently been described. Most approaches, however, rely on optimal starting DNA, and are therefore unsuitable for the analysis of formalin-fixed paraffin-embedded (FFPE) samples, which largely precludes the analysis of many tumour series. We have sought to challenge the limits of this technique with regard to quality and quantity of starting material and the depth of sequencing required. We confirm that the technique can be used to interrogate DNA from cell lines, fresh frozen material and FFPE samples to assess copy number variation. We show that as little as 5 ng of DNA is needed to generate a copy number karyogram, and follow this up with data from a series of FFPE biopsies and surgical samples. We have used various levels of sample multiplexing to demonstrate the adjustable resolution of the methodology, depending on the number of samples and available resources. We also demonstrate reproducibility by use of replicate samples and comparison with microarray-based comparative genomic hybridization (aCGH) and digital PCR. This technique can be valuable in both the analysis of routine diagnostic samples and in examining large repositories of fixed archival material.
The cost of large numbers of hypothesis tests on power, effect size and sample size.
Lazzeroni, L C; Ray, A
2012-01-01
Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
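The sample-size ratios quoted above can be reproduced with a standard normal-approximation power calculation under a Bonferroni-style adjustment of the significance level. The sketch below is a back-of-the-envelope check, not the authors' Excel calculator; the two-sample z-test framing and the standardized effect size d are assumptions (the ratios themselves do not depend on d).

```python
from scipy.stats import norm

def n_per_group(alpha, power, d, n_tests=1):
    """Normal-approximation sample size per group for a two-sample z-test
    of standardized effect size d, with a Bonferroni-adjusted alpha."""
    alpha_adj = alpha / n_tests
    z_alpha = norm.ppf(1 - alpha_adj / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) / d) ** 2

d, alpha, power = 0.2, 0.05, 0.80          # hypothetical effect size
n1 = n_per_group(alpha, power, d, n_tests=1)
n10 = n_per_group(alpha, power, d, n_tests=10)
n1e6 = n_per_group(alpha, power, d, n_tests=10**6)
n1e7 = n_per_group(alpha, power, d, n_tests=10**7)
print(f"10 tests vs 1 test:        {n10 / n1 - 1:+.0%}")    # roughly +70%
print(f"10^7 tests vs 10^6 tests:  {n1e7 / n1e6 - 1:+.0%}")  # roughly +13%
```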
Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin
2014-01-01
Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
Baseline study of Oxygen 18 and deuterium in the Roswell, New Mexico, groundwater basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoy, R.N.; Gross, G.W.
The isotopic ratios of deuterium and oxygen 18 were measured in precipitation, surface, and ground water samples from the Roswell Artesian ground water basin in south-central New Mexico. The narrow range of D and 18O indicates mixing effects which are ascribed to one or more of the following factors: long ground water flow paths; large temperature fluctuations that overwhelm the influence of elevation on precipitation; two sources of atmospheric moisture; interaquifer leakage; and recharge from intermittent streams with the flow-length expanding and contracting over large distances. It is concluded that a more precise definition of circulation patterns on the basis of stable isotope differences will require a much greater sampling frequency in both space and time.
A guide to large-scale RNA sample preparation.
Baronti, Lorenzo; Karlsson, Hampus; Marušič, Maja; Petzold, Katja
2018-05-01
RNA is becoming more important as an increasing number of functions, both regulatory and enzymatic, are being discovered on a daily basis. As the RNA boom has just begun, most techniques are still in development and changes occur frequently. To understand RNA functions, revealing the structure of RNA is of utmost importance, which requires sample preparation. We review the latest methods to produce and purify a variety of RNA molecules for different purposes, with the main focus on structural biology and biophysics. We present a guide aimed at identifying the most suitable method for your RNA and your biological question and highlighting the advantages of different methods. Graphical abstract: In this review we present different methods for large-scale production and purification of RNAs for structural and biophysical studies.
Intelligent Detection of Structure from Remote Sensing Images Based on Deep Learning Method
NASA Astrophysics Data System (ADS)
Xin, L.
2018-04-01
Utilizing high-resolution remote sensing images for earth observation has become a common method of land use monitoring. Traditional image interpretation requires substantial human participation, which is inefficient and makes accuracy difficult to guarantee. At present, artificial intelligence methods such as deep learning have a large number of advantages in image recognition. By means of a large set of remote sensing image samples and deep neural network models, we can rapidly identify objects of interest such as buildings. In terms of both efficiency and accuracy, deep learning methods are advantageous. This paper describes research on deep learning methods using a large set of remote sensing image samples and verifies the feasibility of building extraction via experiments.
He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe
2007-01-01
FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. The number of punches that can normally be obtained from a single specimen card is often, however, insufficient for the testing of the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique to perform large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that using the improved technique it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique to be a promising method for performing large-scale SNP genotyping because the FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card-bound DNA produces sufficient material for the determination of thousands of SNP genotypes.
NASA Technical Reports Server (NTRS)
Gentry, D. M.; Amador, E. S.; Cable, M. L.; Cantrell, T.; Chaudry, N.; Cullen, T.; Duca, Z.; Kirby, J.; Jacobsen, M.; McCaig, H.;
2017-01-01
Understanding the sensitivity of biomarker assays to the local physicochemical environment, and the underlying spatial distribution of the target biomarkers in 'homogeneous' environments, can increase mission science return. We have conducted four expeditions to Icelandic Mars analogue sites in which an increasingly refined battery of physicochemical measurements and biomarker assays were performed, staggered with scouting of further sites. Completed expeditions took place in 2012 (location scouting and field assay use testing), 2013 (sampling of two major sites with three assays and observational physicochemical measurements), 2015 (repeat sampling of prior sites and one new site, scouting of new sites, three assays and three instruments), and 2016 (preliminary sampling of new sites with analysis of returned samples). Target sites were geologically recent basaltic lava flows, and sample loci were arranged in hierarchically nested grids at 10 cm, 1 m, 10 m, 100 m, and >1 km order scales, subject to field constraints. Assays were intended to represent a diversity of potential biomarker types (cell counting via nucleic acid staining and fluorescence microscopy, ATP quantification via luciferase luminescence, and relative DNA quantification with simple domain-level primers) rather than a specific mission science target, and were selected to reduce laboratory overhead, require limited consumables, and allow rapid turnaround. All analytical work was performed in situ or in a field laboratory within a day's travel of the field sites unless otherwise noted. We have demonstrated the feasibility of performing ATP quantification and qPCR analysis in a field-based laboratory with single-day turnaround. The ATP assay was generally robust and reliable and required minimal field equipment and training to produce a large amount of useful data. DNA was successfully extracted from all samples, but the serial-batch nature of qPCR significantly limited the number of primers (hence classifications) and replicates that could be run in a single day. Fluorescence microscopy did not prove feasible under the same constraints, primarily due to the large number of person-hours required to view, analyze, and record results from the images; however, this could be mitigated with higher-quality imaging instruments and appropriate image analysis software.
Beamforming using subspace estimation from a diagonally averaged sample covariance.
Quijano, Jorge E; Zurk, Lisa M
2017-08-01
The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
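A minimal numerical sketch of the diagonal-averaging step and the resulting signal-subspace projector is given below; the maximum-entropy extrapolation step described in the abstract is omitted, and the array geometry, source model, and choice of subspace dimension are assumptions made purely for illustration.

```python
import numpy as np

def toeplitz_from_diagonal_average(R):
    """Average the subdiagonals of a sample covariance R (N x N, Hermitian)
    to produce a Toeplitz-constrained covariance estimate."""
    N = R.shape[0]
    first_col = np.array([np.mean(np.diag(R, -k)) for k in range(N)])
    T = np.empty_like(R)
    for i in range(N):
        for j in range(N):
            T[i, j] = first_col[i - j] if i >= j else np.conj(first_col[j - i])
    return T

def signal_subspace_projector(T, n_sources):
    """Projector onto the span of the n_sources dominant eigenvectors."""
    vals, vecs = np.linalg.eigh(T)
    Us = vecs[:, np.argsort(vals)[::-1][:n_sources]]
    return Us @ Us.conj().T

# Hypothetical scenario: uniform line array, 2 far-field plane waves, and a
# covariance estimated from only a few snapshots (snapshot-starved case).
rng = np.random.default_rng(1)
N, snapshots, angles = 32, 8, np.deg2rad([5.0, 12.0])
n = np.arange(N)[:, None]
A = np.exp(1j * np.pi * n * np.sin(angles))          # steering matrix, half-wavelength spacing
S = (rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))) / np.sqrt(2)
noise = 0.3 * (rng.normal(size=(N, snapshots)) + 1j * rng.normal(size=(N, snapshots))) / np.sqrt(2)
X = A @ S + noise
R_hat = X @ X.conj().T / snapshots
P = signal_subspace_projector(toeplitz_from_diagonal_average(R_hat), n_sources=2)
print(P.shape, np.allclose(P, P @ P))                # projector is idempotent
```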
Guérin, Nicolas; Dai, Xiongxin
2014-06-17
Polonium-210 ((210)Po) can be rapidly determined in drinking water and urine samples by alpha spectrometry using copper sulfide (CuS) microprecipitation. For drinking water, Po in 10 mL samples was directly coprecipitated onto the filter for alpha counting without any purification. For urine, 10 mL of sample was heated, oxidized with KBrO3 for a short time (∼5 min), and subsequently centrifuged to remove the suspended organic matter. The CuS microprecipitation was then applied to the supernatant. Large batches of samples can be prepared using this technique with high recoveries (∼85%). The figures of merit of the methods were determined, and the developed methods fulfill the requirements for emergency and routine radioassays. The efficiency and reliability of the procedures were confirmed using spiked samples.
Reassessment of Planetary Protection Requirements for Mars Sample Return Missions
NASA Astrophysics Data System (ADS)
Smith, David; Race, Margaret; Farmer, Jack
In 2008, NASA asked the US National Research Council (NRC) to review the findings of the report, Mars Sample Return: Issues and Recommendations (National Academy Press, 1997), and to update its recommendations in the light of both current understanding of Mars's biolog-ical potential and ongoing improvements in biological, chemical, and physical sample-analysis capabilities and technologies. The committee established to address this request was tasked to pay particular attention to five topics. First, the likelihood that living entities may be included in samples returned from Mars. Second, scientific investigations that should be conducted to reduce uncertainty in the assessment of Mars' biological potential. Third, the possibility of large-scale effects on Earth's environment if any returned entity is released into the environment. Fourth, the status of technological measures that could be taken on a mission to prevent the inadvertent release of a returned sample into Earth's biosphere. Fifth, criteria for intentional sample release, taking note of current and anticipated regulatory frameworks. The paper outlines the recommendations contained in the committee's final report, Planetary Protection Requirements for Mars Sample Return Missions (The National Academies Press, 2009), with particular emphasis placed on the scientific, technical and policy changes since 1997 and indications as to how these changes modify the recommendations contained in the 1997 report.
Analysis of munitions constituents in groundwater using a field-portable GC-MS.
Bednar, A J; Russell, A L; Hayes, C A; Jones, W T; Tackett, P; Splichal, D E; Georgian, T; Parker, L V; Kirgan, R A; MacMillan, D K
2012-05-01
The use of munitions constituents (MCs) at military installations can produce soil and groundwater contamination that requires periodic monitoring even after training or manufacturing activities have ceased. Traditional groundwater monitoring methods require large volumes of aqueous samples (e.g., 2-4 L) to be shipped under chain of custody, to fixed laboratories for analysis. The samples must also be packed on ice and shielded from light to minimize degradation that may occur during transport and storage. The laboratory's turn-around time for sample analysis and reporting can be as long as 45 d. This process hinders the reporting of data to customers in a timely manner; yields data that are not necessarily representative of current site conditions owing to the lag time between sample collection and reporting; and incurs significant shipping costs for samples. The current work compares a field portable Gas Chromatograph-Mass Spectrometer (GC-MS) for analysis of MCs on-site with traditional laboratory-based analysis using High Performance Liquid Chromatography with UV absorption detection. The field method provides near real-time (within ~1 h of sampling) concentrations of MCs in groundwater samples. Mass spectrometry provides reliable confirmation of MCs and a means to identify unknown compounds that are potential false positives for methods with UV and other non-selective detectors. Published by Elsevier Ltd.
Lichen elements as pollution indicators: evaluation of methods for large monitoring programmes
Susan Will-Wolf; Sarah Jovan; Michael C. Amacher
2017-01-01
Lichen element content is a reliable indicator for relative air pollution load in research and monitoring programmes requiring both efficiency and representation of many sites. We tested the value of costly rigorous field and handling protocols for sample element analysis using five lichen species. No relaxation of rigour was supported; four relaxed protocols generated...
ERIC Educational Resources Information Center
Lord, Frederic M.; Stocking, Martha
A general computer program is described that will compute asymptotic standard errors and carry out significance tests for an endless variety of (standard and) nonstandard large-sample statistical problems, without requiring the statistician to derive asymptotic standard error formulas. The program assumes that the observations have a multinormal…
ERIC Educational Resources Information Center
Adams, Troy B.; Colner, Willa
2008-01-01
Few college students meet recommended fruit and vegetable intake requirements, and most receive no information from their institutions about this issue. The avoidable disease burden among students is large, the necessary information infrastructure exists, and "Healthy People 2010" objectives indicate efforts should be taken to increase intake.…
Performance in European Higher Education: A Non-Parametric Production Frontier Approach
ERIC Educational Resources Information Center
Joumady, Othman; Ris, Catherine
2005-01-01
This study examines technical efficiency in European higher education (HE) institutions. To measure efficiency, we consider the capacity of each HE institution, on one hand, to provide competencies to graduates and, on the other hand, to match competencies provided during education to competencies required in the job. We use a large sample of…
NASA Technical Reports Server (NTRS)
Poulton, C. E.
1972-01-01
A multiple sampling technique was developed whereby spacecraft photographs supported by aircraft photographs could be used to quantify plant communities. Large scale (1:600 to 1:2,400) color infrared aerial photographs were required to identify shrub and herbaceous species. These photos were used to successfully estimate a herbaceous standing crop biomass. Microdensitometry was used to discriminate among specific plant communities and individual plant species. Large scale infrared photography was also used to estimate mule deer deaths and population density of northern pocket gophers.
Investigation of orbitofrontal sulcogyral pattern in chronic schizophrenia.
Cropley, Vanessa L; Bartholomeusz, Cali F; Wu, Peter; Wood, Stephen J; Proffitt, Tina; Brewer, Warrick J; Desmond, Patricia M; Velakoulis, Dennis; Pantelis, Christos
2015-11-30
Abnormalities of orbitofrontal cortex (OFC) pattern type distribution have been associated with schizophrenia-spectrum disorders. We investigated OFC pattern type in a large sample of chronic schizophrenia patients and healthy controls. We found an increased frequency of Type II but no difference in Type I or III folding pattern in the schizophrenia group in comparison to controls. Further large studies are required to investigate the diagnostic specificity of altered OFC pattern type and to confirm the distribution of pattern type in the normal population. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Tripathi, Ashish; McNulty, Ian; Shpyrko, Oleg G
2014-01-27
Ptychographic coherent x-ray diffractive imaging is a form of scanning microscopy that does not require optics to image a sample. A series of scanned coherent diffraction patterns recorded from multiple overlapping illuminated regions on the sample are inverted numerically to retrieve its image. The technique recovers the phase, which is lost when the diffraction patterns are detected, by using experimentally known constraints, in this case the measured diffraction intensities and the assumed scan positions on the sample. The spatial resolution of the recovered image of the sample is limited by the angular extent over which the diffraction patterns are recorded and how well these constraints are known. Here, we explore how reconstruction quality degrades with uncertainties in the scan positions. We show experimentally that large errors in the assumed scan positions on the sample can be numerically determined and corrected using conjugate gradient descent methods. We also explore in simulations the limits, based on the signal to noise of the diffraction patterns and amount of overlap between adjacent scan positions, of just how large these errors can be and still be rendered tractable by this method.
A Systematic Evaluation of Blood Serum and Plasma Pre-Analytics for Metabolomics Cohort Studies
Jobard, Elodie; Trédan, Olivier; Postoly, Déborah; André, Fabrice; Martin, Anne-Laure; Elena-Herrmann, Bénédicte; Boyault, Sandrine
2016-01-01
The recent thriving development of biobanks and associated high-throughput phenotyping studies requires the elaboration of large-scale approaches for monitoring biological sample quality and compliance with standard protocols. We present a metabolomic investigation of human blood samples that delineates pitfalls and guidelines for the collection, storage and handling procedures for serum and plasma. A series of eight pre-processing technical parameters is systematically investigated along variable ranges commonly encountered across clinical studies. While metabolic fingerprints, as assessed by nuclear magnetic resonance, are not significantly affected by altered centrifugation parameters or delays between sample pre-processing (blood centrifugation) and storage, our metabolomic investigation highlights that both the delay and storage temperature between blood draw and centrifugation are the primary parameters impacting serum and plasma metabolic profiles. Storing the blood drawn at 4 °C is shown to be a reliable routine to confine variability associated with idle time prior to sample pre-processing. Based on their fine sensitivity to pre-analytical parameters and protocol variations, metabolic fingerprints could be exploited as valuable ways to determine compliance with standard procedures and quality assessment of blood samples within large multi-omic clinical and translational cohort studies. PMID:27929400
Efficient computation of the joint sample frequency spectra for multiple populations.
Kamm, John A; Terhorst, Jonathan; Song, Yun S
2017-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
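The expected joint SFS under an arbitrary demographic model is what momi computes; as a simpler, self-contained illustration of the statistic itself, the sketch below tallies the observed SFS (and a two-population joint SFS) from a 0/1 haplotype matrix. The data layout and the toy random data are assumptions.

```python
import numpy as np

def sfs(haplotypes):
    """haplotypes: (n_haplotypes, n_sites) 0/1 array (1 = derived allele).
    Returns counts of sites with k derived copies, k = 1..n_haplotypes-1."""
    n = haplotypes.shape[0]
    derived = haplotypes.sum(axis=0)
    return np.bincount(derived, minlength=n + 1)[1:n]   # drop monomorphic classes

def joint_sfs(hap_pop1, hap_pop2):
    """Joint SFS for two populations: entry (i, j) counts sites with i derived
    copies in population 1 and j derived copies in population 2."""
    n1, n2 = hap_pop1.shape[0], hap_pop2.shape[0]
    d1, d2 = hap_pop1.sum(axis=0), hap_pop2.sum(axis=0)
    out = np.zeros((n1 + 1, n2 + 1), dtype=int)
    np.add.at(out, (d1, d2), 1)
    return out

# Hypothetical toy data: 10 + 8 haplotypes at 500 sites.
rng = np.random.default_rng(2)
h1 = rng.integers(0, 2, size=(10, 500))
h2 = rng.integers(0, 2, size=(8, 500))
print(sfs(np.vstack([h1, h2])))
print(joint_sfs(h1, h2).shape)
```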
NASA Astrophysics Data System (ADS)
Banet, Matthias T.; Spencer, Mark F.
2017-09-01
Spatial-heterodyne interferometry is a robust solution for deep-turbulence wavefront sensing. With that said, this paper analyzes the focal-plane array sampling requirements for spatial-heterodyne systems operating in the off-axis pupil plane recording geometry. To assess spatial-heterodyne performance, we use a metric referred to as the field-estimated Strehl ratio. We first develop an analytical description of performance with respect to the number of focal-plane array pixels across the Fried coherence diameter and then verify our results with wave-optics simulations. The analysis indicates that at approximately 5 focal-plane array pixels across the Fried coherence diameter, the field-estimated Strehl ratios begin to exceed 0.9, which is indicative of largely diffraction-limited results.
ClustENM: ENM-Based Sampling of Essential Conformational Space at Full Atomic Resolution
Kurkcuoglu, Zeynep; Bahar, Ivet; Doruker, Pemra
2016-01-01
Accurate sampling of conformational space and, in particular, the transitions between functional substates has been a challenge in molecular dynamics (MD) simulations of large biomolecular systems. We developed an Elastic Network Model (ENM)-based computational method, ClustENM, for sampling large conformational changes of biomolecules with various sizes and oligomerization states. ClustENM is an iterative method that combines ENM with energy minimization and clustering steps. It is an unbiased technique, which requires only an initial structure as input and no information about the target conformation. To test the performance of ClustENM, we applied it to six biomolecular systems: adenylate kinase (AK), calmodulin, p38 MAP kinase, HIV-1 reverse transcriptase (RT), triosephosphate isomerase (TIM), and the 70S ribosomal complex. The generated ensembles of conformers determined at atomic resolution show good agreement with experimental data (979 structures resolved by X-ray and/or NMR) and encompass the subspaces covered in independent MD simulations for TIM, p38, and RT. ClustENM emerges as a computationally efficient tool for characterizing the conformational space of large systems at atomic detail, in addition to generating a representative ensemble of conformers that can be advantageously used in simulating substrate/ligand-binding events. PMID:27494296
Evaluation of counting methods for oceanic radium-228
NASA Astrophysics Data System (ADS)
Orr, James C.
1988-07-01
Measurement of open ocean 228Ra is difficult, typically requiring at least 200 L of seawater. The burden of collecting and processing these large-volume samples severely limits the widespread use of this promising tracer. To use smaller-volume samples, a more sensitive means of analysis is required. To seek out new and improved counting method(s), conventional 228Ra counting methods have been compared with some promising techniques which are currently used for other radionuclides. Of the conventional methods, α spectrometry possesses the highest efficiency (3-9%) and lowest background (0.0015 cpm), but it suffers from the need for complex chemical processing after sampling and the need to allow about 1 year for adequate ingrowth of 228Th granddaughter. The other two conventional counting methods measure the short-lived 228Ac daughter while it remains supported by 228Ra, thereby avoiding the complex sample processing and the long delay before counting. The first of these, high-resolution γ spectrometry, offers the simplest processing and an efficiency (4.8%) comparable to α spectrometry; yet its high background (0.16 cpm) and substantial equipment cost (˜30,000) limit its widespread use. The second no-wait method, β-γ coincidence spectrometry, also offers comparable efficiency (5.3%), but it possesses both lower background (0.0054 cpm) and lower initial cost (˜12,000). Three new (i.e., untried for 228Ra) techniques all seem to promise about a fivefold increase in efficiency over conventional methods. By employing liquid scintillation methods, both α spectrometry and β-γ coincidence spectrometry can improve their counter efficiency while retaining low background. The third new 228Ra counting method could be adapted from a technique which measures 224Ra by 220Rn emanation. After allowing for ingrowth and then counting for the 224Ra great-granddaughter, 228Ra could be back calculated, thereby yielding a method with high efficiency, where no sample processing is required. The efficiency and background of each of the three new methods have been estimated and are compared with those of the three methods currently employed to measure oceanic 228Ra. From efficiency and background, the relative figure of merit and the detection limit have been determined for each of the six counters. These data suggest that the new counting methods have the potential to measure most 228Ra samples with just 30 L of seawater, to better than 5% precision. Not only would this reduce the time, effort, and expense involved in sample collection, but 228Ra could then be measured on many small-volume samples (20-30 L) previously collected with only 226Ra in mind. By measuring 228Ra quantitatively on such small-volume samples, three analyses (large-volume 228Ra, large-volume 226Ra, and small-volume 226Ra) could be reduced to one, thereby dramatically improving analytical precision.
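For a rough sense of how efficiency and background translate into a figure of merit and a detection limit, the sketch below uses two commonly quoted conventions, the counting figure of merit E²/B and a Currie-style detection limit for a paired blank; these conventions, the illustrative efficiency values (e.g., the 6% midpoint for α spectrometry), and the two-day count time are assumptions and may differ from the definitions used in the study.

```python
import math

def figure_of_merit(efficiency, background_cpm):
    """Common counting figure of merit: efficiency squared over background rate."""
    return efficiency ** 2 / background_cpm

def currie_detection_limit(background_cpm, count_time_min, efficiency):
    """Currie-style detection limit (paired blank), expressed as a source
    activity in disintegrations per minute."""
    b_counts = background_cpm * count_time_min
    ld_counts = 2.71 + 4.65 * math.sqrt(b_counts)
    return ld_counts / (efficiency * count_time_min)

# Illustrative efficiency/background pairs loosely based on the abstract.
for name, eff, bkg in [("alpha spectrometry", 0.06, 0.0015),
                       ("high-resolution gamma spectrometry", 0.048, 0.16),
                       ("beta-gamma coincidence spectrometry", 0.053, 0.0054)]:
    t = 60 * 24 * 2   # hypothetical 2-day count
    print(f"{name:36s} FOM = {figure_of_merit(eff, bkg):7.3f}  "
          f"L_D = {currie_detection_limit(bkg, t, eff):.4f} dpm")
```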
Revisiting sample size: are big trials the answer?
Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J
2012-07-18
The superiority of the evidence generated in randomized controlled trials over observational data is not conditional on randomization alone. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, or the probability that the trial will detect a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.
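A quick normal-approximation check of how power depends on sample size for a two-arm trial with a binary outcome is sketched below; the event rates and per-arm sizes are hypothetical, chosen only to show how power erodes as the trial shrinks.

```python
from math import sqrt
from scipy.stats import norm

def power_two_proportions(p1, p2, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for proportions."""
    se = sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    z_alpha = norm.ppf(1 - alpha / 2)
    z = abs(p1 - p2) / se
    return norm.cdf(z - z_alpha) + norm.cdf(-z - z_alpha)

# Hypothetical orthopaedic example: reducing a complication rate from 20% to 12%.
for n in (50, 100, 200, 400, 800):
    print(f"n per arm = {n:4d}  power ≈ {power_two_proportions(0.20, 0.12, n):.2f}")
```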
Guerra Valero, Yarmarly C; Wallis, Steven C; Lipman, Jeffrey; Stove, Christophe; Roberts, Jason A; Parker, Suzanne L
2018-03-01
Conventional sampling techniques for clinical pharmacokinetic studies often require the removal of large blood volumes from patients. This can result in a physiological or emotional burden, particularly for neonates or pediatric patients. Antibiotic pharmacokinetic studies are typically performed on healthy adults or general ward patients. These may not account for alterations to a patient's pathophysiology and can lead to suboptimal treatment. Microsampling offers an important opportunity for clinical pharmacokinetic studies in vulnerable patient populations, where smaller sample volumes can be collected. This systematic review provides a description of currently available microsampling techniques and an overview of studies reporting the quantitation and validation of antibiotics using microsampling. A comparison of microsampling to conventional sampling in clinical studies is included.
Gaussian vs. Bessel light-sheets: performance analysis in live large sample imaging
NASA Astrophysics Data System (ADS)
Reidt, Sascha L.; Correia, Ricardo B. C.; Donnachie, Mark; Weijer, Cornelis J.; MacDonald, Michael P.
2017-08-01
Lightsheet fluorescence microscopy (LSFM) has rapidly progressed in the past decade from an emerging technology into an established methodology. This progress has largely been driven by its suitability to developmental biology, where it is able to give excellent spatial-temporal resolution over relatively large fields of view with good contrast and low phototoxicity. In many respects it is superseding confocal microscopy. However, it is no magic bullet and still struggles to image deeply in more highly scattering samples. Many solutions to this challenge have been presented, including, Airy and Bessel illumination, 2-photon operation and deconvolution techniques. In this work, we show a comparison between a simple but effective Gaussian beam illumination and Bessel illumination for imaging in chicken embryos. Whilst Bessel illumination is shown to be of benefit when a greater depth of field is required, it is not possible to see any benefits for imaging into the highly scattering tissue of the chick embryo.
Cyber bullying behaviors among middle and high school students.
Mishna, Faye; Cook, Charlene; Gadalla, Tahany; Daciuk, Joanne; Solomon, Steven
2010-07-01
Little research has been conducted that comprehensively examines cyber bullying with a large and diverse sample. The present study examines the prevalence, impact, and differential experience of cyber bullying among a large and diverse sample of middle and high school students (N = 2,186) from a large urban center. The survey examined technology use, cyber bullying behaviors, and the psychosocial impact of bullying and being bullied. About half (49.5%) of students indicated they had been bullied online and 33.7% indicated they had bullied others online. Most bullying was perpetrated by and to friends and participants generally did not tell anyone about the bullying. Participants reported feeling angry, sad, and depressed after being bullied online. Participants bullied others online because it made them feel as though they were funny, popular, and powerful, although many indicated feeling guilty afterward. Greater attention is required to understand and reduce cyber bullying within children's social worlds and with the support of educators and parents.
Sensemaking in a Value Based Context for Large Scale Complex Engineered Systems
NASA Astrophysics Data System (ADS)
Sikkandar Basha, Nazareen
The design and the development of Large-Scale Complex Engineered Systems (LSCES) requires the involvement of multiple teams and numerous levels of the organization and interactions with large numbers of people and interdisciplinary departments. Traditionally, requirements-driven Systems Engineering (SE) is used in the design and development of these LSCES. The requirements are used to capture the preferences of the stakeholder for the LSCES. Due to the complexity of the system, multiple levels of interactions are required to elicit the requirements of the system within the organization. Since LSCES involve people and interactions between teams and interdisciplinary departments, they are socio-technical in nature. The elicitation of requirements in most large-scale system projects is subject to creep in time and cost due to the uncertainty and ambiguity of requirements during design and development. In an organizational structure, cost and time overruns can occur at any level and iterate back and forth, further increasing cost and time. To avoid such creep, past research has shown that rigorous approaches such as value-based design can be used to control it. But before the rigorous approaches can be used, the decision maker should have a proper understanding of requirements creep and the state of the system when the creep occurs. Sensemaking is used to understand the state of the system when the creep occurs and to provide guidance to the decision maker. This research proposes the use of the Cynefin framework, a sensemaking framework, in the design and development of LSCES. It can aid in understanding the system and in decision making to minimize the value gap due to requirements creep by eliminating ambiguity that occurs during design and development. A sample hierarchical organization is used to demonstrate the state of the system at the occurrence of requirements creep in terms of cost and time using the Cynefin framework. These trials are continued for different requirements and at different sub-system levels. The results obtained show that the Cynefin framework can be used to improve the value of the system and can be used for predictive analysis. Decision makers can use these findings to apply rigorous approaches and improve the design of Large-Scale Complex Engineered Systems.
Rodeles, Amaia A.; Galicia, David; Miranda, Rafael
2016-01-01
The study of freshwater fish species biodiversity and community composition is essential for understanding river systems, the effects of human activities on rivers, and the changes these animals face. Conducting this type of research requires quantitative information on fish abundance, ideally with long-term series and fish body measurements. This Data Descriptor presents a collection of 12 datasets containing a total of 146,342 occurrence records of 41 freshwater fish species sampled in 233 localities of various Iberian river basins. The datasets also contain 148,749 measurement records (length and weight) for these fish. Data were collected in different sampling campaigns (from 1992 to 2015). Eleven datasets represent large projects conducted over several years, and another combines small sampling campaigns. The Iberian Peninsula contains high fish biodiversity, with numerous endemic species threatened by various menaces, such as water extraction and invasive species. These data may support the development of large biodiversity conservation studies. PMID:27727236
Talley, Rachel; Chiang, I-Chin; Covell, Nancy H; Dixon, Lisa
2018-06-01
Improved dissemination is critical to implementation of evidence-based practice in community behavioral healthcare settings. Web-based training modalities are a promising strategy for dissemination of evidence-based practice in community behavioral health settings. Initial and sustained engagement with these modalities in large, multidisciplinary community provider samples is not well understood. This study evaluates comparative engagement and user preferences by provider type in a web-based training platform in a large, multidisciplinary community sample of behavioral health staff in New York State. Workforce make-up among platform registrants was compared to the general NYS behavioral health workforce. Training completion by functional job type was compared to characterize user engagement and preferences. Frequently completed modules were classified by credit and requirement incentives. High initial training engagement across professional roles was demonstrated, with significant differences in initial and sustained engagement by professional role. The most frequently completed modules across functional job types contained credit or requirement incentives. The analysis demonstrated that high engagement with a web-based training platform in a multidisciplinary provider audience can be achieved without tailoring content to specific professional roles. Overlap between frequently completed modules and incentives suggests a role for incentives in promoting engagement with web-based training. These findings further the understanding of strategies to promote large-scale dissemination of evidence-based practice in community behavioral health settings.
Design and test of porous-tungsten mercury vaporizers
NASA Technical Reports Server (NTRS)
Kerslake, W. R.
1972-01-01
Future use of large size Kaufman thrusters and thruster arrays will impose new design requirements for porous plug type vaporizers. Larger flow rate coupled with smaller pores to prevent liquid intrusion will be desired. The results of testing samples of porous tungsten for flow rate, liquid intrusion pressure level, and mechanical strength are presented. Nitrogen gas was used in addition to mercury flow for approximate calibration. Liquid intrusion pressure levels will require that flight thruster systems with long feed lines have some way (a valve) to restrict dynamic line pressures during launch.
Kundu, Anupam; Sabhapandit, Sanjib; Dhar, Abhishek
2011-03-01
We present an algorithm for finding the probabilities of rare events in nonequilibrium processes. The algorithm consists of evolving the system with a modified dynamics for which the required event occurs more frequently. By keeping track of the relative weight of phase-space trajectories generated by the modified and the original dynamics, one can obtain the required probabilities. The algorithm is tested on two model systems of steady-state particle and heat transport, where we find a huge improvement over direct simulation methods.
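The abstract gives no implementation details, so the sketch below illustrates the general idea with the simplest possible case: estimating the small probability that a random walk ends above a high threshold by simulating a tilted (modified) step distribution and reweighting each trajectory by the likelihood ratio between the original and modified dynamics. The Gaussian step model and the chosen tilt are assumptions, not the authors' transport models.

```python
import numpy as np
from scipy.stats import norm

def rare_event_probability(n_steps=100, threshold=40.0, tilt=0.4,
                           n_traj=100_000, seed=0):
    """Estimate P(sum of n_steps standard-normal steps > threshold) by simulating
    the *modified* dynamics (steps ~ N(tilt, 1)) and reweighting each trajectory
    by the likelihood ratio of the original to the modified step densities."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(loc=tilt, scale=1.0, size=(n_traj, n_steps))
    # log weight per trajectory: sum_i [log phi(x_i) - log phi(x_i - tilt)]
    #                          = sum_i (-tilt * x_i + tilt**2 / 2)
    log_w = np.sum(-tilt * steps + 0.5 * tilt**2, axis=1)
    hit = steps.sum(axis=1) > threshold
    return float(np.mean(np.exp(log_w) * hit))

est = rare_event_probability()
exact = norm.sf(40.0, scale=np.sqrt(100))   # the unmodified sum is N(0, n_steps)
print(f"importance-sampling estimate: {est:.3e}, exact: {exact:.3e}")
```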
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.
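For reference, the non-robust method-of-moments (Matheron) estimator discussed above can be written in a few lines; the sketch below bins squared differences of collector values by separation distance. The simulated (spatially uncorrelated) field, plot size, and bin width are assumptions, so the resulting variogram is essentially flat; real throughfall data would show spatial structure.

```python
import numpy as np

def empirical_semivariogram(coords, values, bin_width=5.0, max_lag=50.0):
    """Matheron (method-of-moments) estimator:
    gamma(h) = 1 / (2 N(h)) * sum over pairs in the lag bin of (z_i - z_j)^2."""
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sq_diff = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # each pair counted once
    d, sq_diff = d[iu], sq_diff[iu]
    bins = np.arange(0.0, max_lag + bin_width, bin_width)
    lags, gammas = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (d >= lo) & (d < hi)
        if mask.any():
            lags.append(d[mask].mean())
            gammas.append(0.5 * sq_diff[mask].mean())
    return np.array(lags), np.array(gammas)

# Hypothetical example: 200 throughfall collectors in a 100 m x 100 m plot.
rng = np.random.default_rng(3)
coords = rng.uniform(0, 100, size=(200, 2))
values = rng.lognormal(mean=0.0, sigma=0.5, size=200)   # skewed, as throughfall often is
print(empirical_semivariogram(coords, values))
```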
[Ethical aspects of biological sample banks].
Cambon-Thomsen, A; Rial-Sebbag, E
2003-02-01
Numerous activities in the domain of epidemiology require the constitution or use of biological sample banks. Such biobanks raise ethical issues. A number of recommendations are applicable to this field, in France and elsewhere. Major principles applicable to biobanks include respect for personal autonomy, respect for the human body, and respect for confidentiality. These principles are translated into practice through the following procedures: relevant information given to the persons concerned regarding the management of their samples prior to informed consent, the opinion of an independent ethics committee, and actual implementation of conditions for protecting samples and data. However, although these principles may appear simple and obvious, it is not always easy for researchers to find their way in a research practice that is largely international and involves a wide variety of biobanks. Attitudes vary between countries, numerous texts apply to the different types of biobanks, the same texts are interpreted differently in different institutions, new ethical opinions continue to be issued, and, above all, the uses of samples that are possible today, especially in genetics, raise questions that were not foreseeable at the time of sampling; all of this makes the field difficult in practice. This article reviews the types of biobanks and the relevant ethical issues, and uses practical examples to highlight situations that remain unclear or ambiguous.
Generalized analog thresholding for spike acquisition at ultralow sampling rates
He, Bryan D.; Wein, Alex; Varshney, Lav R.; Kusuma, Julius; Richardson, Andrew G.
2015-01-01
Efficient spike acquisition techniques are needed to bridge the divide from creating large multielectrode arrays (MEA) to achieving whole-cortex electrophysiology. In this paper, we introduce generalized analog thresholding (gAT), which achieves millisecond temporal resolution with sampling rates as low as 10 Hz. Consider the torrent of data from a single 1,000-channel MEA, which would generate more than 3 GB/min using standard 30-kHz Nyquist sampling. Recent neural signal processing methods based on compressive sensing still require Nyquist sampling as a first step and use iterative methods to reconstruct spikes. Analog thresholding (AT) remains the best existing alternative, where spike waveforms are passed through an analog comparator and sampled at 1 kHz, with instant spike reconstruction. By generalizing AT, the new method reduces sampling rates another order of magnitude, detects more than one spike per interval, and reconstructs spike width. Unlike compressive sensing, the new method reveals a simple closed-form solution to achieve instant (noniterative) spike reconstruction. The base method is already robust to hardware nonidealities, including realistic quantization error and integration noise. Because it achieves these considerable specifications using hardware-friendly components like integrators and comparators, generalized AT could translate large-scale MEAs into implantable devices for scientific investigation and medical technology. PMID:25904712
Whole metagenome profiles of particulates collected from the International Space Station.
Be, Nicholas A; Avila-Herrera, Aram; Allen, Jonathan E; Singh, Nitin; Checinska Sielaff, Aleksandra; Jaing, Crystal; Venkateswaran, Kasthuri
2017-07-17
The built environment of the International Space Station (ISS) is a highly specialized space in terms of both physical characteristics and habitation requirements. It is unique with respect to conditions of microgravity, exposure to space radiation, and increased carbon dioxide concentrations. Additionally, astronauts inhabit a large proportion of this environment. The microbial composition of ISS particulates has been reported; however, its functional genomics, which are pertinent due to potential impact of its constituents on human health and operational mission success, are not yet characterized. This study examined the whole metagenome of ISS microbes at both species- and gene-level resolution. Air filter and dust samples from the ISS were analyzed and compared to samples collected in a terrestrial cleanroom environment. Furthermore, metagenome mining was carried out to characterize dominant, virulent, and novel microorganisms. The whole genome sequences of select cultivable strains isolated from these samples were extracted from the metagenome and compared. Species-level composition in the ISS was found to be largely dominated by Corynebacterium ihumii GD7, with overall microbial diversity being lower in the ISS relative to the cleanroom samples. When examining detection of microbial genes relevant to human health such as antimicrobial resistance and virulence genes, it was found that a larger number of relevant gene categories were observed in the ISS relative to the cleanroom. Strain-level cross-sample comparisons were made for Corynebacterium, Bacillus, and Aspergillus showing possible distinctions in the dominant strain between samples. Species-level analyses demonstrated distinct differences between the ISS and cleanroom samples, indicating that the cleanroom population is not necessarily reflective of space habitation environments. The overall population of viable microorganisms and the functional diversity inherent to this unique closed environment are of critical interest with respect to future space habitation. Observations and studies such as these will be important to evaluating the conditions required for long-term health of human occupants in such environments.
Klymiuk, Ingeborg; Bambach, Isabella; Patra, Vijaykumar; Trajanoski, Slave; Wolf, Peter
2016-01-01
Microbiome research and improvements in high throughput sequencing technologies revolutionize our current scientific viewpoint. The human associated microbiome is a prominent focus of clinical research. Large cohort studies are often required to investigate the human microbiome composition and its changes in a multitude of human diseases. Reproducible analyses of large cohort samples require standardized protocols in study design, sampling, storage, processing, and data analysis. In particular, the effect of sample storage on actual results is critical for reproducibility. So far, the effect of storage conditions on the results of microbial analysis has been examined for only a few human biological materials (e.g., stool samples). There is a lack of data and information on appropriate storage conditions on other human derived samples, such as skin. Here, we analyzed skin swab samples collected from three different body locations (forearm, V of the chest and back) of eight healthy volunteers. The skin swabs were soaked in sterile buffer and total DNA was isolated after freezing at -80°C for 24 h, 90 or 365 days. Hypervariable regions V1-2 were amplified from total DNA and libraries were sequenced on an Illumina MiSeq desktop sequencer in paired end mode. Data were analyzed using Qiime 1.9.1. Summarizing all body locations per time point, we found no significant differences in alpha diversity and multivariate community analysis among the three time points. Considering body locations separately significant differences in the richness of forearm samples were found between d0 vs. d90 and d90 vs. d365. Significant differences in the relative abundance of major skin genera (Propionibacterium, Streptococcus, Bacteroides, Corynebacterium, and Staphylococcus) were detected in our samples in Bacteroides only among all time points in forearm samples and between d0 vs. d90 and d90 vs. d365 in V of the chest and back samples. Accordingly, significant differences were detected in the ratios of the main phyla Actinobacteria, Firmicutes, and Bacteroidetes: Actinobacteria vs. Bacteroidetes at d0 vs. d90 (p-value = 0.0234), at d0 vs. d365 (p-value = 0.0234) and d90 vs. d365 (p-value = 0.0234) in forearm samples and at d90 vs. d365 in V of the chest (p-value = 0.0234) and back samples (p-value = 0.0234). The ratios of Firmicutes vs. Bacteroidetes showed no significant changes in any of the body locations as well as the ratios of Actinobacteria vs. Firmicutes at any time point. Studies with larger sample sizes are required to verify our results and determine long term storage effects with regard to specific biological questions. PMID:28066342
Contemporary Impact Analysis Methodology for Planetary Sample Return Missions
NASA Technical Reports Server (NTRS)
Perino, Scott V.; Bayandor, Javid; Samareh, Jamshid A.; Armand, Sasan C.
2015-01-01
Development of an Earth entry vehicle and the methodology created to evaluate the vehicle's impact landing response when returning to Earth is reported. NASA's future Mars Sample Return Mission requires a robust vehicle to return Martian samples back to Earth for analysis. The Earth entry vehicle is a proposed solution to this Mars mission requirement. During Earth reentry, the vehicle slows within the atmosphere and then impacts the ground at its terminal velocity. To protect the Martian samples, a spherical energy absorber called an impact sphere is under development. The impact sphere is composed of hybrid composite and crushable foam elements that endure large plastic deformations during impact and cause a highly nonlinear vehicle response. The developed analysis methodology captures a range of complex structural interactions and much of the failure physics that occurs during impact. Numerical models were created and benchmarked against experimental tests conducted at NASA Langley Research Center. The postimpact structural damage assessment showed close correlation between simulation predictions and experimental results. Acceleration, velocity, displacement, damage modes, and failure mechanisms were all effectively captured. These investigations demonstrate that the Earth entry vehicle has great potential in facilitating future sample return missions.
A Sample Handling System for Mars Sample Return - Design and Status
NASA Astrophysics Data System (ADS)
Allouis, E.; Renouf, I.; Deridder, M.; Vrancken, D.; Gelmi, R.; Re, E.
2009-04-01
A mission to return atmosphere and soil samples from Mars is highly desired by planetary scientists from around the world, and space agencies are starting preparations for the launch of a sample return mission in the 2020 timeframe. Such a mission would return approximately 500 grams of atmosphere, rock and soil samples to Earth by 2025. Development of a wide range of new technology will be critical to the successful implementation of such a challenging mission. Technical developments required to realise the mission include guided atmospheric entry, soft landing, sample handling robotics, biological sealing, Mars atmospheric ascent, sample rendezvous & capture and Earth return. The European Space Agency has been performing system definition studies along with numerous technology development studies under the framework of the Aurora programme. Within the scope of these activities Astrium has been responsible for defining an overall sample handling architecture in collaboration with European partners (sample acquisition and sample capture, Galileo Avionica; sample containment and automated bio-sealing, Verhaert). Our work has focused on the definition and development of the robotic systems required to move the sample through the transfer chain. This paper presents the Astrium team's high level design for the surface transfer system and the orbiter transfer system. The surface transfer system is envisaged to use two robotic arms of different sizes to allow flexible operations and to enable sample transfer over relatively large distances (~2 to 3 metres): the first to deploy/retract the Drill Assembly used for sample collection, the second for the transfer of the Sample Container (the vessel containing all the collected samples) from the Drill Assembly to the Mars Ascent Vehicle (MAV). The sample transfer actuator also features a complex end-effector for handling the Sample Container. The orbiter transfer system will transfer the Sample Container from the capture mechanism through a bio-sealing system to the Earth Return Capsule (ERC) and has distinctly different requirements from the surface transfer system. The operations required to transfer the samples to the ERC are clearly defined and make use of mechanisms specifically designed for the job rather than robotic arms. Though it is mechanical rather than robotic, the design of the orbiter transfer system is very complex in comparison to most previous missions to fulfil all the scientific and technological requirements. Further mechanisms will be required to lock the samples into the ERC and to close the door at the rear of the ERC through which the samples have been inserted. Having performed this overall definition study, Astrium is now leading the next step of the development of the MSR sample handling: the Mars Surface Sample Transfer and Manipulation project (MSSTM). Organised in two phases, the project will re-evaluate in phase 1 the output of the previous study in the light of new inputs (e.g. addition of a rover) and investigate further the architectures and systems involved in the sample transfer chain while identifying the critical technologies. The second phase of the project will concentrate on the prototyping of a number of these key technologies with the goal of providing an end-to-end validation of the surface sample transfer concept.
Xu, Jason; Minin, Vladimir N
2015-07-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
Xu, Jason; Minin, Vladimir N.
2016-01-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377
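The generating-function step can be illustrated on the simplest case, a linear pure-birth (Yule) process, whose generating function is known in closed form: transition probabilities are recovered by evaluating it on the unit circle and applying an inverse FFT. The rates, times, and truncation size below are hypothetical, and the compressed-sensing acceleration proposed in the abstract is not shown.

```python
import numpy as np
from scipy.stats import nbinom

LAM, T, N0 = 0.7, 1.0, 3   # hypothetical per-capita birth rate, time, initial count
K = 256                    # truncation / FFT size (must cover the bulk of the mass)

def pgf_yule(s, lam, t, n0):
    """Probability generating function of a linear pure-birth (Yule) process started
    from n0 individuals: G(s) = [s*p / (1 - (1 - p)*s)]**n0 with p = exp(-lam*t)."""
    p = np.exp(-lam * t)
    return (s * p / (1.0 - (1.0 - p) * s)) ** n0

# Evaluate the PGF on the unit circle and invert with an FFT to recover
# P(X(t) = k | X(0) = N0) for k = 0, ..., K-1.
s = np.exp(2j * np.pi * np.arange(K) / K)
probs = np.fft.fft(pgf_yule(s, LAM, T, N0)).real / K

# Cross-check against the known negative-binomial law of the Yule process.
k = np.arange(N0, K)
exact = nbinom.pmf(k - N0, N0, np.exp(-LAM * T))
print("max abs error:", np.max(np.abs(probs[N0:] - exact)))
```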
Validating a large geophysical data set: Experiences with satellite-derived cloud parameters
NASA Technical Reports Server (NTRS)
Kahn, Ralph; Haskins, Robert D.; Knighton, James E.; Pursch, Andrew; Granger-Gallegos, Stephanie
1992-01-01
We are validating the global cloud parameters derived from the satellite-borne HIRS2 and MSU atmospheric sounding instrument measurements, and are using the analysis of these data as one prototype for studying large geophysical data sets in general. The HIRS2/MSU data set contains a total of 40 physical parameters, filling 25 MB/day; raw HIRS2/MSU data are available for a period exceeding 10 years. Validation involves developing a quantitative sense for the physical meaning of the derived parameters over the range of environmental conditions sampled. This is accomplished by comparing the spatial and temporal distributions of the derived quantities with similar measurements made using other techniques, and with model results. The data handling needed for this work is possible only with the help of a suite of interactive graphical and numerical analysis tools. Level 3 (gridded) data is the common form in which large data sets of this type are distributed for scientific analysis. We find that Level 3 data is inadequate for the data comparisons required for validation. Level 2 data (individual measurements in geophysical units) is needed. A sampling problem arises when individual measurements, which are not uniformly distributed in space or time, are used for the comparisons. Standard 'interpolation' methods involve fitting the measurements for each data set to surfaces, which are then compared. We are experimenting with formal criteria for selecting geographical regions, based upon the spatial frequency and variability of measurements, that allow us to quantify the uncertainty due to sampling. As part of this project, we are also dealing with ways to keep track of constraints placed on the output by assumptions made in the computer code. The need to work with Level 2 data introduces a number of other data handling issues, such as accessing data files across machine types, meeting large data storage requirements, accessing other validated data sets, processing speed and throughput for interactive graphical work, and problems relating to graphical interfaces.
Triaxial testing of Lopez Fault gouge at 150 MPa mean effective stress
Scott, D.R.; Lockner, D.A.; Byerlee, J.D.; Sammis, C.G.
1994-01-01
Triaxial compression experiments were performed on samples of natural granular fault gouge from the Lopez Fault in Southern California. This material consists primarily of quartz and has a self-similar grain size distribution thought to result from natural cataclasis. The experiments were performed at a constant mean effective stress of 150 MPa, to expose the volumetric strains associated with shear failure. The failure strength is parameterized by the coefficient of internal friction μ, based on the Mohr-Coulomb failure criterion. Samples of remoulded Lopez gouge have internal friction μ = 0.6 ± 0.02. In experiments where the ends of the sample are constrained to remain axially aligned, suppressing strain localisation, the sample compacts before failure and dilates persistently after failure. In experiments where one end of the sample is free to move laterally, the strain localises to a single oblique fault at around the point of failure; some dilation occurs but does not persist. A comparison of these experiments suggests that dilation is confined to the region of shear localisation in a sample. Overconsolidated samples have slightly larger failure strengths than normally consolidated samples, and smaller axial strains are required to cause failure. A large amount of dilation occurs after failure in heavily overconsolidated samples, suggesting that dilation is occurring throughout the sample. Undisturbed samples of Lopez gouge, cored from the outcrop, have internal friction in the range μ = 0.4-0.6; the upper end of this range corresponds to the value established for remoulded Lopez gouge. Some kind of natural heterogeneity within the undisturbed samples is probably responsible for their low, variable strength. In samples of simulated gouge, with a more uniform grain size, active cataclasis during axial loading leads to large amounts of compaction. Larger axial strains are required to cause failure in simulated gouge, but the failure strength is similar to that of natural Lopez gouge. Use of the Mohr-Coulomb failure criterion to interpret the results from this study, and other recent studies on intact rock and granular gouge, leads to values of μ that depend on the loading configuration and the intact or granular state of the sample. Conceptual models are advanced to account for these discrepancies. The consequences for strain-weakening of natural faults are also discussed. © 1994 Birkhäuser Verlag.
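For reference, the Mohr-Coulomb criterion underlying the internal-friction parameterization relates the shear stress on the failure plane to the effective normal stress; the form below follows the usual textbook convention and is not quoted from the paper itself.

```latex
% Mohr-Coulomb failure criterion (standard form, not quoted from the paper):
% tau_f is the shear stress on the failure plane at failure, sigma_n' the
% effective normal stress, c the cohesion, and mu the coefficient of
% internal friction.
\[
  \tau_f \;=\; c \;+\; \mu\,\sigma_n', \qquad \mu = \tan\varphi ,
\]
% where varphi is the angle of internal friction.
```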
Phipps, Eric T.; D'Elia, Marta; Edwards, Harold C.; ...
2017-04-18
In this study, quantifying simulation uncertainties is a critical component of rigorous predictive simulation. A key component of this is forward propagation of uncertainties in simulation input data to output quantities of interest. Typical approaches involve repeated sampling of the simulation over the uncertain input data, and can require numerous samples when accurately propagating uncertainties from large numbers of sources. Often simulation processes from sample to sample are similar and much of the data generated from each sample evaluation could be reused. We explore a new method for implementing sampling methods that simultaneously propagates groups of samples together in an embedded fashion, which we call embedded ensemble propagation. We show how this approach takes advantage of properties of modern computer architectures to improve performance by enabling reuse between samples, reducing memory bandwidth requirements, improving memory access patterns, improving opportunities for fine-grained parallelization, and reducing communication costs. We describe a software technique for implementing embedded ensemble propagation based on the use of C++ templates and describe its integration with various scientific computing libraries within Trilinos. We demonstrate improved performance, portability and scalability for the approach applied to the simulation of partial differential equations on a variety of CPU, GPU, and accelerator architectures, including up to 131,072 cores on a Cray XK7 (Titan).
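The core idea, advancing a block of samples through the solver together so that sample-independent work and memory traffic are shared, can be sketched in vectorized Python; the PDE solve is replaced here by a trivial explicit diffusion update, and none of the C++ template or Trilinos machinery from the paper is reproduced. The sizes, diffusion coefficients, and quantity of interest are all hypothetical.

```python
import time
import numpy as np

N_SAMPLES, N_CELLS, N_STEPS = 64, 10_000, 200    # hypothetical sizes
rng = np.random.default_rng(2)
kappa = rng.uniform(0.5, 1.5, size=N_SAMPLES)    # uncertain diffusion coefficients

def step(u, k, dt=1e-5, dx=1e-2):
    """One explicit diffusion update; u has shape (..., N_CELLS)."""
    lap = np.zeros_like(u)
    lap[..., 1:-1] = u[..., 2:] - 2.0 * u[..., 1:-1] + u[..., :-2]
    coef = np.expand_dims(k, -1) if np.ndim(k) else k
    return u + coef * dt / dx**2 * lap

u0 = np.sin(np.linspace(0.0, np.pi, N_CELLS))

# One-sample-at-a-time propagation.
t0 = time.perf_counter()
qoi_loop = []
for k in kappa:
    u = u0.copy()
    for _ in range(N_STEPS):
        u = step(u, k)
    qoi_loop.append(u.mean())
t_loop = time.perf_counter() - t0

# Embedded (ensemble) propagation: all samples advance together, sharing the
# index arithmetic and streaming the state as one contiguous block.
t0 = time.perf_counter()
U = np.broadcast_to(u0, (N_SAMPLES, N_CELLS)).copy()
for _ in range(N_STEPS):
    U = step(U, kappa)
qoi_ens = U.mean(axis=1)
t_ens = time.perf_counter() - t0

print(f"loop: {t_loop:.2f}s  ensemble: {t_ens:.2f}s  "
      f"max diff: {np.max(np.abs(np.array(qoi_loop) - qoi_ens)):.2e}")
```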
NASA Astrophysics Data System (ADS)
Moore, T. S.; Sanderman, J.; Baldock, J.; Plante, A. F.
2016-12-01
National-scale inventories typically include soil organic carbon (SOC) content, but not chemical composition or biogeochemical stability. Australia's Soil Carbon Research Programme (SCaRP) represents a national inventory of SOC content and composition in agricultural systems. The program used physical fractionation followed by 13C nuclear magnetic resonance (NMR) spectroscopy. While these techniques are highly effective, they are typically too expensive and time consuming for use in large-scale SOC monitoring. We seek to understand if analytical thermal analysis is a viable alternative. Coupled differential scanning calorimetry (DSC) and evolved gas analysis (CO2- and H2O-EGA) yields valuable data on SOC composition and stability via ramped combustion. The technique requires little training to use, and does not require fractionation or other sample pre-treatment. We analyzed 300 agricultural samples collected by SCaRP, divided into four fractions: whole soil, coarse particulates (POM), untreated mineral associated (HUM), and hydrofluoric acid (HF)-treated HUM. All samples were analyzed by DSC-EGA, but only the POM and HF-HUM fractions were analyzed by NMR. Multivariate statistical analyses were used to explore natural clustering in SOC composition and stability based on DSC-EGA data. A partial least-squares regression (PLSR) model was used to explore correlations among the NMR and DSC-EGA data. Correlations demonstrated regions of combustion attributable to specific functional groups, which may relate to SOC stability. We are increasingly challenged with developing an efficient technique to assess SOC composition and stability at large spatial and temporal scales. Correlations between NMR and DSC-EGA may demonstrate the viability of using thermal analysis in lieu of more demanding methods in future large-scale surveys, and may provide data that goes beyond chemical composition to better approach quantification of biogeochemical stability.
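The partial least-squares regression step linking the thermal and NMR data can be sketched generically with scikit-learn; the arrays below are synthetic stand-ins for DSC-EGA traces and NMR-derived fractions, and the number of latent components is an arbitrary choice rather than a value from the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)

# Synthetic stand-ins: X ~ thermal (DSC-EGA) traces, Y ~ NMR-derived fractions.
n_samples, n_thermal, n_nmr = 300, 120, 4
latent = rng.normal(size=(n_samples, 3))               # shared latent structure
X = latent @ rng.normal(size=(3, n_thermal)) + 0.1 * rng.normal(size=(n_samples, n_thermal))
Y = latent @ rng.normal(size=(3, n_nmr)) + 0.1 * rng.normal(size=(n_samples, n_nmr))

pls = PLSRegression(n_components=3).fit(X, Y)
print("R^2 of NMR fractions predicted from thermal data:", round(pls.score(X, Y), 3))

# The X loadings indicate which combustion-temperature regions co-vary with the
# NMR-defined composition (here purely synthetic and illustrative).
print("X loadings shape:", pls.x_loadings_.shape)
```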
Precise mapping of the magnetic field in the CMS barrel yoke using cosmic rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatrchyan, S.; et al.,
2010-03-01
The CMS detector is designed around a large 4 T superconducting solenoid, enclosed in a 12000-tonne steel return yoke. A detailed map of the magnetic field is required for the accurate simulation and reconstruction of physics events in the CMS detector, not only in the inner tracking region inside the solenoid but also in the large and complex structure of the steel yoke, which is instrumented with muon chambers. Using a large sample of cosmic muon events collected by CMS in 2008, the field in the steel of the barrel yoke has been determined with a precision of 3 to 8% depending on the location.
Wuhan large pig roundworm virus identified in human feces in Brazil.
Luchs, Adriana; Leal, Elcio; Komninakis, Shirley Vasconcelos; de Pádua Milagres, Flavio Augusto; Brustulin, Rafael; da Aparecida Rodrigues Teles, Maria; Gill, Danielle Elise; Deng, Xutao; Delwart, Eric; Sabino, Ester Cerdeira; da Costa, Antonio Charlys
2018-03-28
We report here the complete genome sequence of a bipartite virus, herein denoted WLPRV/human/BRA/TO-34/201, from a sample collected in 2015 from a two-year-old child in Brazil presenting acute gastroenteritis. The virus has 98-99% identity (segments 2 and 1, respectively) with the Wuhan large pig roundworm virus (unclassified RNA virus) that was recently discovered in the stomachs of pigs from China. This is the first report of a Wuhan large pig roundworm virus detected in human specimens, and the second genome described worldwide. However, the generation of more sequence data and further functional studies are required to fully understand the ecology, epidemiology, and evolution of this new unclassified virus.
Cozzolino, Daniel
2015-03-30
Vibrational spectroscopy encompasses a number of techniques and methods including ultra-violet, visible, Fourier transform infrared or mid infrared, near infrared and Raman spectroscopy. The use and application of spectroscopy generates spectra containing hundreds of variables (absorbances at each wavenumber or wavelength), resulting in the production of large data sets representing the chemical and biochemical wine fingerprint. Multivariate data analysis techniques are then required to handle the large amount of data generated, to interpret the spectra in a meaningful way, and to develop a specific application. This paper focuses on developments in sample presentation and the main sources of error when vibrational spectroscopy methods are applied in wine analysis. Recent and novel applications will be discussed as examples of these developments. © 2014 Society of Chemical Industry.
Secondary analysis of national survey datasets.
Boo, Sunjoo; Froelicher, Erika Sivarajan
2013-06-01
This paper describes the methodological issues associated with secondary analysis of large national survey datasets. Issues about survey sampling, data collection, and non-response and missing data in terms of methodological validity and reliability are discussed. Although reanalyzing large national survey datasets is an expedient and cost-efficient way of producing nursing knowledge, successful investigations require a methodological consideration of the intrinsic limitations of secondary survey analysis. Nursing researchers using existing national survey datasets should understand potential sources of error associated with survey sampling, data collection, and non-response and missing data. Although it is impossible to eliminate all potential errors, researchers using existing national survey datasets must be aware of the possible influence of errors on the results of the analyses. © 2012 The Authors. Japan Journal of Nursing Science © 2012 Japan Academy of Nursing Science.
Video-rate volumetric neuronal imaging using 3D targeted illumination.
Xiao, Sheng; Tseng, Hua-An; Gritton, Howard; Han, Xue; Mertz, Jerome
2018-05-21
Fast volumetric microscopy is required to monitor large-scale neural ensembles with high spatio-temporal resolution. Widefield fluorescence microscopy can image large 2D fields of view at high resolution and speed while remaining simple and cost-effective. A focal sweep add-on can further extend the capacity of widefield microscopy by enabling extended-depth-of-field (EDOF) imaging, but suffers from an inability to reject out-of-focus fluorescence background. Here, by using a digital micromirror device to target only in-focus sample features, we perform EDOF imaging with greatly enhanced contrast and signal-to-noise ratio, while reducing the light dosage delivered to the sample. Image quality is further improved by the application of a robust deconvolution algorithm. We demonstrate the advantages of our technique for in vivo calcium imaging in the mouse brain.
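The abstract mentions a robust deconvolution algorithm without naming it; purely as an illustration, the sketch below implements the classic Richardson-Lucy iteration directly in NumPy on a synthetic image (the paper's actual algorithm, point-spread function, and data may differ).

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    """Minimal Richardson-Lucy deconvolution for a non-negative image."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Hypothetical demo: blur a synthetic point-source image with a Gaussian PSF,
# add noise, then deconvolve.
rng = np.random.default_rng(4)
truth = np.zeros((128, 128))
truth[rng.integers(0, 128, 40), rng.integers(0, 128, 40)] = 1.0

yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()

observed = fftconvolve(truth, psf, mode="same") + 0.01 * rng.random((128, 128))
restored = richardson_lucy(observed, psf)
print("peak of restored image:", restored.max())
```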
Mars Mobile Lander Systems for 2005 and 2007 Launch Opportunities
NASA Technical Reports Server (NTRS)
Sabahi, D.; Graf, J. E.
2000-01-01
A series of Mars missions is proposed for the August 2005 launch opportunity on a medium-class Evolved Expendable Launch Vehicle (EELV) with an injected mass capability of 2600 to 2750 kg. Known as the Ranger class, these Mars mission concepts have the following primary objectives: (1) deliver a mobile platform to the Mars surface with a large payload capability of 150 to 450 kg (depending on the 2005 or 2007 launch opportunity); (2) develop a robust, safe, and reliable workhorse entry, descent, and landing (EDL) capability for landed masses exceeding 750 kg; (3) provide feed-forward capability for the 2007 opportunity and beyond; and (4) provide an option for a long-life telecom relay orbiter. A number of future Mars mission concepts desire landers with large payload capability. Among these concepts is Mars sample return (MSR), which requires a 300 to 450 kg landed payload capability to accommodate sampling, sample transfer equipment and a Mars ascent vehicle (MAV). In addition to MSR, large in situ payloads of 150 kg provide a significant step up from the Mars Pathfinder (MPF) and Mars Polar Lander (MPL) class payloads of 20 to 30 kg. This capability enables numerous and physically large science instruments as well as human exploration development payloads. The payload may consist of drills, scoops, rock corers, imagers, spectrometers, an in situ propellant production experiment, and dust and environmental monitoring.
Bacterial identification in real samples by means of micro-Raman spectroscopy
NASA Astrophysics Data System (ADS)
Rösch, Petra; Stöckel, Stephan; Meisel, Susann; Bossecker, Anja; Münchberg, Ute; Kloss, Sandra; Schumacher, Wilm; Popp, Jürgen
2011-07-01
Pathogen detection without time delay is essential, especially for severe diseases like sepsis, where the survival rate depends on prompt antibiosis. For sepsis, the survival rate drops below 60% three hours after the onset of shock. Unfortunately, results from standard diagnostic methods like PCR or microbiological culture are normally available only after 12 or 36 h, respectively. Therefore, diagnostic methods that require little or no cultivation have to be established for medical diagnosis. Raman spectroscopy, as a vibrational spectroscopic method, is a very sensitive and selective approach that monitors the biochemical composition of the investigated sample. Micro-Raman spectroscopy allows a spatial resolution below 1 μm and is therefore in the size range of bacteria. Raman spectra of bacteria depend on their physiological status; therefore, the databases require the inclusion of the necessary environmental parameters such as temperature, pH, nutrition, etc. Such large databases require a specialized chemometric approach, since the variation between different strains is small. In this contribution we demonstrate the capability of Raman spectroscopy to identify pathogens without cultivation, even from real environmental or medical samples.
NASA Astrophysics Data System (ADS)
Gunawardhana, M. L. P.; Hopkins, A. M.; Bland-Hawthorn, J.; Brough, S.; Sharp, R.; Loveday, J.; Taylor, E.; Jones, D. H.; Lara-López, M. A.; Bauer, A. E.; Colless, M.; Owers, M.; Baldry, I. K.; López-Sánchez, A. R.; Foster, C.; Bamford, S.; Brown, M. J. I.; Driver, S. P.; Drinkwater, M. J.; Liske, J.; Meyer, M.; Norberg, P.; Robotham, A. S. G.; Ching, J. H. Y.; Cluver, M. E.; Croom, S.; Kelvin, L.; Prescott, M.; Steele, O.; Thomas, D.; Wang, L.
2013-08-01
Measurements of the low-z Hα luminosity function, Φ, have a large dispersion in the local number density of sources (~0.5-1 Mpc⁻³ dex⁻¹), and correspondingly in the star formation rate density (SFRD). The possible causes for these discrepancies include limited volume sampling, biases arising from survey sample selection, different methods of correcting for dust obscuration and active galactic nucleus contamination. The Galaxy And Mass Assembly (GAMA) survey and Sloan Digital Sky Survey (SDSS) provide deep spectroscopic observations over a wide sky area enabling detection of a large sample of star-forming galaxies spanning 0.001 < SFR_Hα (M⊙ yr⁻¹) < 100 with which to robustly measure the evolution of the SFRD in the low-z Universe. The large number of high-SFR galaxies present in our sample allows an improved measurement of the bright end of the luminosity function, indicating that the decrease in Φ at bright luminosities is best described by a Saunders functional form rather than the traditional Schechter function. This result is consistent with other published luminosity functions in the far-infrared and radio. For GAMA and SDSS, we find that the r-band apparent magnitude limit, combined with the subsequent requirement for Hα detection, leads to an incompleteness due to missing bright Hα sources with faint r-band magnitudes.
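For context, the two functional forms compared in the abstract are usually written as follows; the parameterizations below are the standard ones from the literature rather than expressions quoted from this paper.

```latex
% Schechter form (standard parameterization):
\[
  \phi(L)\,\mathrm{d}L \;=\; \phi^{*}\left(\frac{L}{L^{*}}\right)^{\alpha}
  \exp\!\left(-\frac{L}{L^{*}}\right)\frac{\mathrm{d}L}{L^{*}} ,
\]
% Saunders et al. (1990) form, which declines more slowly at the bright end:
\[
  \phi(L)\,\mathrm{d}\log_{10}L \;=\; \phi^{*}\left(\frac{L}{L^{*}}\right)^{1-\alpha}
  \exp\!\left[-\frac{1}{2\sigma^{2}}\log_{10}^{2}\!\left(1+\frac{L}{L^{*}}\right)\right]
  \mathrm{d}\log_{10}L .
\]
```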
Multi-Reader ROC studies with Split-Plot Designs: A Comparison of Statistical Methods
Obuchowski, Nancy A.; Gallas, Brandon D.; Hillis, Stephen L.
2012-01-01
Rationale and Objectives: Multi-reader imaging trials often use a factorial design, where study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of the design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper we compare three methods of analysis for the split-plot design. Materials and Methods: Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean ANOVA approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power and confidence interval coverage of the three test statistics. Results: The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% CIs falls close to the nominal coverage for small and large sample sizes. Conclusions: The split-plot MRMC study design can be statistically efficient compared with the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rate, similar power, and nominal CI coverage, are available for this study design. PMID:23122570
Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.
Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L
2012-12-01
Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.
An Internationally Coordinated Science Management Plan for Samples Returned from Mars
NASA Astrophysics Data System (ADS)
Haltigin, T.; Smith, C. L.
2015-12-01
Mars Sample Return (MSR) remains a high priority of the planetary exploration community. Such an effort will undoubtedly be too large for any individual agency to conduct itself, and thus will require extensive global cooperation. To help prepare for an eventual MSR campaign, the International Mars Exploration Working Group (IMEWG) chartered the international Mars Architecture for the Return of Samples (iMARS) Phase II working group in 2014, consisting of representatives from 17 countries and agencies. The overarching task of the team was to provide recommendations for progressing towards campaign implementation, including a proposed science management plan. Building upon the iMARS Phase I (2008) outcomes, the Phase II team proposed the development of an International MSR Science Institute as part of the campaign governance, centering its deliberations around four themes: Organization: including an organizational structure for the Institute that outlines roles and responsibilities of key members and describes sample return facility requirements; Management: presenting issues surrounding scientific leadership, defining guidelines and assumptions for Institute membership, and proposing a possible funding model; Operations & Data: outlining a science implementation plan that details the preliminary sample examination flow, sample allocation process, and data policies; and Curation: introducing a sample curation plan that comprises sample tracking and routing procedures, sample sterilization considerations, and long-term archiving recommendations. This work presents a summary of the group's activities, findings, and recommendations, highlighting the role of international coordination in managing the returned samples.
Improving the quality of biomarker discovery research: the right samples and enough of them.
Pepe, Margaret S; Li, Christopher I; Feng, Ziding
2015-06-01
Biomarker discovery research has yielded few biomarkers that validate for clinical use. A contributing factor may be poor study designs. The goal in discovery research is to identify a subset of potentially useful markers from a large set of candidates assayed on case and control samples. We recommend the PRoBE design for selecting samples. We propose sample size calculations that require specifying: (i) a definition for biomarker performance; (ii) the proportion of useful markers the study should identify (Discovery Power); and (iii) the tolerable number of useless markers amongst those identified (False Leads Expected, FLE). We apply the methodology to a study of 9,000 candidate biomarkers for risk of colon cancer recurrence where a useful biomarker has positive predictive value ≥ 30%. We find that 40 patients with recurrence and 160 without recurrence suffice to filter out 98% of useless markers (2% FLE) while identifying 95% of useful biomarkers (95% Discovery Power). Alternative methods for sample size calculation required more assumptions. Biomarker discovery research should utilize quality biospecimen repositories and include sample sizes that enable markers meeting prespecified performance characteristics for well-defined clinical applications to be identified. The scientific rigor of discovery research should be improved. ©2015 American Association for Cancer Research.
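The bookkeeping behind "False Leads Expected" and "Discovery Power" is simple expected-count arithmetic; the sketch below uses hypothetical numbers for the split between useful and useless candidates, since the abstract does not state how many of the 9,000 markers are truly useful.

```python
# Hypothetical illustration of the FLE / Discovery Power bookkeeping.
n_candidates = 9_000          # candidate biomarkers (from the abstract)
n_useful = 50                 # ASSUMED number of truly useful markers
n_useless = n_candidates - n_useful

per_marker_false_lead_rate = 0.02   # 2% of useless markers slip through
per_marker_discovery_power = 0.95   # 95% of useful markers are identified

false_leads_expected = n_useless * per_marker_false_lead_rate
useful_identified = n_useful * per_marker_discovery_power

print(f"expected false leads: {false_leads_expected:.0f}")
print(f"expected useful markers identified: {useful_identified:.1f}")
# The case/control sample size (40 vs. 160 in the abstract) is what drives the
# per-marker rates; deriving it requires the binomial calculations in the paper.
```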
Fujishima, Motonobu; Kawaguchi, Atsushi; Maikusa, Norihide; Kuwano, Ryozo; Iwatsubo, Takeshi; Matsuda, Hiroshi
2017-01-01
Little is known about the sample sizes required for clinical trials of Alzheimer's disease (AD)-modifying treatments using atrophy measures from serial brain magnetic resonance imaging (MRI) in the Japanese population. The primary objective of the present study was to estimate how large a sample size would be needed for future clinical trials for AD-modifying treatments in Japan using atrophy measures of the brain as a surrogate biomarker. Sample sizes were estimated from the rates of change of the whole brain and hippocampus by the k-means normalized boundary shift integral (KN-BSI) and cognitive measures using the data of 537 Japanese Alzheimer's Neuroimaging Initiative (J-ADNI) participants with a linear mixed-effects model. We also examined the potential use of ApoE status as a trial enrichment strategy. The hippocampal atrophy rate required smaller sample sizes than cognitive measures of AD and mild cognitive impairment (MCI). Inclusion of ApoE status reduced sample sizes for AD and MCI patients in the atrophy measures. These results show the potential use of longitudinal hippocampal atrophy measurement using automated image analysis as a progression biomarker and ApoE status as a trial enrichment strategy in a clinical trial of AD-modifying treatment in Japanese people.
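The abstract does not give its formulas, but sample sizes of this kind are commonly derived from the standard two-arm comparison of mean annual change; the sketch below uses that generic formula with assumed illustrative rates, not the J-ADNI estimates.

```python
from scipy.stats import norm

def n_per_arm(mean_rate, sd_rate, pct_slowing=0.25, alpha=0.05, power=0.80):
    """Standard two-sample formula: subjects per arm needed to detect a
    pct_slowing reduction in the mean annual rate of change."""
    delta = pct_slowing * mean_rate
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (sd_rate * z / delta) ** 2

# Assumed illustrative values (percent volume loss per year); NOT J-ADNI numbers.
print(round(n_per_arm(mean_rate=4.0, sd_rate=2.5)))   # hippocampus-like measure
print(round(n_per_arm(mean_rate=0.5, sd_rate=0.9)))   # whole-brain-like measure
```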
Gopalakrishnan, V; Subramanian, V; Baskaran, R; Venkatraman, B
2015-07-01
A wireless-based, custom-built aerosol sampling network is designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems are used in a field measurement campaign, in which sodium aerosol dispersion experiments have been conducted as a part of environmental impact studies related to a sodium-cooled fast reactor. The sampling network contains 40 aerosol sampling units, each with a custom-built sampling head and wireless control networking designed with Programmable System on Chip (PSoC™) and Xbee Pro RF modules. The base station control is designed using the graphical programming language LabView. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed over a wide area and uneven terrain, where manual operation is difficult due to the requirement of simultaneous operation and status logging.
ERIC Educational Resources Information Center
Ballieux, Haiko; Tomalski, Przemyslaw; Kushnerneko, Elena; Johnson, Mark H.; Karmiloff-Smith, Annette; Moore, Derek G.
2016-01-01
Recent work suggests that differences in functional brain development are already identifiable in 6- to 9-month-old infants from low socio-economic status (SES) backgrounds. Investigation of early SES-related differences in neuro-cognitive functioning requires the recruitment of large and diverse samples of infants, yet it is often difficult to…
The Effect of Number of Ability Intervals on the Stability of Item Bias Detection.
ERIC Educational Resources Information Center
Loyd, Brenda
The chi-square procedure has been suggested as a viable index of test bias because it provides the best agreement with the three parameter item characteristic curve without the large sample requirement, computer complexity, and cost. This study examines the effect of using different numbers of ability intervals on the reliability of chi-square…
User Requirements in Identifying Desired Works in a Large Library. Final Report.
ERIC Educational Resources Information Center
Lipetz, Ben-Ami
Utilization of the card catalog in the main library (Sterling Memorial Library) of Yale University was studied over a period of more than a year. Traffic flow in the catalog was observed, and was used as the basis for scheduling interviews with a representative sample of catalog users at the moment of catalog use. More than 2000 interviews were…
VizieR Online Data Catalog: REFLEX Galaxy Cluster Survey catalogue (Boehringer+, 2004)
NASA Astrophysics Data System (ADS)
Boehringer, H.; Schuecker, P.; Guzzo, L.; Collins, C. A.; Voges, W.; Cruddace, R. G.; Ortiz-Gil, A.; Chincarini, G.; de Grandi, S.; Edge, A. C.; MacGillivray, H. T.; Neumann, D. M.; Schindler, S.; Shaver, P.
2004-05-01
The following tables provide the catalogue as well as several data files necessary to reproduce the sample preparation. These files are also required for the cosmological modeling of these observations in e.g. the study of the statistics of the large-scale structure of the matter distribution in the Universe and related cosmological tests. (13 data files).
Recent Advances in Mycotoxin Determination for Food Monitoring via Microchip
Man, Yan; Liang, Gang; Li, An; Pan, Ligang
2017-01-01
Mycotoxins are one of the main factors impacting food safety. Mycotoxin contamination has threatened the health of humans and animals. Conventional methods for the detection of mycotoxins are gas chromatography (GC) or liquid chromatography (LC) coupled with mass spectrometry (MS), or enzyme-linked immunosorbent assay (ELISA). However, all these methods are time-consuming, require large-scale instruments and skilled technicians, and consume large amounts of hazardous reagents and solvents. Interestingly, a microchip requires less sample and a shorter analysis time, and can realize the integration, miniaturization, and high-throughput detection of the samples. Hence, the application of a microchip for the detection of mycotoxins can make up for the deficiencies of conventional detection methods. This review focuses on the application of a microchip to detect mycotoxins in foods. The toxicities of mycotoxins and the materials of the microchip are firstly summarized in turn. Then the application of a microchip that integrates various kinds of detection methods (optical, electrochemical, photo-electrochemical, and label-free detection) to detect mycotoxins is reviewed in detail. Finally, challenges and future research directions in the development of a microchip to detect mycotoxins are previewed. PMID:29036884
Chance-constrained economic dispatch with renewable energy and storage
Cheng, Jianqiang; Chen, Richard Li-Yang; Najm, Habib N.; ...
2018-04-19
Increased penetration of renewables, along with the uncertainties associated with them, has transformed how power systems are operated. High levels of uncertainty mean that it is no longer possible to guarantee operational feasibility with certainty; instead, constraints are required to be satisfied with high probability. We present a chance-constrained economic dispatch model that efficiently integrates energy storage and high renewable penetration to satisfy renewable portfolio requirements. Specifically, it is required that wind energy contributes at least a prespecified ratio of the total demand and that the scheduled wind energy is dispatchable with high probability. We develop an approximated partial sample average approximation (PSAA) framework to enable efficient solution of large-scale chance-constrained economic dispatch problems. Computational experiments on the IEEE-24 bus system show that the proposed PSAA approach is more accurate, closer to the prescribed tolerance, and about 100 times faster than sample average approximation. Improved efficiency of our PSAA approach enables solution of the WECC-240 system in minutes.
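The chance constraint (scheduled wind must be deliverable with high probability) can be checked empirically from scenario samples; the sketch below shows that sample-average check for a single hour with hypothetical demand, wind distribution, and portfolio ratio, not the IEEE-24 or WECC-240 dispatch model or the paper's PSAA method.

```python
import numpy as np

rng = np.random.default_rng(5)

demand = 100.0        # MW, hypothetical single-hour demand
rps_ratio = 0.10      # assumed renewable-portfolio requirement on wind's share
epsilon = 0.05        # allowed violation probability

# Scenario samples of available wind power (hypothetical distribution).
wind_scenarios = rng.gamma(shape=4.0, scale=12.0, size=10_000)

def chance_feasible(scheduled_wind, scenarios, eps):
    """Sample-average check of P(available wind < scheduled wind) <= eps."""
    violation_freq = np.mean(scenarios < scheduled_wind)
    return violation_freq <= eps

# Largest schedulable wind level meeting the chance constraint, then a check
# that it also satisfies the renewable-portfolio requirement.
candidate = np.quantile(wind_scenarios, epsilon)
print(f"schedulable wind: {candidate:.1f} MW, "
      f"chance-feasible: {chance_feasible(candidate, wind_scenarios, epsilon)}, "
      f"meets RPS: {candidate >= rps_ratio * demand}")
```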
NASA Technical Reports Server (NTRS)
Erickson, E. F.; Young, E. T.; Wolf, J.; Asbrock, J. F.; Lum, N.; DeVincenzi, D. (Technical Monitor)
2002-01-01
Arrays of far-infrared photoconductor detectors operate at a few degrees Kelvin and require electronic amplifiers in close proximity. For the electronics, a cryogenic multiplexer is ideal to avoid the large number of wires associated with individual amplifiers for each pixel, and to avoid adverse effects of thermal and radiative heat loads from the circuitry. For low background applications, the 32 channel CRC 696 CMOS device was previously developed for SIRTF, the cryogenic Space Infrared Telescope Facility. For higher background applications, we have developed a similar circuit, featuring several modifications: (a) an AC coupled, capacitive feedback transimpedance unit cell, to minimize input offset effects, thereby enabling low detector biases, (b) selectable feedback capacitors to enable operation over a wide range of backgrounds, and (c) clamp and sample & hold output circuits to improve sampling efficiency, which is a concern at the high readout rates required. We describe the requirements for and design of the new device.
NASA Astrophysics Data System (ADS)
Hütsi, Gert; Gilfanov, Marat; Kolodzig, Alexander; Sunyaev, Rashid
2014-12-01
We investigate the potential of large X-ray-selected AGN samples for detecting baryonic acoustic oscillations (BAO). Though AGN selection in the X-ray band is very clean and efficient, it does not provide redshift information and thus needs to be complemented with an optical follow-up. The main focus of this study is (i) to find the requirements needed for the quality of the optical follow-up and (ii) to formulate the optimal strategy of the X-ray survey, in order to detect the BAO. We demonstrate that a redshift accuracy of σ0 = 10^-2 at z = 1 and a catastrophic failure rate of ffail ≲ 30% are sufficient for a reliable detection of BAO in future X-ray surveys. Spectroscopic-quality redshifts (σ0 = 10^-3 and ffail ~ 0) will boost the confidence level of the BAO detection by a factor of ~2. For a meaningful detection of BAO, X-ray surveys of moderate depth of Flim ~ a few × 10^-15 erg s^-1 cm^-2 covering a sky area from a few hundred to ~ten thousand square degrees are required. The optimal strategy for the BAO detection does not necessarily require full sky coverage. For example, in a 1000-day-long survey by an eROSITA-type telescope, an optimal strategy would be to survey a sky area of ~9000 deg², yielding a ~16σ BAO detection. A similar detection will be achieved by ATHENA+ or WFXT class telescopes in a survey with a duration of 100 days, covering a similar sky area. XMM-Newton can achieve a marginal BAO detection in a 100-day survey covering ~400 deg². These surveys would demand a moderate-to-high cost in terms of the optical follow-up, requiring determination of redshifts of ~10^5 (XMM-Newton) to ~3 × 10^6 objects (eROSITA, ATHENA+, and WFXT) in these sky areas.
NASA Astrophysics Data System (ADS)
Dudak, J.; Zemlicka, J.; Krejci, F.; Karch, J.; Patzelt, M.; Zach, P.; Sykora, V.; Mrzilkova, J.
2016-03-01
X-ray microradiography and microtomography are imaging techniques with increasing applicability in the field of biomedical and preclinical research. Application of the hybrid pixel detector Timepix makes it possible to obtain very high contrast of low-attenuating materials such as soft biological tissue. However, X-ray imaging of ex-vivo soft tissue samples is a difficult task due to their structural instability. Ex-vivo biological tissue is prone to fast drying-out, which leads to undesired changes of sample size and shape and later produces artefacts within the tomographic reconstruction. In this work we present the optimization of our Timepix-equipped micro-CT system aiming to maintain soft tissue samples in stable condition. Thanks to the suggested approach, higher contrast of tomographic reconstructions can be achieved, while large samples that require detector scanning can also be easily measured.
Scene recognition based on integrating active learning with dictionary learning
NASA Astrophysics Data System (ADS)
Wang, Chengxi; Yin, Xueyan; Yang, Lin; Gong, Chengrong; Zheng, Caixia; Yi, Yugen
2018-04-01
Scene recognition is a significant topic in the field of computer vision. Most existing scene recognition models require a large amount of labeled training samples to achieve good performance. However, labeling images manually is a time-consuming task and often unrealistic in practice. In order to obtain satisfying recognition results when labeled samples are insufficient, this paper proposes a scene recognition algorithm named Integrating Active Learning and Dictionary Learning (IALDL). IALDL adopts projective dictionary pair learning (DPL) as the classifier and introduces an active learning mechanism into DPL to improve its performance. When constructing the sampling criterion in active learning, IALDL considers both uncertainty and representativeness to effectively select useful unlabeled samples from a given sample set for expanding the training dataset. Experimental results on three standard databases demonstrate the feasibility and validity of the proposed IALDL.
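The selection rule described, combining uncertainty and representativeness, can be illustrated with a minimal sketch. The entropy-based uncertainty score, the cosine-similarity representativeness score, the weighting alpha, and all names below are assumptions for illustration, not the authors' exact IALDL criterion.

```python
import numpy as np

def select_informative(probs, X_unlabeled, k=10, alpha=0.5):
    """Pick k unlabeled samples scoring high on uncertainty and representativeness.

    probs       : (n, c) class probabilities predicted for the unlabeled pool
    X_unlabeled : (n, d) feature vectors of the unlabeled pool
    """
    # Uncertainty: entropy of the predicted class distribution.
    uncertainty = -np.sum(probs * np.log(probs + 1e-12), axis=1)

    # Representativeness: mean cosine similarity to the rest of the pool.
    Xn = X_unlabeled / (np.linalg.norm(X_unlabeled, axis=1, keepdims=True) + 1e-12)
    representativeness = (Xn @ Xn.T).mean(axis=1)

    # Normalize both scores to [0, 1] before mixing.
    def norm(s):
        return (s - s.min()) / (s.ptp() + 1e-12)

    score = alpha * norm(uncertainty) + (1 - alpha) * norm(representativeness)
    return np.argsort(score)[::-1][:k]   # indices of the k highest-scoring samples
```

The selected indices would then be labeled and moved into the training set before the classifier is retrained.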
Maintaining and Enhancing Diversity of Sampled Protein Conformations in Robotics-Inspired Methods.
Abella, Jayvee R; Moll, Mark; Kavraki, Lydia E
2018-01-01
The ability to efficiently sample structurally diverse protein conformations allows one to gain a high-level view of a protein's energy landscape. Algorithms from robot motion planning have been used for conformational sampling, and several of these algorithms promote diversity by keeping track of "coverage" in conformational space based on the local sampling density. However, large proteins present special challenges. In particular, larger systems require running many concurrent instances of these algorithms, but these algorithms can quickly become memory intensive because they typically keep previously sampled conformations in memory to maintain coverage estimates. In addition, robotics-inspired algorithms depend on defining useful perturbation strategies for exploring the conformational space, which is a difficult task for large proteins because such systems are typically more constrained and exhibit complex motions. In this article, we introduce two methodologies for maintaining and enhancing diversity in robotics-inspired conformational sampling. The first method addresses algorithms based on coverage estimates and leverages the use of a low-dimensional projection to define a global coverage grid that maintains coverage across concurrent runs of sampling. The second method is an automatic definition of a perturbation strategy through readily available flexibility information derived from B-factors, secondary structure, and rigidity analysis. Our results show a significant increase in the diversity of the conformations sampled for proteins consisting of up to 500 residues when applied to a specific robotics-inspired algorithm for conformational sampling. The methodologies presented in this article may be vital components for the scalability of robotics-inspired approaches.
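A rough sketch of the first methodology, a coverage grid over a low-dimensional projection shared by concurrent runs, is given below. The 2-D projection callable, the grid resolution, and the rule of expanding the conformation in the least-visited cell are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class CoverageGrid:
    """Shared low-dimensional coverage estimate for concurrent sampling runs."""

    def __init__(self, projection, bounds, bins=50):
        self.projection = projection          # maps a conformation to 2-D coordinates
        self.bounds = np.asarray(bounds)      # [[xmin, xmax], [ymin, ymax]]
        self.counts = np.zeros((bins, bins), dtype=int)
        self.bins = bins

    def _cell(self, conf):
        p = self.projection(conf)
        lo, hi = self.bounds[:, 0], self.bounds[:, 1]
        idx = ((p - lo) / (hi - lo) * self.bins).astype(int)
        return tuple(np.clip(idx, 0, self.bins - 1))

    def record(self, conf):
        # Every run records its accepted conformations into the same grid.
        self.counts[self._cell(conf)] += 1

    def pick_for_expansion(self, confs):
        # Prefer expanding the stored conformation that sits in the least-visited cell.
        density = np.array([self.counts[self._cell(c)] for c in confs])
        return confs[int(np.argmin(density))]
```

In a multi-process setting the counts array would have to live in shared memory or be merged periodically, which is the memory saving the abstract points to relative to storing all sampled conformations.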
Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization.
Glaser, Joshua I; Zamft, Bradley M; Church, George M; Kording, Konrad P
2015-01-01
Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.
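Scenario (2), recovering a network's physical layout from its connectivity matrix alone, is essentially a dimensionality-reduction problem. Below is a minimal sketch using a Laplacian-eigenmap-style embedding; the choice of the graph Laplacian's smallest nontrivial eigenvectors as coordinates is an assumption standing in for the authors' large-scale algorithms.

```python
import numpy as np

def embed_from_connectivity(A, dim=2):
    """Approximate spatial coordinates from a symmetric connectivity matrix A.

    Nearby neurons are assumed to connect more often, so the smoothest
    eigenvectors of the graph Laplacian serve as low-dimensional coordinates.
    """
    A = np.asarray(A, dtype=float)
    A = 0.5 * (A + A.T)                    # symmetrize
    D = np.diag(A.sum(axis=1))
    L = D - A                              # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    # Skip the trivial constant eigenvector; take the next `dim` eigenvectors.
    return vecs[:, 1:dim + 1]
```

For genuinely large connectivity matrices a sparse eigensolver would replace the dense eigendecomposition, which is where scalable dimensionality-reduction methods come in.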
Sample Selection for Training Cascade Detectors.
Vállez, Noelia; Deniz, Oscar; Bueno, Gloria
2015-01-01
Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. In practice, the positive set has few samples while the negative set must represent anything except the object of interest; as a result, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average a better partial AUC and a smaller standard deviation than the other compared cascade detectors.
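The selection step, feeding the most informative false positives of one stage into the next, can be sketched as follows; the confidence-based ranking and the per-stage budget are illustrative assumptions rather than the paper's exact rule.

```python
import numpy as np

def select_hard_negatives(stage_classifier, negatives, budget):
    """Return the `budget` false positives the current stage is most confident about.

    stage_classifier(x) is assumed to return a score > 0 when the stage
    (wrongly) accepts a negative window x.
    """
    scores = np.array([stage_classifier(x) for x in negatives])
    false_pos = np.where(scores > 0)[0]                    # negatives that pass the stage
    hardest = false_pos[np.argsort(scores[false_pos])[::-1]]
    return [negatives[i] for i in hardest[:budget]]        # training pool for the next stage
```

Keeping only the hardest false positives keeps the negative set balanced with the small positive set while still exposing the next stage to the errors that matter.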
Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.
Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J
2015-06-15
Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected through both a double-crossed W-transect with samples taken every 10 steps (method 1) and four randomly located plots of 0.16 m² with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444) L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance component = 6.2), rather than due to pasture (variance component = 0.55) or season (variance component = 0.15). Using the observed distribution of L3, the required sample size (i.e. number of plots per pasture) for sampling a pasture through random plots with a particular precision was simulated. A higher relative precision was acquired when estimating pasture larval contamination on pastures with a high larval contamination and a low level of aggregation compared to pastures with a low larval contamination when the same sample size was applied. In the future, herbage sampling through random plots across pasture (method 2) seems a promising method to develop further, as no significant difference in counts between the methods was found and this method was less time-consuming. Copyright © 2015 Elsevier B.V. All rights reserved.
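The simulation of required plot numbers can be sketched as below; the negative-binomial model for aggregated larval counts, its parameters, and the precision criterion (relative standard error of the mean) are assumptions for illustration, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

def required_plots(mean_l3, aggregation_k, target_rse=0.2, max_plots=100, reps=2000):
    """Smallest number of random plots whose mean L3 count reaches the target precision."""
    for n in range(2, max_plots + 1):
        # Negative binomial: mean = mean_l3, dispersion (aggregation) parameter = k.
        p = aggregation_k / (aggregation_k + mean_l3)
        counts = rng.negative_binomial(aggregation_k, p, size=(reps, n))
        means = counts.mean(axis=1)
        rse = means.std(ddof=1) / means.mean()
        if rse <= target_rse:
            return n
    return max_plots

print(required_plots(mean_l3=300, aggregation_k=0.5))   # highly aggregated pasture: many plots
print(required_plots(mean_l3=300, aggregation_k=5.0))   # weakly aggregated pasture: few plots
```

The contrast between the two calls mirrors the paper's finding that aggregation, more than the mean contamination level, drives the sample size requirement.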
NASA Astrophysics Data System (ADS)
Vicuña, Cristián Molina; Höweler, Christoph
2017-12-01
The use of AE in machine failure diagnosis has increased over the last years. Most AE-based failure diagnosis strategies use digital signal processing and thus require the sampling of AE signals. High sampling rates are required for this purpose (e.g. 2 MHz or higher), leading to streams of large amounts of data. This situation is aggravated if fine resolution and/or multiple sensors are required. These facts combine to produce bulky data, typically in the range of gigabytes, for which sufficient storage space and efficient signal processing algorithms are required. This situation probably explains why, in practice, AE-based methods consist mostly of the calculation of scalar quantities such as RMS and kurtosis, and the analysis of their evolution in time. While the scalar-based approach offers the advantage of maximum data reduction, it has the disadvantage that most of the information contained in the raw AE signal is lost irrecoverably. This work presents a method offering large data reduction while keeping the most important information conveyed by the raw AE signal, useful for failure detection and diagnosis. The proposed method consists of the construction of a synthetic, unevenly sampled signal which envelopes the AE bursts present in the raw AE signal in a triangular shape. The constructed signal - which we call TriSignal - also permits the estimation of most scalar quantities typically used for failure detection. But more importantly, it contains the information of the time of occurrence of the bursts, which is key for failure diagnosis. The Lomb-Scargle normalized periodogram is used to construct the TriSignal spectrum, which reveals the frequency content of the TriSignal and provides the same information as the classic AE envelope. The paper includes application examples for a planetary gearbox and a low-speed rolling element bearing.
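Because the TriSignal is unevenly sampled, its spectrum calls for a least-squares periodogram rather than an FFT. A minimal sketch with SciPy's Lomb-Scargle routine is shown below; the burst times, amplitudes, and frequency grid are made-up illustration values, not data from the paper.

```python
import numpy as np
from scipy.signal import lombscargle

# Hypothetical TriSignal: triangular-envelope peak heights at uneven burst times.
t = np.sort(np.random.default_rng(2).uniform(0.0, 1.0, 200))      # seconds
f_fault = 13.0                                                     # Hz, assumed fault rate
amplitude = 1.0 + 0.5 * np.cos(2 * np.pi * f_fault * t)            # burst peak heights

freqs = np.linspace(0.5, 50.0, 2000)                               # Hz
# lombscargle expects angular frequencies and a zero-mean signal.
pgram = lombscargle(t, amplitude - amplitude.mean(), 2 * np.pi * freqs)

print("dominant frequency: %.1f Hz" % freqs[np.argmax(pgram)])
```

A peak at the assumed fault rate is what links the reduced TriSignal back to the diagnostic information of the classic AE envelope.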
Using lod scores to detect sex differences in male-female recombination fractions.
Feenstra, B; Greenberg, D A; Hodge, S E
2004-01-01
Human recombination fraction (RF) can differ between males and females, but investigators do not always know which disease genes are located in genomic areas of large RF sex differences. Knowledge of RF sex differences contributes to our understanding of basic biology and can increase the power of a linkage study, improve gene localization, and provide clues to possible imprinting. One way to detect these differences is to use lod scores. In this study we focused on detecting RF sex differences and answered the following questions, in both phase-known and phase-unknown matings: (1) How large a sample size is needed to detect a RF sex difference? (2) What are "optimal" proportions of paternally vs. maternally informative matings? (3) Does ascertaining nonoptimal proportions of paternally or maternally informative matings lead to ascertainment bias? Our results were as follows: (1) We calculated expected lod scores (ELODs) under two different conditions: "unconstrained," allowing sex-specific RF parameters (θ_female, θ_male); and "constrained," requiring θ_female = θ_male. We then examined the ΔELOD (≡ the difference between maximized constrained and unconstrained ELODs) and calculated minimum sample sizes required to achieve statistically significant ΔELODs. For large RF sex differences, samples as small as 10 to 20 fully informative matings can achieve statistical significance. We give general sample size guidelines for detecting RF differences in informative phase-known and phase-unknown matings. (2) We defined p as the proportion of paternally informative matings in the dataset, and the optimal proportion p° as that value of p that maximizes ΔELOD. We determined that, surprisingly, p° does not necessarily equal 1/2, although it does fall between approximately 0.4 and 0.6 in most situations. (3) We showed that if p in a sample deviates from its optimal value, no bias is introduced (asymptotically) to the maximum likelihood estimates of θ_female and θ_male, even though the ELOD is reduced (see point 2). This fact is important because often investigators cannot control the proportions of paternally and maternally informative families. In conclusion, it is possible to reliably detect sex differences in recombination fraction. Copyright 2004 S. Karger AG, Basel
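For phase-known, fully informative matings the comparison reduces to binomial likelihoods. The sketch below computes the unconstrained and constrained maximum log10-likelihoods and their difference for hypothetical recombinant counts; the counts are illustrative and the calculation is only the standard lod framework, not the paper's full ELOD machinery.

```python
import numpy as np

def max_log10_lik(k, n):
    """Maximized binomial log10-likelihood for k recombinants in n informative meioses."""
    theta = k / n
    t = np.clip(theta, 1e-12, 1 - 1e-12)   # guard against log(0) at theta = 0 or 1
    return k * np.log10(t) + (n - k) * np.log10(1 - t)

# Hypothetical data: recombinants / meioses from paternally and maternally informative matings.
k_m, n_m = 2, 40     # male (paternal) meioses
k_f, n_f = 14, 40    # female (maternal) meioses

# Unconstrained: separate theta_male and theta_female.
unconstrained = max_log10_lik(k_m, n_m) + max_log10_lik(k_f, n_f)
# Constrained: a single common theta for both sexes.
constrained = max_log10_lik(k_m + k_f, n_m + n_f)

delta_lod = unconstrained - constrained    # the null-hypothesis term log10 L(1/2) cancels
print(f"Delta LOD for a sex difference: {delta_lod:.2f}")
```

Larger counts or a bigger male-female gap raise this difference, which is the quantity the sample size guidelines in the abstract are built around.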
NASA Astrophysics Data System (ADS)
Sheikholeslami, R.; Hosseini, N.; Razavi, S.
2016-12-01
Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models several times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides an increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; on the contrary, PLHS generates a series of smaller sub-sets (also called "slices") such that: (1) each sub-set is Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive addition of sub-sets remains Latin hypercube; and thus (3) the entire sample set is Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
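For reference, ordinary one-stage Latin hypercube sampling, the baseline that PLHS slices and extends, can be generated in a few lines. This is a sketch of plain LHS only, not of the progressive slicing scheme itself.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """One-stage LHS: each 1-D projection hits every one of n_samples equal strata once."""
    rng = np.random.default_rng(rng)
    # Random position within each stratum, then a random permutation of strata per dimension.
    u = rng.uniform(size=(n_samples, n_dims))
    strata = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
    return (strata + u) / n_samples

X = latin_hypercube(10, 3, rng=0)
# Check the stratification property in the first dimension: exactly one point per decile.
print(np.sort((X[:, 0] * 10).astype(int)))
```

The difficulty PLHS addresses is that simply concatenating several such independent designs does not, in general, keep the union Latin hypercube.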
Large-scale quantum photonic circuits in silicon
NASA Astrophysics Data System (ADS)
Harris, Nicholas C.; Bunandar, Darius; Pant, Mihir; Steinbrecher, Greg R.; Mower, Jacob; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Englund, Dirk
2016-08-01
Quantum information science offers inherently more powerful methods for communication, computation, and precision measurement that take advantage of quantum superposition and entanglement. In recent years, theoretical and experimental advances in quantum computing and simulation with photons have spurred great interest in developing large photonic entangled states that challenge today's classical computers. As experiments have increased in complexity, there has been an increasing need to transition bulk optics experiments to integrated photonics platforms to control more spatial modes with higher fidelity and phase stability. The silicon-on-insulator (SOI) nanophotonics platform offers new possibilities for quantum optics, including the integration of bright, nonclassical light sources, based on the large third-order nonlinearity (χ(3)) of silicon, alongside quantum state manipulation circuits with thousands of optical elements, all on a single phase-stable chip. How large do these photonic systems need to be? Recent theoretical work on Boson Sampling suggests that even the problem of sampling from ~30 identical photons, having passed through an interferometer of hundreds of modes, becomes challenging for classical computers. While experiments of this size are still challenging, the SOI platform has the required component density to enable low-loss and programmable interferometers for manipulating hundreds of spatial modes. Here, we discuss the SOI nanophotonics platform for quantum photonic circuits with hundreds-to-thousands of optical elements and the associated challenges. We compare SOI to competing technologies in terms of requirements for quantum optical systems. We review recent results on large-scale quantum state evolution circuits and strategies for realizing high-fidelity heralded gates with imperfect, practical systems. Next, we review recent results on silicon photonics-based photon-pair sources and device architectures, and we discuss a path towards large-scale source integration. Finally, we review monolithic integration strategies for single-photon detectors and their essential role in on-chip feed-forward operations.
Chromium isotopic anomalies in the Allende meteorite
NASA Technical Reports Server (NTRS)
Papanastassiou, D. A.
1986-01-01
Abundances of the chromium isotopes in terrestrial and bulk meteorite samples are identical to 0.01 percent. However, Ca-Al-rich inclusions from the Allende meteorite show endemic isotopic anomalies in chromium which require at least three nucleosynthetic components. Large anomalies at Cr-54 in a special class of inclusions are correlated with large anomalies at Ca-48 and Ti-50 and provide strong support for a component reflecting neutron-rich nucleosynthesis at nuclear statistical equilibrium. This correlation suggests that materials from very near the core of an exploding massive star may be injected into the interstellar medium.
Advantages and challenges in automated apatite fission track counting
NASA Astrophysics Data System (ADS)
Enkelmann, E.; Ehlers, T. A.
2012-04-01
Fission track thermochronometer data are often a core element of modern tectonic and denudation studies. Soon after the development of the fission track methods, interest emerged in developing an automated counting procedure to replace the time-consuming labor of counting fission tracks under the microscope. Automated track counting became feasible in recent years with increasing improvements in computer software and hardware. One such example used in this study is the commercial automated fission track counting procedure from Autoscan Systems Pty, which has been highlighted through several venues. We conducted experiments designed to reliably and consistently test the ability of this fully automated counting system to recognize fission tracks in apatite and a muscovite external detector. Fission tracks were analyzed in samples with a step-wise increase in sample complexity. The first set of experiments used a large (mm-size) slice of Durango apatite cut parallel to the prism plane. Second, samples with 80-200 μm apatite grains of Fish Canyon Tuff were analyzed. This second sample set is characterized by complexities often found in apatites in different rock types. In addition to the automated counting procedure, the same samples were also analyzed using conventional counting procedures. We found for all samples that the fully automated fission track counting procedure using the Autoscan System yields a larger scatter in the measured fission track densities compared to conventional (manual) track counting. This scatter typically resulted from the false identification of tracks due to surface and mineralogical defects, regardless of the image filtering procedure used. Large differences between track densities analyzed with the automated counting persisted between different grains analyzed in one sample as well as between different samples. As a result of these differences, a manual correction of the fully automated fission track counts is necessary for each individual surface area and grain counted. This manual correction procedure significantly increases (up to four times) the time required to analyze a sample with the automated counting procedure compared to the conventional approach.
The use of Landsat for monitoring water parameters in the coastal zone
NASA Technical Reports Server (NTRS)
Bowker, D. E.; Witte, W. G.
1977-01-01
Landsats 1 and 2 have been successful in detecting and quantifying suspended sediment and several other important parameters in the coastal zone, including chlorophyll, particles, alpha (light transmission), tidal conditions, acid and sewage dumps, and in some instances oil spills. When chlorophyll a is present in detectable quantities, however, it is shown to interfere with the measurement of sediment. The Landsat banding problem impairs the instrument resolution and places a requirement on the sampling program to collect surface data from a sufficiently large area. A sampling method which satisfies this condition is demonstrated.
Orbital Circularization of Hot and Cool Kepler Eclipsing Binaries
NASA Astrophysics Data System (ADS)
Van Eylen, Vincent; Winn, Joshua N.; Albrecht, Simon
2016-06-01
The rate of tidal circularization is predicted to be faster for relatively cool stars with convective outer layers, compared to hotter stars with radiative outer layers. Observing this effect is challenging because it requires large and well-characterized samples that include both hot and cool stars. Here we seek evidence of the predicted dependence of circularization upon stellar type, using a sample of 945 eclipsing binaries observed by Kepler. This sample complements earlier studies of this effect, which employed smaller samples of better-characterized stars. For each Kepler binary we measure e cos ω based on the relative timing of the primary and secondary eclipses. We examine the distribution of e cos ω as a function of period for binaries composed of hot stars, cool stars, and mixtures of the two types. At the shortest periods, hot-hot binaries are most likely to be eccentric; for periods shorter than four days, significant eccentricities occur frequently for hot-hot binaries, but not for hot-cool or cool-cool binaries. This is in qualitative agreement with theoretical expectations based on the slower dissipation rates of hot stars. However, the interpretation of our results is complicated by the largely unknown ages and evolutionary states of the stars in our sample.
ORBITAL CIRCULARIZATION OF HOT AND COOL KEPLER ECLIPSING BINARIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eylen, Vincent Van; Albrecht, Simon; Winn, Joshua N., E-mail: vincent@phys.au.dk
The rate of tidal circularization is predicted to be faster for relatively cool stars with convective outer layers, compared to hotter stars with radiative outer layers. Observing this effect is challenging because it requires large and well-characterized samples that include both hot and cool stars. Here we seek evidence of the predicted dependence of circularization upon stellar type, using a sample of 945 eclipsing binaries observed by Kepler. This sample complements earlier studies of this effect, which employed smaller samples of better-characterized stars. For each Kepler binary we measure e cos ω based on the relative timing of the primary and secondary eclipses. We examine the distribution of e cos ω as a function of period for binaries composed of hot stars, cool stars, and mixtures of the two types. At the shortest periods, hot–hot binaries are most likely to be eccentric; for periods shorter than four days, significant eccentricities occur frequently for hot–hot binaries, but not for hot–cool or cool–cool binaries. This is in qualitative agreement with theoretical expectations based on the slower dissipation rates of hot stars. However, the interpretation of our results is complicated by the largely unknown ages and evolutionary states of the stars in our sample.
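For small eccentricities the timing measurement reduces to a one-line relation; the snippet below applies the standard first-order approximation (assumed here, not quoted from the paper) linking the primary-to-secondary eclipse offset to e cos ω.

```python
import numpy as np

def e_cos_omega(t_primary, t_secondary, period):
    """First-order estimate of e*cos(omega) from the relative eclipse timing.

    For a circular orbit the secondary eclipse falls exactly half a period
    after the primary; the displacement from phase 0.5 measures e*cos(omega).
    """
    phase_offset = ((t_secondary - t_primary) / period) % 1.0
    return (np.pi / 2.0) * (phase_offset - 0.5)

# Hypothetical example: secondary eclipse at phase 0.52 after the primary.
print(e_cos_omega(t_primary=0.0, t_secondary=5.2, period=10.0))   # ~0.031
```

A measurement consistent with zero therefore indicates a circularized (or edge-on aligned) orbit, which is how the period dependence of circularization is traced across the sample.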
Singh, Kumar Saurabh; Thual, Dominique; Spurio, Roberto; Cannata, Nicola
2015-01-01
One of the most crucial characteristics of day-to-day laboratory information management is the collection, storage and retrieval of information about research subjects and environmental or biomedical samples. An efficient link between sample data and experimental results is absolutely important for the successful outcome of a collaborative project. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such a LIMS can indeed bring laboratory information management to a higher level, but most of the time this requires a considerable investment of money, time and technical effort. There is a clear need for a lightweight open-source system which can easily be managed on local servers and handled by individual researchers. Here we present a software package named SaDA for storing, retrieving and analyzing data originating from microorganism monitoring experiments. SaDA is fully integrated in the management of environmental samples, oligonucleotide sequences, microarray data and the subsequent downstream analysis procedures. It is simple and generic software, and can be extended and customized for various environmental and biomedical studies. PMID:26047146
Singh, Kumar Saurabh; Thual, Dominique; Spurio, Roberto; Cannata, Nicola
2015-06-03
One of the most crucial characteristics of day-to-day laboratory information management is the collection, storage and retrieval of information about research subjects and environmental or biomedical samples. An efficient link between sample data and experimental results is absolutely important for the successful outcome of a collaborative project. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such a LIMS can indeed bring laboratory information management to a higher level, but most of the time this requires a considerable investment of money, time and technical effort. There is a clear need for a lightweight open-source system which can easily be managed on local servers and handled by individual researchers. Here we present a software package named SaDA for storing, retrieving and analyzing data originating from microorganism monitoring experiments. SaDA is fully integrated in the management of environmental samples, oligonucleotide sequences, microarray data and the subsequent downstream analysis procedures. It is simple and generic software, and can be extended and customized for various environmental and biomedical studies.
Graham, Jay P; VanDerslice, James
2007-06-01
Many communities along the US-Mexico border remain without infrastructure for water and sewage. Residents in these communities often collect and store their water in open 55-gallon drums. This study evaluated changes in drinking water quality resulting from an intervention that provided large closed water storage tanks (2,500-gallons) to individual homes lacking a piped water supply. After the intervention, many of the households did not change the source of their drinking water to the large storage tanks. Therefore, water quality results were first compared based on the source of the household's drinking water: store or vending machine, large tank, or collected from a public supply and transported by the household. Of the households that used the large storage tank as their drinking water supply, drinking water quality was generally of poorer quality. Fifty-four percent of samples collected prior to intervention had detectable levels of total coliforms, while 82% of samples were positive nine months after the intervention (p < 0.05). Exploratory analyses were also carried out to measure water quality at different points between collection by water delivery trucks and delivery to the household's large storage tank. Thirty percent of the samples taken immediately after water was delivered to the home had high total coliforms (> 10 CFU/100 ml). Mean free chlorine levels dropped from 0.43 mg/l, where the trucks filled their tanks, to 0.20 mg/l inside the household's tank immediately after delivery. Results of this study have implications for interventions that focus on safe water treatment and storage in the home, and for guidelines regarding the level of free chlorine required in water delivered by water delivery trucks.
Lelental, Natalia; Brandner, Sebastian; Kofanova, Olga; Blennow, Kaj; Zetterberg, Henrik; Andreasson, Ulf; Engelborghs, Sebastiaan; Mroczko, Barbara; Gabryelewicz, Tomasz; Teunissen, Charlotte; Mollenhauer, Brit; Parnetti, Lucilla; Chiasserini, Davide; Molinuevo, Jose Luis; Perret-Liaudet, Armand; Verbeek, Marcel M; Andreasen, Niels; Brosseron, Frederic; Bahl, Justyna M C; Herukka, Sanna-Kaisa; Hausner, Lucrezia; Frölich, Lutz; Labonte, Anne; Poirier, Judes; Miller, Anne-Marie; Zilka, Norbert; Kovacech, Branislav; Urbani, Andrea; Suardi, Silvia; Oliveira, Catarina; Baldeiras, Ines; Dubois, Bruno; Rot, Uros; Lehmann, Sylvain; Skinningsrud, Anders; Betsou, Fay; Wiltfang, Jens; Gkatzima, Olymbia; Winblad, Bengt; Buchfelder, Michael; Kornhuber, Johannes; Lewczuk, Piotr
2016-03-01
Assay-vendor independent quality control (QC) samples for neurochemical dementia diagnostics (NDD) biomarkers are so far commercially unavailable. This requires that NDD laboratories prepare their own QC samples, for example by pooling leftover cerebrospinal fluid (CSF) samples. Our aim was to prepare and test alternative matrices for QC samples that could facilitate intra- and inter-laboratory QC of the NDD biomarkers. Three matrices were validated in this study: (A) human pooled CSF, (B) Aβ peptides spiked into human prediluted plasma, and (C) Aβ peptides spiked into a solution of bovine serum albumin in phosphate-buffered saline. All matrices were also tested after supplementation with an antibacterial agent (sodium azide). We analyzed short- and long-term stability of the biomarkers with ELISA and chemiluminescence (Fujirebio Europe, MSD, IBL International), and performed an inter-laboratory variability study. NDD biomarkers turned out to be stable in almost all samples stored at the tested conditions for up to 14 days, as well as in samples stored deep-frozen (at -80°C) for up to one year. Sodium azide did not influence biomarker stability. Inter-center variability of the samples sent at room temperature (pooled CSF, freeze-dried CSF, and four artificial matrices) was comparable to the results obtained on deep-frozen samples in other large-scale projects. Our results suggest that it is possible to replace self-made, CSF-based QC samples with large-scale volumes of QC materials prepared with artificial peptides and matrices. This would greatly facilitate intra- and inter-laboratory QC schedules for NDD measurements.
Impacts of savanna trees on forage quality for a large African herbivore
De Kroon, Hans; Prins, Herbert H. T.
2008-01-01
Recently, cover of large trees in African savannas has rapidly declined due to elephant pressure, frequent fires and charcoal production. The reduction in large trees could have consequences for large herbivores through a change in forage quality. In Tarangire National Park, in northern Tanzania, we studied the impact of large savanna trees on forage quality for wildebeest by collecting samples of dominant grass species in open grassland and under and around large Acacia tortilis trees. Grasses growing under trees had a much higher forage quality than grasses from the open field, indicated by a more favourable leaf/stem ratio and higher protein and lower fibre concentrations. Analysing the grass leaf data with a linear programming model indicated that large savanna trees could be essential for the survival of wildebeest, the dominant herbivore in Tarangire. Due to the high fibre content and low nutrient and protein concentrations of grasses from the open field, maximum fibre intake is reached before nutrient requirements are satisfied. All requirements can only be satisfied by combining forage from open grassland with forage from either under or around tree canopies. Forage quality was also higher around dead trees than in the open field, so forage quality does not decline immediately after trees die, which explains why negative effects of reduced tree numbers probably go unnoticed initially. In conclusion, our results suggest that continued destruction of large trees could affect future numbers of large herbivores in African savannas, and better protection of large trees is probably necessary to sustain high animal densities in these ecosystems. PMID:18309522
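The diet argument is a small linear program: choose how much grass to eat from each source so that protein needs are met without exceeding the fibre ceiling. The sketch below sets up such an LP with scipy.optimize.linprog; all nutrient fractions and requirements are invented for illustration, not the study's measured values.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: kg DM eaten per day from open grassland (x0) and from under canopies (x1).
protein = np.array([0.06, 0.12])     # hypothetical crude protein fraction per kg DM
fibre = np.array([0.75, 0.60])       # hypothetical fibre fraction per kg DM

protein_req = 0.9                    # kg protein per day (assumed requirement)
fibre_max = 7.0                      # kg fibre per day (assumed maximum intake)

# Minimize total intake subject to: protein >= requirement and fibre <= maximum.
# linprog expects A_ub @ x <= b_ub, so the protein constraint is negated.
res = linprog(c=[1.0, 1.0],
              A_ub=[list(-protein), list(fibre)],
              b_ub=[-protein_req, fibre_max],
              bounds=[(0, None), (0, None)])

print(res.status, res.x)             # status 0 means a feasible diet exists
```

With these illustrative numbers a diet of open-grassland grass alone is infeasible (the fibre ceiling binds first), while mixing in canopy grass satisfies all constraints, which is the structure of the argument in the abstract.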
The Impact of Accelerating Faster than Exponential Population Growth on Genetic Variation
Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian
2014-01-01
Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models’ effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times. PMID:24381333
The impact of accelerating faster than exponential population growth on genetic variation.
Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian
2014-03-01
Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models' effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times.
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
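The basic moment-matching loop that the rectified algorithm accelerates can be sketched as follows: estimate the model's means and pairwise correlations by Gibbs sampling and move the fields and couplings toward the data moments. The tiny system size, learning rate, and plain gradient ascent (rather than the rectified, posterior-sampling update of the paper) are simplifications for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10                                   # number of spins (e.g., neurons)

def gibbs_moments(h, J, n_sweeps=2000, burn=500):
    """Estimate <s_i> and <s_i s_j> of an Ising model by Gibbs sampling (spins +/-1)."""
    s = rng.choice([-1, 1], size=N)
    m_sum, c_sum, kept = np.zeros(N), np.zeros((N, N)), 0
    for sweep in range(n_sweeps):
        for i in range(N):
            field = h[i] + J[i] @ s - J[i, i] * s[i]
            p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
            s[i] = 1 if rng.random() < p_up else -1
        if sweep >= burn:
            m_sum += s
            c_sum += np.outer(s, s)
            kept += 1
    return m_sum / kept, c_sum / kept

# Hypothetical "data" moments to be reproduced by the model.
data_mean = rng.uniform(-0.3, 0.3, N)
data_corr = np.outer(data_mean, data_mean) + 0.05 * np.eye(N)

h, J, lr = np.zeros(N), np.zeros((N, N)), 0.1
for step in range(50):                   # plain (unrectified) gradient ascent on the log-likelihood
    model_mean, model_corr = gibbs_moments(h, J)
    h += lr * (data_mean - model_mean)
    dJ = lr * (data_corr - model_corr)
    np.fill_diagonal(dJ, 0.0)            # no self-couplings
    J += 0.5 * (dJ + dJ.T)               # keep couplings symmetric
```

The slow, fluctuating convergence of this plain scheme is exactly the behaviour that the rectification of the parameter space and the posterior-sampling update are designed to cure.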
The macromorphoscopic databank.
Hefner, Joseph T
2018-04-20
The development of identification standards in forensic anthropology requires large and appropriate reference samples comprising individuals with modern birth years. Recent advances in macromorphoscopic trait data collection and analysis have created a need for reference data for classification models and biological distance analyses. The Macromorphoscopic Databank (N ∼ 7,397) serves that function, making publicly available trait scores for a large sample (n = 2,363) of modern American populations and world-wide groups of various geographic origins (n = 1,790). In addition, the MaMD stores reference data for a large sample (n = 3,244) of pre-, proto- and historic Amerindian data, useful for biodistance studies and finer-levels of analysis during NAGPRA-related investigations and repatriations. In developing this database, particular attention was given to the level of classification needed during the estimation of ancestry in a forensic context. To fill the knowledge gap that currently exists in the analysis of these data, the following overview outlines many of the issues and their potential solutions. Developing valuable tools that are useful to other practitioners is the purpose of growing a databank. As the Macromorphoscopic Databank develops through data collection efforts and contributions from the field, its utility as a research and teaching tool will also mature, in turn creating a vital resource for forensic anthropologists for future generations. © 2018 Wiley Periodicals, Inc.
Automation of a Large Analytical Chemistry Laboratory
1990-12-01
Brooks Air Force Base, Texas 78235-5501. ... remaining for the analyses. Our laboratory serves worldwide Air Force installations and therefore comes up against these sample holding time requirements.
NASA Astrophysics Data System (ADS)
Ibrahim, Dahi Ghareab Abdelsalam; Yasui, Takeshi
2018-04-01
Two-wavelength phase-shift interferometry guided by optical frequency combs is presented. We demonstrate the operation of the setup with a large step sample simultaneously with a resolution test target with a negative pattern. The technique can investigate multi-objects simultaneously with high precision. Using this technique, several important applications in metrology that require high speed and precision are demonstrated.
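In two-wavelength interferometry the measurable height range is set by the synthetic (beat) wavelength Λ = λ1·λ2/|λ1 − λ2|; the short snippet below evaluates it for two hypothetical comb-selected wavelengths (the values are not from the paper).

```python
# Synthetic wavelength of a two-wavelength interferometer (hypothetical wavelengths).
lam1 = 1550.00e-9          # m
lam2 = 1550.80e-9          # m

synthetic = lam1 * lam2 / abs(lam1 - lam2)
print(f"synthetic wavelength: {synthetic * 1e6:.1f} um")          # ~3005 um
print(f"unambiguous height range: {synthetic / 2 * 1e6:.1f} um")  # half the synthetic wavelength
```

Choosing closely spaced wavelengths, as a frequency comb allows, stretches the synthetic wavelength and hence the step height that can be measured without phase ambiguity.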
Code of Federal Regulations, 2014 CFR
2014-07-01
... According to the following requirements ... 1. Each metal melting furnace subject to a PM or total metal HAP... metal HAP performance test. iv. For cupola metal melting furnaces, sample PM or total metal HAP only during times when the cupola is on blast. v. For electric arc and electric induction metal melting...
Code of Federal Regulations, 2013 CFR
2013-07-01
... According to the following requirements ... 1. Each metal melting furnace subject to a PM or total metal HAP... metal HAP performance test. iv. For cupola metal melting furnaces, sample PM or total metal HAP only during times when the cupola is on blast. v. For electric arc and electric induction metal melting...
Code of Federal Regulations, 2012 CFR
2012-07-01
... According to the following requirements ... 1. Each metal melting furnace subject to a PM or total metal HAP... metal HAP performance test. iv. For cupola metal melting furnaces, sample PM or total metal HAP only during times when the cupola is on blast. v. For electric arc and electric induction metal melting...
Code of Federal Regulations, 2010 CFR
2010-07-01
... According to the following requirements ... 1. Each metal melting furnace subject to a PM or total metal HAP... metal HAP performance test. iv. For cupola metal melting furnaces, sample PM or total metal HAP only during times when the cupola is on blast. v. For electric arc and electric induction metal melting...
Code of Federal Regulations, 2011 CFR
2011-07-01
... According to the following requirements ... 1. Each metal melting furnace subject to a PM or total metal HAP... metal HAP performance test. iv. For cupola metal melting furnaces, sample PM or total metal HAP only during times when the cupola is on blast. v. For electric arc and electric induction metal melting...
Tian, Peng; Yang, David; Mandrell, Robert
2011-06-30
Human norovirus (NoV) outbreaks are major food safety concerns. The virus has to be concentrated from food samples in order to be detected. PEG precipitation is the most common method to recover the virus. Recently, histo-blood group antigens (HBGA) have been recognized as receptors for human NoV, and have been utilized as an alternative method to concentrate human NoV for samples up to 40 mL in volume. However, to wash off the virus from contaminated fresh food samples, at least 250 mL of wash volume is required. Recirculating affinity magnetic separation system (RCAMS) has been tried by others to concentrate human NoV from large-volume samples and failed to yield consistent results with the standard procedure of 30 min of recirculation at the default flow rate. Our work here demonstrates that proper recirculation time and flow rate are key factors for success in using the RCAMS. The bead recovery rate was increased from 28% to 47%, 67% and 90% when recirculation times were extended from 30 min to 60 min, 120 min and 180 min, respectively. The kinetics study suggests that at least 120 min recirculation is required to obtain a good recovery of NoV. In addition, different binding and elution conditions were compared for releasing NoV from inoculated lettuce. Phosphate-buffered saline (PBS) and water results in similar efficacy for virus release, but the released virus does not bind to RCAMS effectively unless pH was adjusted to acidic. Either citrate-buffered saline (CBS) wash, or water wash followed by CBS adjustment, resulted in an enhanced recovery of virus. We also demonstrated that the standard curve generated from viral RNA extracted from serially-diluted virus samples is more accurate for quantitative analysis than standard curves generated from serially-diluted plasmid DNA or transcribed-RNA templates, both of which tend to overestimate the concentration power. The efficacy of recovery of NoV from produce using RCAMS was directly compared with that of the PEG method in NoV inoculated lettuce. 40, 4, 0.4, and 0.04 RTU can be detected by both methods. At 0.004 RTU, NoV was detectable in all three samples concentrated by the RCAMS method, while none could be detected by the PEG precipitation method. RCAMS is a simple and rapid method that is more sensitive than conventional methods for recovery of NoV from food samples with a large sample size. In addition, the RTU value detected through RCAMS-processed samples is more biologically relevant. Published by Elsevier B.V.
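The quantification step the abstract compares, standard curves from serially diluted templates, is a straight-line fit of Ct against log10 concentration; the sketch below shows that fit and the back-calculation of an unknown, with entirely made-up Ct values.

```python
import numpy as np

# Hypothetical RT-qPCR standard curve: Ct values for a 10-fold dilution series.
log10_conc = np.array([4.0, 3.0, 2.0, 1.0, 0.0])       # log10 of relative template units
ct = np.array([18.1, 21.5, 24.9, 28.4, 31.8])          # made-up cycle thresholds

slope, intercept = np.polyfit(log10_conc, ct, 1)        # Ct = slope*log10(conc) + intercept
efficiency = 10 ** (-1.0 / slope) - 1.0                 # amplification efficiency

ct_unknown = 26.0
log10_unknown = (ct_unknown - intercept) / slope
print(f"slope {slope:.2f}, efficiency {efficiency:.1%}, unknown ~{10**log10_unknown:.2f} units")
```

Building the curve from serially diluted virus that passes through the same extraction step, rather than from plasmid DNA or transcribed RNA, is what keeps the back-calculated concentration from being overestimated, as the abstract notes.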
Paleomagnetism of a primitive achondrite parent body: The acapulcoite-lodranites
NASA Astrophysics Data System (ADS)
Schnepf, N. R.; Weiss, B. P.; Andrade Lima, E.; Fu, R. R.; Uehara, M.; Gattacceca, J.; Wang, H.; Suavet, C. R.
2014-12-01
Primitive achondrites are a recently recognized meteorite grouping with textures and compositions intermediate between unmelted meteorites (chondrites) and igneous meteorites (achondrites). Their existence demonstrates prima facie that some planetesimals only experienced partial rather than complete melting. We present the first paleomagnetic measurements of acapulcoite-lodranite meteorites to determine the existence and intensity of ancient magnetic fields on their parent body. Our paleomagnetic study tests the hypothesis that their parent body had an advecting metallic core, with the goal of providing one of the first geophysical constraints on its large-scale structure and the extent of interior differentiation. In particular, by analyzing samples whose petrologic textures require an origin on a partially differentiated body, we will be able to critically test a recent proposal that some achondrites and chondrite groups could have originated on a single body (Weiss and Elkins-Tanton 2013). We analyzed samples of the meteorites Acapulco and Lodran. Like other acapulcoites and lodranites, these meteorites are granular rocks containing large (~0.1-0.3 mm) kamacite and taenite grains along with similarly sized silicate crystals. Many silicate grains contain numerous fine (1-10 μm) FeNi metal inclusions. Our compositional measurements and rock magnetic data suggest that tetrataenite is rare or absent. Bulk paleomagnetic measurements were done on four mutually oriented bulk samples of Acapulco and one bulk sample of Lodran. Alternating field (AF) demagnetization revealed that the magnetization of the bulk samples is highly unstable, likely due to the large (~0.1-0.3 mm) interstitial kamacite grains throughout the samples. To overcome this challenge, we are analyzing individual ~0.2 mm mutually oriented silicate grains extracted using a wire saw micromill. Preliminary SQUID microscopy measurements of a Lodran silicate grain suggest magnetization stable to AF levels of at least 25-40 mT.
Ultrasensitive multiplex optical quantification of bacteria in large samples of biofluids
Pazos-Perez, Nicolas; Pazos, Elena; Catala, Carme; Mir-Simon, Bernat; Gómez-de Pedro, Sara; Sagales, Juan; Villanueva, Carlos; Vila, Jordi; Soriano, Alex; García de Abajo, F. Javier; Alvarez-Puebla, Ramon A.
2016-01-01
Efficient treatments in bacterial infections require the fast and accurate recognition of pathogens, with concentrations as low as one per milliliter in the case of septicemia. Detecting and quantifying bacteria in such low concentrations is challenging and typically demands cultures of large samples of blood (~1 milliliter) extending over 24–72 hours. This delay seriously compromises the health of patients. Here we demonstrate a fast microorganism optical detection system for the exhaustive identification and quantification of pathogens in volumes of biofluids with clinical relevance (~1 milliliter) in minutes. We drive each type of bacteria to accumulate antibody functionalized SERS-labelled silver nanoparticles. Particle aggregation on the bacteria membranes renders dense arrays of inter-particle gaps in which the Raman signal is exponentially amplified by several orders of magnitude relative to the dispersed particles. This enables a multiplex identification of the microorganisms through the molecule-specific spectral fingerprints. PMID:27364357
Li, Jiang; Bifano, Thomas G.; Mertz, Jerome
2016-01-01
We describe a wavefront sensor strategy for the implementation of adaptive optics (AO) in microscope applications involving thick, scattering media. The strategy is based on the exploitation of multiple scattering to provide oblique back illumination of the wavefront-sensor focal plane, enabling a simple and direct measurement of the flux-density tilt angles caused by aberrations at this plane. Advantages of the sensor are that it provides a large measurement field of view (FOV) while requiring no guide star, making it particularly adapted to a type of AO called conjugate AO, which provides a large correction FOV in cases when sample-induced aberrations arise from a single dominant plane (e.g., the sample surface). We apply conjugate AO here to widefield (i.e., nonscanning) fluorescence microscopy for the first time and demonstrate dynamic wavefront correction in a closed-loop implementation. PMID:27653793
Molecular dynamics based enhanced sampling of collective variables with very large time steps.
Chen, Pei-Yang; Tuckerman, Mark E
2018-01-14
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
Molecular dynamics based enhanced sampling of collective variables with very large time steps
NASA Astrophysics Data System (ADS)
Chen, Pei-Yang; Tuckerman, Mark E.
2018-01-01
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
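For contrast with the resonance-free isokinetic integrators the abstracts build on, standard multiple time-stepping (r-RESPA) splits the forces into fast and slow parts and subcycles only the fast ones. The sketch below shows that standard scheme on a toy one-particle problem, not the authors' Nosé-Hoover-coupled method.

```python
import numpy as np

def respa_step(x, v, mass, f_fast, f_slow, dt_outer, n_inner):
    """One r-RESPA step: slow forces at the outer time step, fast forces subcycled."""
    v = v + 0.5 * dt_outer * f_slow(x) / mass          # half kick from slow (expensive) forces
    dt_inner = dt_outer / n_inner
    for _ in range(n_inner):                           # velocity-Verlet subcycle on fast forces
        v = v + 0.5 * dt_inner * f_fast(x) / mass
        x = x + dt_inner * v
        v = v + 0.5 * dt_inner * f_fast(x) / mass
    v = v + 0.5 * dt_outer * f_slow(x) / mass          # closing half kick from slow forces
    return x, v

# Toy example: a stiff spring (fast) plus a weak spring (slow) acting on one particle.
f_fast = lambda x: -100.0 * x
f_slow = lambda x: -1.0 * x
x, v = np.array([1.0]), np.array([0.0])
for _ in range(1000):
    x, v = respa_step(x, v, mass=1.0, f_fast=f_fast, f_slow=f_slow, dt_outer=0.05, n_inner=10)
```

In this plain scheme the outer time step cannot be pushed much beyond the period of the fast motion without resonance; removing that ceiling is precisely what the isokinetic-constraint integrators described above achieve.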
2014-01-01
Background: RNA sequencing (RNA-seq) is emerging as a critical approach in biological research. However, its high-throughput advantage is significantly limited by the capacity of bioinformatics tools. The research community urgently needs user-friendly tools to efficiently analyze the complicated data generated by high-throughput sequencers. Results: We developed a standalone tool with graphic user interface (GUI)-based analytic modules, known as eRNA. The capacity of performing parallel processing and sample management facilitates large data analyses by maximizing hardware usage and freeing users from tediously handling sequencing data. The module "miRNA identification" includes GUIs for raw data reading, adapter removal, sequence alignment, and read counting. The module "mRNA identification" includes GUIs for reference sequences, genome mapping, transcript assembling, and differential expression. The module "Target screening" provides expression profiling analyses and graphic visualization. The module "Self-testing" offers the directory setups, sample management, and a check for third-party package dependency. Integration of other GUIs including Bowtie, miRDeep2, and miRspring extends the program's functionality. Conclusions: eRNA focuses on the common tools required for the mapping and quantification analysis of miRNA-seq and mRNA-seq data. The software package provides an additional choice for scientists who require a user-friendly computing environment and high-throughput capacity for large data analysis. eRNA is available for free download at https://sourceforge.net/projects/erna/?source=directory. PMID:24593312
Yuan, Tiezheng; Huang, Xiaoyi; Dittmar, Rachel L; Du, Meijun; Kohli, Manish; Boardman, Lisa; Thibodeau, Stephen N; Wang, Liang
2014-03-05
RNA sequencing (RNA-seq) is emerging as a critical approach in biological research. However, its high-throughput advantage is significantly limited by the capacity of bioinformatics tools. The research community urgently needs user-friendly tools to efficiently analyze the complicated data generated by high throughput sequencers. We developed a standalone tool with graphic user interface (GUI)-based analytic modules, known as eRNA. The capacity of performing parallel processing and sample management facilitates large data analyses by maximizing hardware usage and freeing users from tediously handling sequencing data. The module "miRNA identification" includes GUIs for raw data reading, adapter removal, sequence alignment, and read counting. The module "mRNA identification" includes GUIs for reference sequences, genome mapping, transcript assembling, and differential expression. The module "Target screening" provides expression profiling analyses and graphic visualization. The module "Self-testing" offers directory setup, sample management, and a check for third-party package dependency. Integration of other GUIs including Bowtie, miRDeep2, and miRspring extends the program's functionality. eRNA focuses on the common tools required for the mapping and quantification analysis of miRNA-seq and mRNA-seq data. The software package provides an additional choice for scientists who require a user-friendly computing environment and high-throughput capacity for large data analysis. eRNA is available for free download at https://sourceforge.net/projects/erna/?source=directory.
Wang, Yu Annie; Wu, Di; Auclair, Jared R; Salisbury, Joseph P; Sarin, Richa; Tang, Yang; Mozdzierz, Nicholas J; Shah, Kartik; Zhang, Anna Fan; Wu, Shiaw-Lin; Agar, Jeffery N; Love, J Christopher; Love, Kerry R; Hancock, William S
2017-12-05
With the advent of biosimilars in the U.S. market, it is important to have better analytical tools to ensure product quality from batch to batch. In addition, given the recent popularity of continuous processes for the production of biopharmaceuticals, the traditional bottom-up method alone is no longer sufficient for product characterization and quality analysis. The bottom-up method requires large amounts of material for analysis and is labor-intensive and time-consuming. Additionally, in this analysis, digestion of the protein with enzymes such as trypsin can introduce artifacts and modifications that increase the complexity of the analysis. On the other hand, a top-down method requires a minimal amount of sample and allows analysis of the intact protein mass and of the sequence generated from fragmentation within the instrument. However, fragmentation usually occurs at the N-terminal and C-terminal ends of the protein, with less internal fragmentation. Herein, we combine the complementary top-down and bottom-up methods for the characterization of human growth hormone degradation products. Notably, our approach required small amounts of sample, a requirement imposed by the sample constraints of small-scale manufacturing. Using this approach, we were able to characterize various protein variants, including post-translational modifications such as oxidation and deamidation, residual leader sequence, and proteolytic cleavage. Thus, we were able to highlight the complementarity of top-down and bottom-up approaches, which together achieved the characterization of a wide range of product variants in samples of human growth hormone secreted from Pichia pastoris.
A search for extraterrestrial amino acids in carbonaceous Antarctic micrometeorites
NASA Technical Reports Server (NTRS)
Brinton, K. L.; Engrand, C.; Glavin, D. P.; Bada, J. L.; Maurette, M.
1998-01-01
Antarctic micrometeorites (AMMs) in the 100-400 micron size range are the dominant mass fraction of extraterrestrial material accreted by the Earth today. A high performance liquid chromatography (HPLC) based technique exploited at the limits of sensitivity has been used to search for the extraterrestrial amino acids alpha-aminoisobutyric acid (AIB) and isovaline in AMMs. Five samples, each containing about 30 to 35 grains, were analyzed. All the samples possess a terrestrial amino acid component, indicated by the excess of the L-enantiomers of common protein amino acids. In only one sample (A91) was AIB found to be present at a level significantly above the background blanks. The concentration of AIB (approximately 280 ppm), and the AIB/isovaline ratio (> or = 10), in this sample are both much higher than in CM chondrites. The apparently large variation in the AIB concentrations of the samples suggests that AIB may be concentrated in a rare subset of micrometeorites. Because the AIB/isovaline ratio in sample A91 is much larger than in CM chondrites, the synthesis of amino acids in the micrometeorite parent bodies might have involved a different process requiring an HCN-rich environment, such as that found in comets. If the present day characteristics of the meteorite and micrometeorite fluxes can be extrapolated back in time, then the flux of large carbonaceous micrometeorites could have contributed to the inventory of prebiotic molecules on the early Earth.
A search for extraterrestrial amino acids in carbonaceous Antarctic micrometeorites.
Brinton, K L; Engrand, C; Glavin, D P; Bada, J L; Maurette, M
1998-10-01
Antarctic micrometeorites (AMMs) in the 100-400 micron size range are the dominant mass fraction of extraterrestrial material accreted by the Earth today. A high performance liquid chromatography (HPLC) based technique exploited at the limits of sensitivity has been used to search for the extraterrestrial amino acids alpha-aminoisobutyric acid (AIB) and isovaline in AMMs. Five samples, each containing about 30 to 35 grains, were analyzed. All the samples possess a terrestrial amino acid component, indicated by the excess of the L-enantiomers of common protein amino acids. In only one sample (A91) was AIB found to be present at a level significantly above the background blanks. The concentration of AIB (approximately 280 ppm), and the AIB/isovaline ratio (> or = 10), in this sample are both much higher than in CM chondrites. The apparently large variation in the AIB concentrations of the samples suggests that AIB may be concentrated in a rare subset of micrometeorites. Because the AIB/isovaline ratio in sample A91 is much larger than in CM chondrites, the synthesis of amino acids in the micrometeorite parent bodies might have involved a different process requiring an HCN-rich environment, such as that found in comets. If the present day characteristics of the meteorite and micrometeorite fluxes can be extrapolated back in time, then the flux of large carbonaceous micrometeorites could have contributed to the inventory of prebiotic molecules on the early Earth.
Influenza A Virus Isolation, Culture and Identification
Eisfeld, Amie J.; Neumann, Gabriele; Kawaoka, Yoshihiro
2017-01-01
Influenza A viruses (IAV) cause epidemics and pandemics that result in considerable financial burden and loss of human life. To manage annual IAV epidemics and prepare for future pandemics, improved understanding of how IAVs emerge, transmit, cause disease, and acquire pandemic potential is urgently needed. Fundamental techniques essential for procuring such knowledge are IAV isolation and culture from experimental and surveillance samples. Here, we present a detailed protocol for IAV sample collection and processing, amplification in chicken eggs and mammalian cells, and identification from samples containing unknown pathogens. This protocol is robust, and allows for generation of virus cultures that can be used for downstream analyses. Once experimental or surveillance samples are obtained, virus cultures can be generated and the presence of IAV can be verified in 3–5 days. Increased time-frames may be required for less experienced laboratory personnel, or when large numbers of samples will be processed. PMID:25321410
Statistical inference involving binomial and negative binomial parameters.
García-Pérez, Miguel A; Núñez-Antón, Vicente
2009-05-01
Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
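A minimal Monte Carlo sketch of the setting described above, using a crude Wald-type comparison rather than the exact tests derived in the paper: the parameter before the first success is estimated from geometric (negative binomial) sampling and the parameter after it from binomial sampling, and the simulated Type-I error illustrates why small-sample accuracy has to be checked. All numbers and the test statistic are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def wald_type1_error(p_true=0.3, n_after=50, n_sim=20000, alpha=0.05):
    """Monte Carlo Type-I error of a crude Wald test comparing a binomial
    parameter estimated by negative binomial (geometric) sampling before the
    first success with the same parameter estimated by binomial sampling after it."""
    rejections = 0
    for _ in range(n_sim):
        x = rng.geometric(p_true)          # trials needed for the first success
        p1 = 1.0 / x                       # MLE under negative binomial sampling
        y = rng.binomial(n_after, p_true)  # successes after the first success
        p2 = y / n_after                   # MLE under binomial sampling
        var1 = p1**2 * (1 - p1)            # asymptotic variance of the geometric MLE
        var2 = p2 * (1 - p2) / n_after     # asymptotic variance of the binomial MLE
        se = np.sqrt(var1 + var2)
        if se > 0 and abs(p1 - p2) / se > 1.959964:
            rejections += 1
    return rejections / n_sim

print(wald_type1_error())  # typically drifts above 0.05 in small samples, motivating exact tests
```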
Fast Ordered Sampling of DNA Sequence Variants.
Greenberg, Anthony J
2018-05-04
Explosive growth in the amount of genomic data is matched by increasing power of consumer-grade computers. Even applications that require powerful servers can be quickly tested on desktop or laptop machines if we can generate representative samples from large data sets. I describe a fast and memory-efficient implementation of an on-line sampling method developed for tape drives 30 years ago. Focusing on genotype files, I test the performance of this technique on modern solid-state and spinning hard drives, and show that it performs well compared to a simple sampling scheme. I illustrate its utility by developing a method to quickly estimate genome-wide patterns of linkage disequilibrium (LD) decay with distance. I provide open-source software that samples loci from several variant format files, a separate program that performs LD decay estimates, and a C++ library that lets developers incorporate these methods into their own projects. Copyright © 2018 Greenberg.
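The abstract does not name the tape-drive-era algorithm; a standard single-pass, order-preserving scheme of that vintage is Knuth's selection sampling (Algorithm S), sketched below as an illustration. The paper's implementation, tuned for variant files and modern drives, may instead use a faster skip-based variant, so treat this only as the general idea.

```python
import random

def ordered_sample(stream, population_size, sample_size, seed=0):
    """Single-pass selection sampling: yields `sample_size` items from an iterable
    of known length `population_size`, in original order, with every subset equally
    likely. Illustrative sketch only; not the paper's exact implementation."""
    rng = random.Random(seed)
    needed = sample_size
    remaining = population_size
    for item in stream:
        # keep the current item with probability needed / remaining
        if rng.random() * remaining < needed:
            yield item
            needed -= 1
            if needed == 0:
                return
        remaining -= 1

# usage: draw 5 loci (by index) from a hypothetical file of 1,000,000 variant records
picked = list(ordered_sample(range(1_000_000), 1_000_000, 5))
print(picked)
```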
Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization
Glaser, Joshua I.; Zamft, Bradley M.; Church, George M.; Kording, Konrad P.
2015-01-01
Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, “puzzle imaging,” that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples. PMID:26192446
Kumar, B; Han, L-F; Wassenaar, L I; Klaus, P M; Kainz, G G; Hillegonds, D; Brummer, D; Ahmad, M; Belachew, D L; Araguás, L; Aggarwal, P
2016-12-01
Tritium (3H) in natural waters is a powerful tracer of hydrological processes, but its low concentrations require electrolytic enrichment before precise measurements can be made with a liquid scintillation counter. Here, we describe a newly developed, compact tritium enrichment unit (TEU) which can be used to enrich up to 2 L of a water sample. This allows a high enrichment factor (>100) for measuring low 3H contents of <0.05 TU. The TEU uses a small cell (250 mL) with automated re-filling and a CO2 bubbling technique to neutralize the high alkalinity of enriched samples. The enriched residual sample is retrieved from the cell under vacuum by cryogenic distillation at -20°C, and the tritium enrichment factor for each sample is accurately determined by measuring pre- and post-enrichment 2H concentrations with laser spectrometry. Copyright © 2016. Published by Elsevier Ltd.
Loukas, Christos-Moritz; Mowlem, Matthew C; Tsaloglou, Maria-Nefeli; Green, Nicolas G
2018-05-01
This paper presents a novel portable sample filtration/concentration system, designed for use on samples of microorganisms with very low cell concentrations and large volumes, such as water-borne parasites, pathogens associated with faecal matter, or toxic phytoplankton. The example application used for demonstration was the in-field collection and concentration of microalgae from seawater samples. This type of organism is responsible for Harmful Algal Blooms (HABs), an example of which is commonly referred to as "red tides", which are typically the result of rapid proliferation and high biomass accumulation of harmful microalgal species in the water column or at the sea surface. For instance, Karenia brevis red tides are the cause of aquatic organism mortality, and persistent blooms may cause widespread die-offs of populations of other organisms including vertebrates. In order to respond to, and adequately manage HABs, monitoring of toxic microalgae is required and large-volume sample concentrators would be a useful tool for in situ monitoring of HABs. The filtering system presented in this work enables consistent sample collection and concentration from 1 L to 1 mL in five minutes, allowing for subsequent benchtop sample extraction and analysis using molecular methods such as NASBA and IC-NASBA. The microalga Tetraselmis suecica was successfully detected at concentrations ranging from 2 × 10⁵ cells/L to 20 cells/L. Karenia brevis was also detected and quantified at concentrations between 10 cells/L and 10⁶ cells/L. Further analysis showed that the filter system, which concentrates cells from very large volumes with consequently more reliable sampling, produced samples that were more consistent than the independent non-filtered samples (benchtop controls), with a logarithmic dependency on increasing cell numbers. This filtering system provides simple, rapid, and consistent sample collection and concentration for further analysis, and could be applied to a wide range of different samples and target organisms in situations lacking laboratories. Copyright © 2018. Published by Elsevier B.V.
Wright, Mark H.; Tung, Chih-Wei; Zhao, Keyan; Reynolds, Andy; McCouch, Susan R.; Bustamante, Carlos D.
2010-01-01
Motivation: The development of new high-throughput genotyping products requires a significant investment in testing and training samples to evaluate and optimize the product before it can be used reliably on new samples. One reason for this is current methods for automated calling of genotypes are based on clustering approaches which require a large number of samples to be analyzed simultaneously, or an extensive training dataset to seed clusters. In systems where inbred samples are of primary interest, current clustering approaches perform poorly due to the inability to clearly identify a heterozygote cluster. Results: As part of the development of two custom single nucleotide polymorphism genotyping products for Oryza sativa (domestic rice), we have developed a new genotype calling algorithm called ‘ALCHEMY’ based on statistical modeling of the raw intensity data rather than modelless clustering. A novel feature of the model is the ability to estimate and incorporate inbreeding information on a per sample basis allowing accurate genotyping of both inbred and heterozygous samples even when analyzed simultaneously. Since clustering is not used explicitly, ALCHEMY performs well on small sample sizes with accuracy exceeding 99% with as few as 18 samples. Availability: ALCHEMY is available for both commercial and academic use free of charge and distributed under the GNU General Public License at http://alchemy.sourceforge.net/ Contact: mhw6@cornell.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20926420
Application of PLE for the determination of essential oil components from Thymus vulgaris L.
Dawidowicz, Andrzej L; Rado, Ewelina; Wianowska, Dorota; Mardarowicz, Marek; Gawdzik, Jan
2008-08-15
Essential oil plants, owing to their long presence in human history, their status in the culinary arts, and their use in medicine and perfume manufacture, are among the most frequently examined stock materials in scientific and industrial laboratories. Because a large number of freshly cut, dried or frozen plant samples require determination of essential oil amount and composition, a fast, safe, simple, efficient and highly automated sample preparation method is needed. Five sample preparation methods (steam distillation, extraction in the Soxhlet apparatus, supercritical fluid extraction, solid phase microextraction and pressurized liquid extraction) used for the isolation of aroma-active components from Thymus vulgaris L. are compared in the paper. The methods are mainly discussed with regard to the recovery of components that typically occur in essential oil isolated by steam distillation. According to the obtained data, PLE is the most efficient sample preparation method for determining the essential oil content of the thyme herb. Although co-extraction of non-volatile ingredients is the main drawback of this method, it is characterized by the highest yield of essential oil components and the shortest extraction time. Moreover, the relative peak amounts of essential oil components revealed by PLE are comparable with those obtained by steam distillation, which is recognized as the standard sample preparation method for the analysis of essential oils in aromatic plants.
Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen
2017-01-01
Assessing equivalence or similarity has drawn much attention recently as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive the power function and discuss sample size requirements for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its value from small to large. In extensive numerical studies, the authors demonstrate that the required sample size depends heavily on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data can easily result in a loss of power of up to 20%, depending on the value of the dispersion parameter.
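A simplified two-arm illustration of the underlying issue, not the paper's three-arm power formula: when counts are negative binomial but the test assumes Poisson variance, the error properties are misstated, which is why the required sample size depends so heavily on the dispersion parameter. The means, dispersions and arm size below are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(7)

def type1_error_poisson_wald(mu=2.0, k=1.0, n=100, n_sim=20000):
    """Type-I error of a two-sample Wald test for equal event rates that assumes
    Poisson variance (var = mean) when the data are actually negative binomial
    with dispersion k (var = mu + mu**2 / k)."""
    p = k / (k + mu)                          # numpy's negative binomial parametrization
    x = rng.negative_binomial(k, p, size=(n_sim, n)).mean(axis=1)
    y = rng.negative_binomial(k, p, size=(n_sim, n)).mean(axis=1)
    se_poisson = np.sqrt(x / n + y / n)       # standard error if the counts were Poisson
    z = np.abs(x - y) / np.maximum(se_poisson, 1e-12)
    return np.mean(z > 1.959964)

for k in (0.5, 1.0, 5.0, 50.0):
    # the realized error rate shrinks toward the nominal 0.05 as k grows (less overdispersion)
    print(k, type1_error_poisson_wald(k=k))
```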
A LOW-E MAGIC ANGLE SPINNING PROBE FOR BIOLOGICAL SOLID STATE NMR AT 750 MHz
McNeill, Seth A.; Gor’kov, Peter L.; Shetty, Kiran; Brey, William W.; Long, Joanna R.
2009-01-01
Crossed-coil NMR probes are a useful tool for reducing sample heating for biological solid state NMR. In a crossed-coil probe, the higher frequency 1H field, which is the primary source of sample heating in conventional probes, is produced by a separate low-inductance resonator. Because a smaller driving voltage is required, the electric field across the sample and the resultant heating are reduced. In this work we describe the development of a magic angle spinning (MAS) solid state NMR probe utilizing a dual resonator. This dual resonator approach, referred to as “Low-E,” was originally developed to reduce heating in samples of mechanically aligned membranes. The study of inherently dilute systems, such as proteins in lipid bilayers, via MAS techniques requires large sample volumes at high field to obtain spectra with adequate signal-to-noise ratio under physiologically relevant conditions. With the Low-E approach, we are able to obtain homogeneous and sufficiently strong radiofrequency fields for both 1H and 13C frequencies in a 4 mm probe with a 1H frequency of 750 MHz. The performance of the probe using windowless dipolar recoupling sequences is demonstrated on model compounds as well as membrane embedded peptides. PMID:19138870
Detecting Superior Face Recognition Skills in a Large Sample of Young British Adults
Bobak, Anna K.; Pampoulov, Philip; Bate, Sarah
2016-01-01
The Cambridge Face Memory Test Long Form (CFMT+) and Cambridge Face Perception Test (CFPT) are typically used to assess the face processing ability of individuals who believe they have superior face recognition skills. Previous large-scale studies have presented norms for the CFPT but not the CFMT+. However, previous research has also highlighted the necessity for establishing country-specific norms for these tests, indicating that norming data is required for both tests using young British adults. The current study addressed this issue in 254 British participants. In addition to providing the first norm for performance on the CFMT+ in any large sample, we also report the first UK specific cut-off for superior face recognition on the CFPT. Further analyses identified a small advantage for females on both tests, and only small associations between objective face recognition skills and self-report measures. A secondary aim of the study was to examine the relationship between trait or social anxiety and face processing ability, and no associations were noted. The implications of these findings for the classification of super-recognizers are discussed. PMID:27713706
Hexamethyldisilazane Removal with Mesoporous Materials Prepared from Calcium Fluoride Sludge.
Kao, Ching-Yang; Lin, Min-Fa; Nguyen, Nhat-Thien; Tsai, Hsiao-Hsin; Chang, Luh-Maan; Chen, Po-Han; Chang, Chang-Tang
2018-05-01
A large amount of calcium fluoride sludge is generated by the semiconductor industry every year. Treating VOCs with rotor concentrators and thermal oxidizers also requires a high level of fuel consumption. In this work, a mesoporous adsorbent prepared from calcium fluoride sludge was used for VOC treatment. The semiconductor industry employs HMDS to promote the adhesion of photoresist material to oxides; the silicon dioxide it forms blocks porous adsorbents. The adsorption of HMDS (hexamethyldisilazane) was tested with mesoporous silica materials synthesized from calcium fluoride sludge (CF-MCM). The resulting samples were characterized by XRD, XRF, FTIR, and N2 adsorption-desorption techniques. The prepared samples possessed high specific surface area, large pore volume and large pore diameter. TEM images showed that the crystal structure of CF-MCM was similar to that of Mobil Composition of Matter (MCM-41). The adsorption capacity of CF-MCM for HMDS was 40 and 80 mg g-1 under 100 and 500 ppm HMDS, respectively. The effects of operating parameters, such as contact time and mixture concentration, on the performance of CF-MCM are also discussed in this study.
Petrologic constraints on the origin of the Moon: Evidence from Apollo 14
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shervais, J.W.; Taylor, L.A.
1984-01-01
The Fra Mauro breccias at Apollo 14 contain distinctive suites of mare basalts and highland crustal rocks that contrast significantly with equivalent rocks from other Apollo sites. These contrasts imply lateral heterogeneity of the lunar crust and mantle on a regional scale. This heterogeneity may date back to the earliest stages of lunar accretion and differentiation. Current theories requiring a Moon-wide crust of Ferroan Anorthosite are based largely on samples from Apollo 16, where all but a few samples represent the FAN suite. However, at the nearside sites, FAN is either scarce (A-15) or virtually absent (A-12, A-14, A-17). It is suggested that the compositional variations could be accounted for by the accretion of a large mass of material (e.g., 0.1 to 0.2 moon masses) late in the crystallization history of the magma ocean. Besides adding fresh, primordial material, this would remelt a large pocket of crust and mantle, thereby allowing a second distillation to occur in the resulting magma sea.
Event-driven processing for hardware-efficient neural spike sorting
NASA Astrophysics Data System (ADS)
Liu, Yan; Pereira, João L.; Constandinou, Timothy G.
2018-02-01
Objective. The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can provide here a new efficient means for hardware implementation that is completely activity dependant. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented in a low power FPGA platform to demonstrate its hardware viability. Main results. It is observed that considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining the signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting accuracies can be achieved with comparable or better accuracy than reference methods whilst also requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
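A hedged sketch of the level-crossing idea: emit an event (timestamp, direction) whenever the signal crosses one of a set of uniformly spaced levels, so that the data rate scales with activity rather than with a fixed sampling clock. The resolution, full scale and test waveform below are assumptions; the paper's encoder operates in continuous time on hardware rather than on a pre-sampled trace.

```python
import numpy as np

def level_crossing_encode(signal, fs, n_bits=4, full_scale=1.0):
    """Convert a uniformly sampled waveform into level-crossing events.
    Each event is (time_s, +1/-1) for an upward/downward crossing of one of
    2**n_bits uniformly spaced levels. Illustrative approximation only."""
    levels = 2 ** n_bits
    delta = 2.0 * full_scale / levels            # quantization step
    q = np.floor((signal + full_scale) / delta)  # level index of each sample
    events = []
    for i in range(1, len(q)):
        step = int(q[i] - q[i - 1])
        direction = 1 if step > 0 else -1
        for _ in range(abs(step)):               # one event per level crossed
            events.append((i / fs, direction))
    return events

# usage: encode a noisy spike-like burst sampled at 24 kHz
t = np.arange(0, 0.01, 1 / 24000)
x = 0.6 * np.exp(-((t - 0.005) / 0.0005) ** 2) + 0.02 * np.random.randn(t.size)
print(len(level_crossing_encode(x, 24000)))      # event count tracks signal activity
```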
Improved argument-FFT frequency offset estimation for QPSK coherent optical Systems
NASA Astrophysics Data System (ADS)
Han, Jilong; Li, Wei; Yuan, Zhilin; Li, Haitao; Huang, Liyan; Hu, Qianggao
2016-02-01
A frequency offset estimation (FOE) algorithm based on the fast Fourier transform (FFT) of the signal's argument is investigated, which does not require removing the modulated data phase. In this paper, we analyze the flaw of the argument-FFT algorithm and propose a combined FOE algorithm, in which the absolute value of the frequency offset (FO) is accurately calculated by the argument-FFT algorithm with a relatively large number of samples and the sign of the FO is determined by an FFT-based interpolation discrete Fourier transform (DFT) algorithm with a relatively small number of samples. Compared with previous algorithms based on argument-FFT, the proposed one has low complexity and can still work effectively with relatively few samples.
Sampling the sound field in auditoria using large natural-scale array measurements.
Witew, Ingo B; Vorländer, Michael; Xiang, Ning
2017-03-01
Suitable data for spatial wave field analyses in concert halls need to satisfy the sampling theorem and hence require densely spaced measurement positions over extended regions. The described measurement apparatus is capable of automatically sampling the sound field in auditoria over a surface of 5.30 m × 8.00 m at any specified resolution. In addition to discussing design features, a case study based on measured impulse responses is presented. The experimental data allow wave field animations demonstrating how sound propagating at grazing incidence over theater seating is scattered from rows of chairs (the seat-dip effect). The visualized data of reflections and scattering from an auditorium's boundaries give insights and opportunities for advanced analyses.
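A back-of-envelope check of what "satisfy the sampling theorem" implies for the 5.30 m × 8.00 m surface: the spacing must not exceed half the shortest wavelength of interest. The upper analysis frequency (2 kHz) and speed of sound used below are assumptions, not values from the abstract.

```python
import math

# Spatial Nyquist requirement for the measurement surface (assumed f_max and c).
c, f_max = 343.0, 2000.0
d_max = c / (2 * f_max)                 # max spacing = half the shortest wavelength
nx = math.ceil(5.30 / d_max) + 1        # positions along the 5.30 m side
ny = math.ceil(8.00 / d_max) + 1        # positions along the 8.00 m side
print(round(d_max, 3), nx * ny)         # ~0.086 m spacing, roughly 6,000 measurement points
```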
Enhanced Conformational Sampling Using Replica Exchange with Collective-Variable Tempering.
Gil-Ley, Alejandro; Bussi, Giovanni
2015-03-10
The computational study of conformational transitions in RNA and proteins with atomistic molecular dynamics often requires suitable enhanced sampling techniques. We here introduce a novel method where concurrent metadynamics are integrated in a Hamiltonian replica-exchange scheme. The ladder of replicas is built with different strengths of the bias potential exploiting the tunability of well-tempered metadynamics. Using this method, free-energy barriers of individual collective variables are significantly reduced compared with simple force-field scaling. The introduced methodology is flexible and allows adaptive bias potentials to be self-consistently constructed for a large number of simple collective variables, such as distances and dihedral angles. The method is tested on alanine dipeptide and applied to the difficult problem of conformational sampling in a tetranucleotide.
Nanoengineered capsules for selective SERS analysis of biological samples
NASA Astrophysics Data System (ADS)
You, Yil-Hwan; Schechinger, Monika; Locke, Andrea; Coté, Gerard; McShane, Mike
2018-02-01
Metal nanoparticles conjugated with DNA oligomers have been intensively studied for a variety of applications, including optical diagnostics. Assays based on aggregation of DNA-coated particles in proportion to the concentration of target analyte have not been widely adopted for clinical analysis, however, largely due to the nonspecific responses observed in complex biofluids. While sample pre-preparation such as dialysis is helpful to enable selective sensing, here we sought to prove that assay encapsulation in hollow microcapsules could remove this requirement and thereby facilitate more rapid analysis on complex samples. Gold nanoparticle-based assays were incorporated into capsules comprising polyelectrolyte multilayer (PEMs), and the response to small molecule targets and larger proteins were compared. Gold nanoparticles were able to selectively sense small Raman dyes (Rhodamine 6G) in the presence of large protein molecules (BSA) when encapsulated. A ratiometric based microRNA-17 sensing assay exhibited drastic reduction in response after encapsulation, with statistically-significant relative Raman intensity changes only at a microRNA-17 concentration of 10 nM compared to a range of 0-500 nM for the corresponding solution-phase response.
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
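The two designs referenced above map onto standard textbook calculations; a hedged sketch follows, with hypothetical inputs, since the 1977 study's exact expressions and numbers are not reproduced in the abstract.

```python
import math

def srs_sample_size(mean, sd, rel_error=0.10, z=1.96):
    """Simple random sampling: n needed so the estimated mean lies within
    rel_error*mean with ~95% confidence (standard formula, not the study's exact target)."""
    return math.ceil((z * sd / (rel_error * mean)) ** 2)

def neyman_allocation(total_n, strata_sd, strata_weights):
    """Optimal (Neyman) allocation of a prespecified total_n across strata,
    proportional to stratum weight times stratum standard deviation."""
    products = [w * s for w, s in zip(strata_weights, strata_sd)]
    scale = total_n / sum(products)
    return [round(p * scale) for p in products]

# hypothetical soil-moisture numbers for illustration only
print(srs_sample_size(mean=0.22, sd=0.06))                      # about 29 samples
print(neyman_allocation(30, [0.08, 0.05, 0.03], [1/3, 1/3, 1/3]))  # e.g. [15, 9, 6] across depth strata
```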
Cuadros-Inostroza, Alvaro; Caldana, Camila; Redestig, Henning; Kusano, Miyako; Lisec, Jan; Peña-Cortés, Hugo; Willmitzer, Lothar; Hannah, Matthew A
2009-12-16
Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data.
2009-01-01
Background Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. Results We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. Conclusions TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data. PMID:20015393
Use of COTS Batteries on ISS and Shuttle
NASA Technical Reports Server (NTRS)
Jeevarajan, Judith A.
2004-01-01
This presentation focuses on COTS battery testing for energy content, toxicity, hazards, failure modes, and controls for different battery chemistries. It also discusses the current program requirements, the challenges with COTS batteries in manned vehicles, the COTS methodology, and JSC test details, and gives a list of incidents reported by consumer protection safety commissions. The battery test process involved testing new batteries for engineering certification, battery qualification, and flight acceptance at the cell and battery level, covering environmental, performance, and abuse testing. The conclusions and recommendations were that high risk is undertaken with the use of COTS batteries, that hazard control verification is required to allow the use of these batteries on manned space flights, that failures during use cannot be understood if different failure scenarios are not tested on the ground, and that testing is performed on small sample numbers due to restrictions on cost and time. They recommend testing of large sample sizes to gain more confidence in the operation of the hazard controls.
Comparison of attrition test methods: ASTM standard fluidized bed vs jet cup
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, R.; Goodwin, J.G. Jr.; Jothimurugesan, K.
2000-05-01
Attrition resistance is one of the key design parameters for catalysts used in fluidized-bed and slurry phase types of reactors. The ASTM fluidized-bed test has been one of the most commonly used attrition resistance evaluation methods; however, it requires the use of 50 g samples--a large amount for catalyst development studies. Recently a test using the jet cup requiring only 5 g samples has been proposed. In the present study, two series of spray-dried iron catalysts were evaluated using both the ASTM fluidized-bed test and a test based on the jet cup to determine their comparability. It is shown that the two tests give comparable results. This paper, by reporting a comparison of the jet-cup test with the ASTM standard, provides a basis for utilizing the more efficient jet cup with confidence in catalyst attrition studies.
Gebauer, Roman; Řepka, Radomír; Šmudla, Radek; Mamoňová, Miroslava; Ďurkovič, Jaroslav
2016-01-01
Although spine variation within cacti species or populations is assumed to be large, the minimum sample size of different spine anatomical and morphological traits required for species description is less studied. There are studies where only 2 spines were used for taxonomical comparison among species. Therefore, the spine structure variation within areoles and individuals of one population of Gymnocalycium kieslingii subsp. castaneum (Ferrari) Slaba was analyzed. Fifteen plants were selected, and from each plant one areole from the basal, middle and upper part of the plant body was sampled. Scanning electron microscopy was used for spine surface description and light microscopy for measurements of spine width, thickness, cross-section area, fiber diameter and fiber cell wall thickness. The spine surface was more visible and less damaged in the upper part of the plant body than in the basal part. Large spine and fiber differences were found between the upper and lower parts of the plant body, but also within single areoles. In general, the examined traits in the upper part had values 8-17% higher than those in the lower parts. The variation of spine and fiber traits within areoles was lower than the differences between individuals. The minimum sample size was largely influenced by the studied spine and fiber traits, ranging from 1 to 70 spines. The results provide pioneering information useful for spine sample collection in the field for taxonomical, biomechanical and structural studies. Nevertheless, similar studies should be carried out for other cacti species to make generalizations. The large spine and fiber variation within areoles observed in our study indicates a very complex spine morphogenesis.
Gebauer, Roman; Řepka, Radomír; Šmudla, Radek; Mamoňová, Miroslava; Ďurkovič, Jaroslav
2016-01-01
Although spine variation within cacti species or populations is assumed to be large, the minimum sample size of different spine anatomical and morphological traits required for species description is less studied. There are studies where only 2 spines were used for taxonomical comparison among species. Therefore, the spine structure variation within areoles and individuals of one population of Gymnocalycium kieslingii subsp. castaneum (Ferrari) Slaba was analyzed. Fifteen plants were selected, and from each plant one areole from the basal, middle and upper part of the plant body was sampled. Scanning electron microscopy was used for spine surface description and light microscopy for measurements of spine width, thickness, cross-section area, fiber diameter and fiber cell wall thickness. The spine surface was more visible and less damaged in the upper part of the plant body than in the basal part. Large spine and fiber differences were found between the upper and lower parts of the plant body, but also within single areoles. In general, the examined traits in the upper part had values 8–17% higher than those in the lower parts. The variation of spine and fiber traits within areoles was lower than the differences between individuals. The minimum sample size was largely influenced by the studied spine and fiber traits, ranging from 1 to 70 spines. The results provide pioneering information useful for spine sample collection in the field for taxonomical, biomechanical and structural studies. Nevertheless, similar studies should be carried out for other cacti species to make generalizations. The large spine and fiber variation within areoles observed in our study indicates a very complex spine morphogenesis. PMID:27698579
Recognition Using Hybrid Classifiers.
Osadchy, Margarita; Keren, Daniel; Raviv, Dolev
2016-04-01
A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply.
Radiometric 81Kr dating identifies 120,000-year-old ice at Taylor Glacier, Antarctica
Buizert, Christo; Baggenstos, Daniel; Jiang, Wei; Purtschert, Roland; Petrenko, Vasilii V.; Lu, Zheng-Tian; Müller, Peter; Kuhl, Tanner; Lee, James; Severinghaus, Jeffrey P.; Brook, Edward J.
2014-01-01
We present successful 81Kr-Kr radiometric dating of ancient polar ice. Krypton was extracted from the air bubbles in four ∼350-kg polar ice samples from Taylor Glacier in the McMurdo Dry Valleys, Antarctica, and dated using Atom Trap Trace Analysis (ATTA). The 81Kr radiometric ages agree with independent age estimates obtained from stratigraphic dating techniques with a mean absolute age offset of 6 ± 2.5 ka. Our experimental methods and sampling strategy are validated by (i) 85Kr and 39Ar analyses that show the samples to be free of modern air contamination and (ii) air content measurements that show the ice did not experience gas loss. We estimate the error in the 81Kr ages due to past geomagnetic variability to be below 3 ka. We show that ice from the previous interglacial period (Marine Isotope Stage 5e, 130–115 ka before present) can be found in abundance near the surface of Taylor Glacier. Our study paves the way for reliable radiometric dating of ancient ice in blue ice areas and margin sites where large samples are available, greatly enhancing their scientific value as archives of old ice and meteorites. At present, ATTA 81Kr analysis requires a 40–80-kg ice sample; as sample requirements continue to decrease, 81Kr dating of ice cores is a future possibility. PMID:24753606
Radiometric 81Kr dating identifies 120,000-year-old ice at Taylor Glacier, Antarctica.
Buizert, Christo; Baggenstos, Daniel; Jiang, Wei; Purtschert, Roland; Petrenko, Vasilii V; Lu, Zheng-Tian; Müller, Peter; Kuhl, Tanner; Lee, James; Severinghaus, Jeffrey P; Brook, Edward J
2014-05-13
We present successful (81)Kr-Kr radiometric dating of ancient polar ice. Krypton was extracted from the air bubbles in four ∼350-kg polar ice samples from Taylor Glacier in the McMurdo Dry Valleys, Antarctica, and dated using Atom Trap Trace Analysis (ATTA). The (81)Kr radiometric ages agree with independent age estimates obtained from stratigraphic dating techniques with a mean absolute age offset of 6 ± 2.5 ka. Our experimental methods and sampling strategy are validated by (i) (85)Kr and (39)Ar analyses that show the samples to be free of modern air contamination and (ii) air content measurements that show the ice did not experience gas loss. We estimate the error in the (81)Kr ages due to past geomagnetic variability to be below 3 ka. We show that ice from the previous interglacial period (Marine Isotope Stage 5e, 130-115 ka before present) can be found in abundance near the surface of Taylor Glacier. Our study paves the way for reliable radiometric dating of ancient ice in blue ice areas and margin sites where large samples are available, greatly enhancing their scientific value as archives of old ice and meteorites. At present, ATTA (81)Kr analysis requires a 40-80-kg ice sample; as sample requirements continue to decrease, (81)Kr dating of ice cores is a future possibility.
Image analysis of representative food structures: application of the bootstrap method.
Ramírez, Cristian; Germain, Juan C; Aguilera, José M
2009-08-01
Images (for example, photomicrographs) are routinely used as qualitative evidence of the microstructure of foods. In quantitative image analysis it is important to estimate the area (or volume) to be sampled, the field of view, and the resolution. The bootstrap method is proposed to estimate the size of the sampling area as a function of the coefficient of variation (CV(Bn)) and standard error (SE(Bn)) of the bootstrap taking sub-areas of different sizes. The bootstrap method was applied to simulated and real structures (apple tissue). For simulated structures, 10 computer-generated images were constructed containing 225 black circles (elements) and different coefficients of variation (CV(image)). For apple tissue, 8 images containing cellular cavities with different CV(image) were analyzed. Results confirmed that for simulated and real structures, increasing the size of the sampling area decreased the CV(Bn) and SE(Bn). Furthermore, there was a linear relationship between CV(image) and CV(Bn). For example, to obtain a CV(Bn) = 0.10 in an image with CV(image) = 0.60, a sampling area of 400 x 400 pixels (11% of the whole image) was required, whereas if CV(image) = 1.46, a sampling area of 1000 x 1000 pixels (69% of the whole image) became necessary. This suggests that a large dispersion of element sizes in an image requires increasingly larger sampling areas or a larger number of images.
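A minimal sketch of the bootstrap procedure as described above, assuming a binary structure image and using the element area fraction as the measured statistic (the original work measured food-structure features, and its window placement details are not given here):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_cv_se(image, window, n_boot=500):
    """Bootstrap the area fraction of 'elements' (nonzero pixels) measured in
    randomly placed square sub-areas of side `window`; returns (CV_Bn, SE_Bn)."""
    h, w = image.shape
    stats = np.empty(n_boot)
    for b in range(n_boot):
        r = rng.integers(0, h - window + 1)
        c = rng.integers(0, w - window + 1)
        stats[b] = image[r:r + window, c:c + window].mean()
    sd = stats.std(ddof=1)
    return sd / stats.mean(), sd

# synthetic structure: 225 random circles on a 1200 x 1200 binary image
img = np.zeros((1200, 1200))
ys, xs = np.ogrid[:1200, :1200]
for cy, cx in rng.integers(30, 1170, size=(225, 2)):
    img[(ys - cy) ** 2 + (xs - cx) ** 2 <= 15 ** 2] = 1
for win in (200, 400, 800):
    print(win, bootstrap_cv_se(img, win))   # CV_Bn and SE_Bn shrink as the sub-area grows
```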
The Indigo V Indian Ocean Expedition: a prototype for citizen microbial oceanography
NASA Astrophysics Data System (ADS)
Lauro, Federico; Senstius, Jacob; Cullen, Jay; Lauro, Rachelle; Neches, Russell; Grzymski, Joseph
2014-05-01
Microbial Oceanography has long been an extremely expensive discipline, requiring ship time for sample collection and thereby economically constraining the number of samples collected. This is especially true for under-sampled water bodies such as the Indian Ocean. Specialised scientific equipment only adds to the costs. Moreover, long term monitoring of microbial communities and large scale modelling of global biogeochemical cycles requires the collection of high-density data both temporally and spatially in a cost-effective way. Thousands of private ocean-going vessels are cruising around the world's oceans every day. We believe that a combination of new technologies, appropriate laboratory protocols and strategic operational partnerships will allow researchers to broaden the scope of participation in basic oceanographic research. This will be achieved by equipping sailing vessels with small, satcom-equipped sampling devices, user-friendly collection techniques and a 'pre-addressed-stamped-envelope' to send in the samples for analysis. We aim to prove that 'bigger' is not necessarily 'better' and the key to greater understanding of the world's oceans is to forge the way to easier and cheaper sample acquisition. The ultimate goal of the Indigo V Expedition is to create a working blueprint for 'citizen microbial oceanography'. We will present the preliminary outcomes of the first Indigo V expedition, from Cape Town to Singapore, highlighting the challenges and opportunities of such endeavours.
Pavlou, Andrew T.; Ji, Wei; Brown, Forrest B.
2016-01-23
Here, a proper treatment of thermal neutron scattering requires accounting for chemical binding through a scattering law S(α,β,T). Monte Carlo codes sample the secondary neutron energy and angle after a thermal scattering event from probability tables generated from S(α,β,T) tables at discrete temperatures, requiring a large amount of data for multiscale and multiphysics problems with detailed temperature gradients. We have previously developed a method to handle this temperature dependence on-the-fly during the Monte Carlo random walk using polynomial expansions in 1/T to directly sample the secondary energy and angle. In this paper, the on-the-fly method is implemented into MCNP6 and tested in both graphite-moderated and light water-moderated systems. The on-the-fly method is compared with the thermal ACE libraries that come standard with MCNP6, yielding good agreement with integral reactor quantities like k-eigenvalue and differential quantities like single-scatter secondary energy and angle distributions. The simulation runtimes are comparable between the two methods (on the order of 5–15% difference for the problems tested) and the on-the-fly fit coefficients only require 5–15 MB of total data storage.
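A hedged sketch of the core idea of the on-the-fly approach: a quantity tabulated at a few discrete temperatures is fitted with a low-order polynomial in 1/T, so it can be evaluated at any temperature encountered during the random walk instead of storing full tables. The tabulated values, the fitted quantity and the polynomial order below are placeholders, not data from the paper.

```python
import numpy as np

# Quantity tabulated at discrete temperatures (placeholder values), e.g. a CDF
# abscissa used to sample the secondary energy after a thermal scattering event.
T_grid = np.array([296.0, 400.0, 500.0, 600.0, 800.0, 1000.0])   # K
q_grid = np.array([1.00, 1.21, 1.38, 1.52, 1.77, 1.98])           # arbitrary units

# Fit a low-order polynomial in 1/T, the expansion variable used by the method.
coeffs = np.polyfit(1.0 / T_grid, q_grid, deg=3)

def q_on_the_fly(T):
    """Evaluate the fitted quantity at an arbitrary temperature during the random walk."""
    return np.polyval(coeffs, 1.0 / T)

print(q_on_the_fly(650.0))   # interpolated value between the 600 K and 800 K tabulations
```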
Thermographic Imaging of Defects in Anisotropic Composites
NASA Technical Reports Server (NTRS)
Plotnikov, Y. A.; Winfree, W. P.
2000-01-01
Composite materials are of increasing interest to the aerospace industry as a result of their weight versus performance characteristics. One of the disadvantages of composites is the high cost of fabrication and of post-fabrication inspection with conventional ultrasonic scanning systems. The high cost of inspection is driven by the need for scanning systems that can follow large curved surfaces. Additionally, either large water tanks or water squirters are required to couple the ultrasound into the part. Thermographic techniques offer significant advantages over conventional ultrasonics by not requiring physical coupling between the part and the sensor. A thermographic system can easily inspect large curved surfaces without requiring a surface-following scanner. However, implementation of Thermal Nondestructive Evaluation (TNDE) for flaw detection in composite materials and structures requires determining its limits. Advanced algorithms have been developed to enable locating and sizing defects in carbon fiber reinforced plastic (CFRP). Thermal tomography is a very promising method for visualizing the size and location of defects in materials such as CFRP. However, further investigations are required to determine its capabilities for inspection of thick composites. In the present work we have studied the influence of anisotropy on the reconstructed image of a defect generated by an inversion technique. The composite material is considered homogeneous with macroscopic properties: thermal conductivity K, specific heat c, and density rho. The simulation process involves two sequential steps: solving the three-dimensional transient heat diffusion equation for a sample with a defect, and then estimating the defect location and size (the inverse problem) from the surface spatial and temporal thermal distributions calculated from the simulations.
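A greatly simplified stand-in for the forward step described above (the paper solves the full 3D anisotropic problem): a 1D through-thickness explicit finite-difference solution of the transient heat diffusion equation after a surface flash, where a delamination is approximated as an adiabatic wall at the defect depth, so the surface above it stays warmer. The diffusivity, thickness and defect depth are placeholder values.

```python
import numpy as np

def surface_temperature(depth_m, alpha, n_cells=60, t_end=5.0):
    """Surface temperature at t_end after an instantaneous unit surface flash,
    for a slab with an adiabatic back wall at depth_m (explicit 1D finite differences)."""
    dx = depth_m / n_cells
    dt = 0.4 * dx**2 / alpha                     # within the explicit stability limit
    T = np.zeros(n_cells)
    T[0] = 1.0 / dx                              # unit areal heat pulse deposited at the surface
    for _ in range(int(t_end / dt)):
        Tp = np.pad(T, 1, mode="edge")           # zero-flux (adiabatic) boundaries
        T = T + alpha * dt / dx**2 * (Tp[2:] - 2 * T + Tp[:-2])
    return T[0]

alpha_z = 4.2e-7   # m^2/s, placeholder through-thickness diffusivity for CFRP
sound = surface_temperature(4e-3, alpha_z)       # full 4 mm thickness
flawed = surface_temperature(1e-3, alpha_z)      # delamination modeled at 1 mm depth
print(sound, flawed, flawed - sound)             # positive surface contrast over the defect
```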
Development of a real-time microchip PCR system for portable plant disease diagnosis.
Koo, Chiwan; Malapi-Wight, Martha; Kim, Hyun Soo; Cifci, Osman S; Vaughn-Diaz, Vanessa L; Ma, Bo; Kim, Sungman; Abdel-Raziq, Haron; Ong, Kevin; Jo, Young-Ki; Gross, Dennis C; Shim, Won-Bo; Han, Arum
2013-01-01
Rapid and accurate detection of plant pathogens in the field is crucial to prevent the proliferation of infected crops. The polymerase chain reaction (PCR) is the most reliable and accepted method for plant pathogen diagnosis; however, current conventional PCR machines are not portable and require additional post-processing steps to detect the amplified DNA (amplicon) of pathogens. Real-time PCR can directly quantify the amplicon during DNA amplification without the need for post-processing and is thus more suitable for field operations; however, it still takes time and requires large instruments that are costly and not portable. Microchip PCR systems have emerged in the past decade to miniaturize conventional PCR systems and to reduce operation time and cost. Real-time microchip PCR systems have also emerged, but unfortunately all reported portable real-time microchip PCR systems require various auxiliary instruments. Here we present a stand-alone real-time microchip PCR system composed of a PCR reaction chamber microchip with an integrated thin-film heater, a compact fluorescence detector to detect amplified DNA, a microcontroller to control the entire thermocycling operation with data acquisition capability, and a battery. The entire system is 25 × 16 × 8 cm3 in size and 843 g in weight. The disposable microchip requires only an 8-µl sample volume, and a single PCR run consumes 110 mAh of power. A DNA extraction protocol, notably without the use of liquid nitrogen, chemicals, or other large lab equipment, was developed for field operations. The developed real-time microchip PCR system and the DNA extraction protocol were used to successfully detect six different fungal and bacterial plant pathogens with a 100% success rate down to a detection limit of 5 ng/8 µl sample.
Development of a Real-Time Microchip PCR System for Portable Plant Disease Diagnosis
Kim, Hyun Soo; Cifci, Osman S.; Vaughn-Diaz, Vanessa L.; Ma, Bo; Kim, Sungman; Abdel-Raziq, Haron; Ong, Kevin; Jo, Young-Ki; Gross, Dennis C.; Shim, Won-Bo; Han, Arum
2013-01-01
Rapid and accurate detection of plant pathogens in the field is crucial to prevent the proliferation of infected crops. The polymerase chain reaction (PCR) is the most reliable and accepted method for plant pathogen diagnosis; however, current conventional PCR machines are not portable and require additional post-processing steps to detect the amplified DNA (amplicon) of pathogens. Real-time PCR can directly quantify the amplicon during DNA amplification without the need for post-processing and is thus more suitable for field operations; however, it still takes time and requires large instruments that are costly and not portable. Microchip PCR systems have emerged in the past decade to miniaturize conventional PCR systems and to reduce operation time and cost. Real-time microchip PCR systems have also emerged, but unfortunately all reported portable real-time microchip PCR systems require various auxiliary instruments. Here we present a stand-alone real-time microchip PCR system composed of a PCR reaction chamber microchip with an integrated thin-film heater, a compact fluorescence detector to detect amplified DNA, a microcontroller to control the entire thermocycling operation with data acquisition capability, and a battery. The entire system is 25×16×8 cm3 in size and 843 g in weight. The disposable microchip requires only an 8-µl sample volume, and a single PCR run consumes 110 mAh of power. A DNA extraction protocol, notably without the use of liquid nitrogen, chemicals, or other large lab equipment, was developed for field operations. The developed real-time microchip PCR system and the DNA extraction protocol were used to successfully detect six different fungal and bacterial plant pathogens with a 100% success rate down to a detection limit of 5 ng/8 µl sample. PMID:24349341
Theory of using magnetic deflections to combine charged particle beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steckbeck, Mackenzie K.; Doyle, Barney Lee
2014-09-01
Several radiation effects projects in the Ion Beam Lab (IBL) have recently required two disparate charged particle beams to simultaneously strike a single sample through a single port of the target chamber. Because these beams have vastly different mass–energy products (MEP), the low-MEP beam requires a large angle of deflection toward the sample by a bending electromagnet. A second electromagnet located further upstream provides a means to compensate for the small angle deflection experienced by the high-MEP beam during its path through the bending magnet. This paper derives the equations used to select the magnetic fields required by these two magnets to achieve uniting both beams at the target sample. A simple result was obtained when the separation of the two magnets was equivalent to the distance from the bending magnet to the sample, and the equation is given by B_s = (1/2)(r_c/r_s) B_c, where B_s and B_c are the magnetic fields in the steering and bending magnets and r_c/r_s is the ratio of the radius of the bending magnet to that of the steering magnet. This result is not dependent upon the parameters of the high-MEP beam, i.e., energy, mass, and charge state. Therefore, once the field of the bending magnet is set for the low-MEP beam and the field in the steering magnet is set as indicated in the equation, the trajectory path of any high-MEP beam will be directed into the sample.
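As a quick illustration of the stated relation B_s = (1/2)(r_c/r_s) B_c, the short Python sketch below computes the steering-magnet field from an assumed bending field and assumed magnet radii; the numerical values are invented, not taken from the paper.

    # Steering-magnet field implied by the quoted relation, valid when the magnet
    # separation equals the bending-magnet-to-sample distance. Values are illustrative.
    def steering_field(b_c_tesla, r_c_m, r_s_m):
        return 0.5 * (r_c_m / r_s_m) * b_c_tesla

    B_c, r_c, r_s = 0.8, 0.5, 1.2   # bending field (T), bending radius (m), steering radius (m)
    print(f"B_s = {steering_field(B_c, r_c, r_s):.3f} T")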
Cordero, Chiara; Kiefl, Johannes; Schieberle, Peter; Reichenbach, Stephen E; Bicchi, Carlo
2015-01-01
Modern omics disciplines dealing with food flavor focus the analytical efforts on the elucidation of sensory-active compounds, including all possible stimuli of multimodal perception (aroma, taste, texture, etc.) by means of a comprehensive, integrated treatment of sample constituents, such as physicochemical properties, concentration in the matrix, and sensory properties (odor/taste quality, perception threshold). Such analyses require detailed profiling of known bioactive components as well as advanced fingerprinting techniques to catalog sample constituents comprehensively, quantitatively, and comparably across samples. Multidimensional analytical platforms support comprehensive investigations required for flavor analysis by combining information on analytes' identities, physicochemical behaviors (volatility, polarity, partition coefficient, and solubility), concentration, and odor quality. Unlike other omics, flavor metabolomics and sensomics include the final output of the biological phenomenon (i.e., sensory perceptions) as an additional analytical dimension, which is specifically and exclusively triggered by the chemicals analyzed. However, advanced omics platforms, which are multidimensional by definition, pose challenging issues not only in terms of coupling with detection systems and sample preparation, but also in terms of data elaboration and processing. The large number of variables collected during each analytical run provides a high level of information, but requires appropriate strategies to exploit fully this potential. This review focuses on advances in comprehensive two-dimensional gas chromatography and analytical platforms combining two-dimensional gas chromatography with olfactometry, chemometrics, and quantitative assays for food sensory analysis to assess the quality of a given product. We review instrumental advances and couplings, automation in sample preparation, data elaboration, and a selection of applications.
Paper-based Devices for Isolation and Characterization of Extracellular Vesicles
Chen, Chihchen; Lin, Bo-Ren; Hsu, Min-Yen; Cheng, Chao-Min
2015-01-01
Extracellular vesicles (EVs), membranous particles released from various types of cells, hold a great potential for clinical applications. They contain nucleic acid and protein cargo and are increasingly recognized as a means of intercellular communication utilized by both eukaryote and prokaryote cells. However, due to their small size, current protocols for isolation of EVs are often time consuming, cumbersome, and require large sample volumes and expensive equipment, such as an ultracentrifuge. To address these limitations, we developed a paper-based immunoaffinity platform for separating subgroups of EVs that is easy, efficient, and requires sample volumes as low as 10 μl. Biological samples can be pipetted directly onto paper test zones that have been chemically modified with capture molecules that have high affinity to specific EV surface markers. We validate the assay by using scanning electron microscopy (SEM), paper-based enzyme-linked immunosorbent assays (P-ELISA), and transcriptome analysis. These paper-based devices will enable the study of EVs in the clinic and the research setting to help advance our understanding of EV functions in health and disease. PMID:25867034
Bayesian focalization: quantifying source localization with environmental uncertainty.
Dosso, Stan E; Wilmut, Michael J
2007-05-01
This paper applies a Bayesian formulation to study ocean acoustic source localization as a function of uncertainty in environmental properties (water column and seabed) and of data information content [signal-to-noise ratio (SNR) and number of frequencies]. The approach follows that of the optimum uncertain field processor [A. M. Richardson and L. W. Nolte, J. Acoust. Soc. Am. 89, 2280-2284 (1991)], in that localization uncertainty is quantified by joint marginal probability distributions for source range and depth integrated over uncertain environmental properties. The integration is carried out here using Metropolis Gibbs' sampling for environmental parameters and heat-bath Gibbs' sampling for source location to provide efficient sampling over complicated parameter spaces. The approach is applied to acoustic data from a shallow-water site in the Mediterranean Sea where previous geoacoustic studies have been carried out. It is found that reliable localization requires a sufficient combination of prior (environmental) information and data information. For example, sources can be localized reliably for single-frequency data at low SNR (-3 dB) only with small environmental uncertainties, whereas successful localization with large environmental uncertainties requires higher SNR and/or multifrequency data.
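The toy Python sketch below illustrates the general idea of marginalizing source localization over environmental uncertainty by Monte Carlo integration: environmental parameters are drawn from an assumed prior and a stand-in likelihood is averaged on a range-depth grid. It is not the authors' Gibbs-sampling implementation and uses no real acoustic propagation model.

    # Toy marginalization over environmental uncertainty (stand-in likelihood, assumed prior).
    import numpy as np

    rng = np.random.default_rng(0)
    ranges = np.linspace(1.0, 10.0, 60)       # candidate source ranges (km)
    depths = np.linspace(5.0, 100.0, 40)      # candidate source depths (m)
    R, Z = np.meshgrid(ranges, depths)
    true_r, true_z = 6.0, 40.0                # "true" source used by the stand-in likelihood

    def likelihood(r, z, env):
        # Stand-in for an acoustic mismatch function; env shifts the apparent range.
        return np.exp(-((r - true_r - 0.1 * env) ** 2 + ((z - true_z) / 20.0) ** 2))

    post = np.zeros_like(R)
    for env in rng.normal(0.0, 1.0, size=500):    # draws from an assumed environmental prior
        post += likelihood(R, Z, env)
    post /= post.sum()                             # joint marginal posterior over range and depth
    zi, ri = np.unravel_index(post.argmax(), post.shape)
    print(f"MAP estimate: range ~ {ranges[ri]:.1f} km, depth ~ {depths[zi]:.0f} m")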
Chemical pump study for Pioneer Venus program
NASA Technical Reports Server (NTRS)
Rotheram, M.
1973-01-01
Two chemical pumps were designed for the Pioneer Venus large probe mass spectrometer. Factors involved in the design selection are reviewed. One pump is designed to process a sample of the Venus atmosphere to remove the major component, carbon dioxide, so that the minor, inert components may be measured with greater sensitivity. The other pump is designed to promote flow of atmospheric gas through a pressure reduction inlet system. This pump, located downstream from the mass spectrometer sampling point, provides the pressure differential required for flow through the inlet system. Both pumps utilize the reaction of carbon dioxide with lithium hydroxide. The available data for this reaction were reviewed with respect to the proposed applications, and certain deficiencies in the reaction rate data at higher carbon dioxide pressures were noted. The chemical pump designed for the inert gas experiment has an estimated volume of 30 cu cm and a weight of 80 grams, exclusive of the four valves required for its operation. The chemical pump for the pressure reduction inlet system is designed for a total sample of 0.3 bar liter during the Venus descent.
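For orientation, the back-of-the-envelope Python calculation below estimates the minimum lithium hydroxide needed to absorb the stated 0.3 bar-liter carbon dioxide sample via 2 LiOH + CO2 -> Li2CO3 + H2O, assuming ideal-gas behaviour at an assumed 300 K reference temperature; actual descent conditions and design margins are not considered.

    # Rough estimate of LiOH required for 0.3 bar-liter of CO2 (ideal gas, assumed 300 K).
    R = 8.314                      # J/(mol K)
    pv_joules = 0.3e5 * 1e-3       # 0.3 bar * 1 liter = 30 J
    T = 300.0                      # assumed reference temperature (K)
    n_co2 = pv_joules / (R * T)    # mol CO2
    n_lioh = 2.0 * n_co2           # stoichiometry: 2 LiOH + CO2 -> Li2CO3 + H2O
    mass_lioh_g = n_lioh * 23.95   # molar mass of LiOH, g/mol
    print(f"~{n_co2 * 1e3:.1f} mmol CO2 -> ~{mass_lioh_g:.2f} g LiOH (minimum, no excess)")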
Sampling Mars: Analytical requirements and work to do in advance
NASA Technical Reports Server (NTRS)
Koeberl, Christian
1988-01-01
Sending a mission to Mars to collect samples and return them to Earth for analysis is without doubt one of the most exciting and important tasks for planetary science in the near future. Many scientifically important questions are associated with knowledge of the composition and structure of Martian samples. Amongst the most exciting is clarification of the SNC problem: proving or disproving a possible Martian origin of these meteorites. Since SNC meteorites have been used to infer the chemistry of the planet Mars and its evolution (including its accretion history), it would be important to know whether the whole story is true. But before addressing possible scientific results, we have to deal with the analytical requirements and with possible pre-return work. It is unrealistic to expect that a Mars sample return mission will bring back anything close to the amount returned by the Apollo missions. It will be more like the amount returned by the Luna missions, or at least of that order of magnitude. This requires very careful sample selection and very precise analytical techniques. These techniques should use minimal sample sizes while optimizing the scientific output. The possibility of working with extremely small samples should not obscure another problem: possible sampling errors. As we know from terrestrial geochemical studies, sampling procedures are quite complicated and elaborate in order to avoid sampling errors. The significance of analyzing a milligram- or submilligram-sized sample and relating it to the genesis of whole planetary crusts has to be viewed with care. This leaves a dilemma: on the one hand, to minimize the sample size as far as possible in order to return as many different samples as possible, and on the other hand, to take samples large enough to be representative. Whole-rock samples are very useful, but should not exceed the 20 to 50 g range, except in cases of extreme inhomogeneity, because for larger samples the information tends to become redundant. Soil samples should be in the 2 to 10 g range, permitting the splitting of the returned samples for studies in different laboratories with a variety of techniques.
Mohammed, Yassene; Percy, Andrew J; Chambers, Andrew G; Borchers, Christoph H
2015-02-06
Multiplexed targeted quantitative proteomics typically utilizes multiple reaction monitoring and allows the optimized quantification of a large number of proteins. One challenge, however, is the large amount of data that needs to be reviewed, analyzed, and interpreted. Different vendors provide software for their instruments, which determine the recorded responses of the heavy and endogenous peptides and perform the response-curve integration. Bringing multiplexed data together and generating standard curves is often an off-line step accomplished, for example, with spreadsheet software. This can be laborious, as it requires determining the concentration levels that meet the required accuracy and precision criteria in an iterative process. We present here a computer program, Qualis-SIS, that generates standard curves from multiplexed MRM experiments and determines analyte concentrations in biological samples. Multiple level-removal algorithms and acceptance criteria for concentration levels are implemented. When used to apply the standard curve to new samples, the software flags each measurement according to its quality. From the user's perspective, the data processing is instantaneous due to the reactivity paradigm used, and the user can download the results of the stepwise calculations for further processing, if necessary. This allows for more consistent data analysis and can dramatically accelerate the downstream data analysis.
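A minimal sketch of the kind of standard-curve workflow described, written in Python and not based on the Qualis-SIS code: fit a linear response curve from spiked standards, back-calculate unknowns, and flag values outside the calibrated range. The concentrations and response ratios are invented.

    # Illustrative standard-curve workflow (not the Qualis-SIS implementation).
    import numpy as np

    conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])      # standard concentrations (fmol/uL)
    ratio = np.array([0.02, 0.11, 0.21, 1.05, 2.02])    # light/heavy response ratios
    slope, intercept = np.polyfit(conc, ratio, 1)        # ordinary least squares line

    def back_calculate(sample_ratio):
        est = (sample_ratio - intercept) / slope
        flag = "ok" if conc.min() <= est <= conc.max() else "outside calibrated range"
        return est, flag

    for r in (0.55, 3.4):
        c, flag = back_calculate(r)
        print(f"ratio {r:.2f} -> {c:.1f} fmol/uL ({flag})")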
Code-division-multiplexed readout of large arrays of TES microcalorimeters
NASA Astrophysics Data System (ADS)
Morgan, K. M.; Alpert, B. K.; Bennett, D. A.; Denison, E. V.; Doriese, W. B.; Fowler, J. W.; Gard, J. D.; Hilton, G. C.; Irwin, K. D.; Joe, Y. I.; O'Neil, G. C.; Reintsema, C. D.; Schmidt, D. R.; Ullom, J. N.; Swetz, D. S.
2016-09-01
Code-division multiplexing (CDM) offers a path to reading out large arrays of transition edge sensor (TES) X-ray microcalorimeters with excellent energy and timing resolution. We demonstrate the readout of X-ray TESs with a 32-channel flux-summed code-division multiplexing circuit based on superconducting quantum interference device (SQUID) amplifiers. The best detector has energy resolution of 2.28 ± 0.12 eV FWHM at 5.9 keV and the array has mean energy resolution of 2.77 ± 0.02 eV over 30 working sensors. The readout channels are sampled sequentially at 160 ns/row, for an effective sampling rate of 5.12 μs/channel. The SQUID amplifiers have a measured flux noise of 0.17 μΦ0/√Hz (non-multiplexed, referred to the first stage SQUID). The multiplexed noise level and signal slew rate are sufficient to allow readout of more than 40 pixels per column, making CDM compatible with requirements outlined for future space missions. Additionally, because the modulated data from the 32 SQUID readout channels provide information on each X-ray event at the row rate, our CDM architecture allows determination of the arrival time of an X-ray event to within 275 ns FWHM with potential benefits in experiments that require detection of near-coincident events.
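The quoted effective sampling period follows directly from the row rate; the small Python check below reproduces the arithmetic (32 rows at 160 ns/row gives 5.12 µs per channel).

    # Consistency check of the multiplexing arithmetic quoted above.
    rows = 32
    row_time_ns = 160
    frame_us = rows * row_time_ns / 1000.0     # 5.12 us effective sampling period per channel
    rate_khz = 1e3 / frame_us                  # ~195 kHz per-channel sample rate
    print(f"{frame_us:.2f} us per channel, ~{rate_khz:.0f} kHz sample rate")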
Evaluating markers for the early detection of cancer: overview of study designs and methods.
Baker, Stuart G; Kramer, Barnett S; McIntosh, Martin; Patterson, Blossom H; Shyr, Yu; Skates, Steven
2006-01-01
The field of cancer biomarker development has been evolving rapidly. New developments in both the biologic and statistical realms are providing increasing opportunities for evaluation of markers for both early detection and diagnosis of cancer. Our aim is to review the major conceptual and methodological issues in cancer biomarker evaluation, with an emphasis on recent developments in statistical methods, together with practical recommendations. We organized this review by type of study: preliminary performance, retrospective performance, prospective performance, and cancer screening evaluation. For each type of study, we discuss methodologic issues, provide examples, and discuss strengths and limitations. Preliminary performance studies are useful for quickly winnowing down the number of candidate markers; however, their results may not apply to the ultimate target population, asymptomatic subjects. If stored specimens from cohort studies with clinical cancer endpoints are available, retrospective studies provide a quick and valid way to evaluate the performance of the markers, or changes in the markers, prior to the onset of clinical symptoms. Prospective studies have a restricted role because they require large sample sizes and, if the endpoint is cancer on biopsy, there may be bias due to overdiagnosis. Cancer screening studies require very large sample sizes and long follow-up, but are necessary for evaluating the marker as a trigger of early intervention.
Zhang, Xing; Romm, Michelle; Zheng, Xueyun; ...
2016-12-29
Characterization of endogenous metabolites and xenobiotics is essential to deconvoluting the genetic and environmental causes of disease. However, surveillance of chemical exposure and disease-related changes in large cohorts requires an analytical platform that offers rapid measurement, high sensitivity, efficient separation, broad dynamic range, and application to an expansive chemical space. Here we present a novel platform for small molecule analyses that addresses these requirements by combining solid-phase extraction with ion mobility spectrometry and mass spectrometry (SPE-IMS-MS). This platform is capable of performing both targeted and global measurements of endogenous metabolites and xenobiotics in human biofluids with high reproducibility (CV ≤ 3%), sensitivity (LODs in the pM range in biofluids) and throughput (10-s sample-to-sample duty cycle). We report application of this platform to the analysis of human urine from patients with and without type 1 diabetes, where we observed statistically significant variations in the concentration of disaccharides and previously unreported chemical isomers. Lastly, this SPE-IMS-MS platform overcomes many of the current challenges of large-scale metabolomic and exposomic analyses and offers a viable option for population and patient cohort screening in an effort to gain insights into disease processes and human environmental chemical exposure.
Zhang, Xing; Romm, Michelle; Zheng, Xueyun; Zink, Erika M; Kim, Young-Mo; Burnum-Johnson, Kristin E; Orton, Daniel J; Apffel, Alex; Ibrahim, Yehia M; Monroe, Matthew E; Moore, Ronald J; Smith, Jordan N; Ma, Jian; Renslow, Ryan S; Thomas, Dennis G; Blackwell, Anne E; Swinford, Glenn; Sausen, John; Kurulugama, Ruwan T; Eno, Nathan; Darland, Ed; Stafford, George; Fjeldsted, John; Metz, Thomas O; Teeguarden, Justin G; Smith, Richard D; Baker, Erin S
2016-12-01
Characterization of endogenous metabolites and xenobiotics is essential to deconvoluting the genetic and environmental causes of disease. However, surveillance of chemical exposure and disease-related changes in large cohorts requires an analytical platform that offers rapid measurement, high sensitivity, efficient separation, broad dynamic range, and application to an expansive chemical space. Here, we present a novel platform for small molecule analyses that addresses these requirements by combining solid-phase extraction with ion mobility spectrometry and mass spectrometry (SPE-IMS-MS). This platform is capable of performing both targeted and global measurements of endogenous metabolites and xenobiotics in human biofluids with high reproducibility (CV ≤ 3%), sensitivity (LODs in the pM range in biofluids) and throughput (10-s sample-to-sample duty cycle). We report application of this platform to the analysis of human urine from patients with and without type 1 diabetes, where we observed statistically significant variations in the concentration of disaccharides and previously unreported chemical isomers. This SPE-IMS-MS platform overcomes many of the current challenges of large-scale metabolomic and exposomic analyses and offers a viable option for population and patient cohort screening in an effort to gain insights into disease processes and human environmental chemical exposure.
Zhang, Xing; Romm, Michelle; Zheng, Xueyun; Zink, Erika M.; Kim, Young-Mo; Burnum-Johnson, Kristin E.; Orton, Daniel J.; Apffel, Alex; Ibrahim, Yehia M.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Jordan N.; Ma, Jian; Renslow, Ryan S.; Thomas, Dennis G.; Blackwell, Anne E.; Swinford, Glenn; Sausen, John; Kurulugama, Ruwan T.; Eno, Nathan; Darland, Ed; Stafford, George; Fjeldsted, John; Metz, Thomas O.; Teeguarden, Justin G.; Smith, Richard D.; Baker, Erin S.
2017-01-01
Characterization of endogenous metabolites and xenobiotics is essential to deconvoluting the genetic and environmental causes of disease. However, surveillance of chemical exposure and disease-related changes in large cohorts requires an analytical platform that offers rapid measurement, high sensitivity, efficient separation, broad dynamic range, and application to an expansive chemical space. Here, we present a novel platform for small molecule analyses that addresses these requirements by combining solid-phase extraction with ion mobility spectrometry and mass spectrometry (SPE-IMS-MS). This platform is capable of performing both targeted and global measurements of endogenous metabolites and xenobiotics in human biofluids with high reproducibility (CV ≤ 3%), sensitivity (LODs in the pM range in biofluids) and throughput (10-s sample-to-sample duty cycle). We report application of this platform to the analysis of human urine from patients with and without type 1 diabetes, where we observed statistically significant variations in the concentration of disaccharides and previously unreported chemical isomers. This SPE-IMS-MS platform overcomes many of the current challenges of large-scale metabolomic and exposomic analyses and offers a viable option for population and patient cohort screening in an effort to gain insights into disease processes and human environmental chemical exposure. PMID:29276770
Optimization of sampling pattern and the design of Fourier ptychographic illuminator.
Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan
2015-03-09
Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to three major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold in that: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The approach reported in this paper significantly shortens acquisition time and improves the quality of FP reconstructions. It may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.
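To illustrate the sampling idea (not the authors' illuminator layout), the Python sketch below draws a set of illumination angles whose density is higher near the Fourier origin, where most biological signal energy is concentrated, and sparser at high spatial frequencies.

    # Toy non-uniform Fourier sampling pattern: denser near the origin, sparser far out.
    import numpy as np

    rng = np.random.default_rng(1)
    n_leds = 68                                     # matches the reduced acquisition count above
    radius = rng.uniform(0.0, 1.0, n_leds) ** 2     # squaring concentrates samples near the origin
    theta = rng.uniform(0.0, 2.0 * np.pi, n_leds)
    kx, ky = radius * np.cos(theta), radius * np.sin(theta)   # normalised Fourier coordinates
    print(f"{n_leds} angles, {np.mean(radius < 0.5):.0%} within half of the maximum Fourier radius")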
Sampling procedures for inventory of commercial volume tree species in Amazon Forest.
Netto, Sylvio P; Pelissari, Allan L; Cysneiros, Vinicius C; Bonazza, Marcelo; Sanquetta, Carlos R
2017-01-01
The spatial distribution of tropical tree species can affect the consistency of the estimators in commercial forest inventories; therefore, appropriate sampling procedures are required to survey species with different spatial patterns in the Amazon Forest. Accordingly, the present study aims to evaluate conventional sampling procedures and to introduce adaptive cluster sampling for volumetric inventories of Amazonian tree species, considering the hypotheses that density, spatial distribution, and zero-plots affect the consistency of the estimators, and that adaptive cluster sampling allows more accurate volumetric estimation. We use data from a census carried out in Jamari National Forest, Brazil, where trees with diameters equal to or greater than 40 cm were measured in 1,355 plots. Species with different spatial patterns were selected and sampled with simple random sampling, systematic sampling, linear cluster sampling, and adaptive cluster sampling, whereby the accuracy of the volumetric estimation and the presence of zero-plots were evaluated. The sampling procedures applied to these species were affected by the low density of trees and the large number of zero-plots, whereas the adaptive clusters allowed the sampling effort to be concentrated in plots containing trees and thus yielded more representative samples for estimating commercial volume.
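A toy Python implementation of adaptive cluster sampling on a plot grid is sketched below: an initial simple random sample is drawn, and any sampled plot meeting the condition (at least one target tree) triggers sampling of its four neighbours until no newly added plot qualifies. The grid and parameters are invented, not the study's data.

    # Toy adaptive cluster sampling on a grid of plots (illustrative, not the authors' code).
    import numpy as np

    rng = np.random.default_rng(2)

    def adaptive_cluster_sample(counts, n_initial=20):
        rows, cols = counts.shape
        initial = {divmod(int(i), cols) for i in rng.choice(rows * cols, n_initial, replace=False)}
        sampled = set(initial)
        frontier = [p for p in initial if counts[p] > 0]       # plots meeting the condition
        while frontier:
            r, c = frontier.pop()
            for nbr in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nbr[0] < rows and 0 <= nbr[1] < cols and nbr not in sampled:
                    sampled.add(nbr)
                    if counts[nbr] > 0:
                        frontier.append(nbr)
        return sampled

    counts = rng.poisson(0.05, size=(30, 30))                  # sparse "tree counts" per plot
    plots = adaptive_cluster_sample(counts)
    print(f"{len(plots)} plots sampled, {sum(counts[p] > 0 for p in plots)} containing target trees")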
NASA Astrophysics Data System (ADS)
Yano, Hajime; McKay, Christopher P.; Anbar, Ariel; Tsou, Peter
The recent report of possible water vapor plumes at Europa and Ceres, together with the well-known Enceladus plume containing water vapor, salt, ammonia, and organic molecules, suggests that sample return missions could evolve into a generic approach for outer Solar System exploration in the near future, especially for the benefit of astrobiology research. Sampling such plumes can be accomplished via fly-through mission designs, modeled after the successful Stardust mission to capture and return material from Comet Wild-2 and the multiple, precise trajectory controls of the Cassini mission to fly through Enceladus’ plume. The proposed LIFE (Life Investigation For Enceladus) mission to Enceladus, which would sample organic molecules from the plume of that apparently habitable world, provides one example of the appealing scientific return of such missions. Beyond plumes, the upper atmosphere of Titan could also be sampled in this manner. The SCIM mission to Mars, also inspired by Stardust, would sample and return aerosol dust in the upper atmosphere of Mars and thus extends this concept even to other planetary bodies. Such missions share common design needs. In particular, they require large exposed sampler areas (or sampler arrays) that can be contained to the standards called for by the international planetary protection protocols that the COSPAR Planetary Protection Policy (PPP) recommends. Containment is also needed because these missions are driven by astrobiologically relevant science - including interest in organic molecules - which argues against heat sterilization that could destroy the scientific value of the samples. Sample containment is a daunting engineering challenge. Containment systems must be carefully designed to appropriate levels to satisfy the two top requirements: planetary protection policy and preserving the scientific value of samples. Planning for Mars sample return tends to center on a hermetic seal specification (i.e., gas-tight against helium escape). While this is an ideal specification, it far exceeds the current PPP requirements for Category-V “restricted Earth return”, which typically center on a probability of escape of a biologically active particle (e.g., < 1 in 10^6 chance of escape of particles > 50 nm diameter). Particles of this size (orders of magnitude larger than a helium atom) are not volatile and generally “sticky” toward surfaces; the mobility of viruses and biomolecules requires aerosolization. Thus, meeting the planetary protection challenge does not require a hermetic seal. So far, only a handful of robotic missions have accomplished deep space sample returns, i.e., Genesis, Stardust and Hayabusa. This year, Hayabusa-2 will be launched and OSIRIS-REx will follow in a few years. All of these missions are classified as “unrestricted Earth return” by the COSPAR PPP recommendation. Nevertheless, scientific requirements for organic contamination control have been implemented across all WBS elements regarding the sampling mechanism and Earth return capsule of Hayabusa-2. While the Genesis, Stardust and OSIRIS-REx capsules “breathe” terrestrial air as they re-enter Earth’s atmosphere, a temporary “air-tight” design was already achieved by the Hayabusa-1 sample container using a double O-ring seal, and that of Hayabusa-2 will retain noble gases and other gases released from returned solid samples using metal seal technology. After return, these gases can be collected through a filtered needle interface without opening the entire container lid.
This expertise can be extended to meeting planetary protection requirements for “restricted return” targets. There are still some areas requiring new innovations, especially to assure contingency robustness in every phase of a return mission. These must be achieved by meeting both PPP and scientific requirements during the initial design and WBS of the integrated sampling system, including the Earth return capsule. It is also important to note that the international communities in planetary protection, sample return science, and deep space engineering must meet to enable this game-changing opportunity of Outer Solar System exploration.
Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs
NASA Astrophysics Data System (ADS)
Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.
2016-07-01
Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
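The trade-off can be explored with a simple Monte Carlo simulation such as the Python sketch below (not the authors' simulation framework): each virtual image has a true cover drawn around a target mean, is scored with a fixed number of random points, and the standard error of the resulting cover estimate is compared across allocations of images and points.

    # Toy simulation of the image/point allocation trade-off for percent-cover estimation.
    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_cover_se(n_images, n_points, true_mean=0.2, between_image_sd=0.1, reps=2000):
        ests = []
        for _ in range(reps):
            covers = np.clip(rng.normal(true_mean, between_image_sd, n_images), 0, 1)
            hits = rng.binomial(n_points, covers)            # point scoring within each image
            ests.append(np.mean(hits / n_points))
        return np.std(ests)

    for n_img, n_pts in [(10, 50), (20, 25), (40, 25)]:
        print(f"{n_img} images x {n_pts} points: SE ~ {simulate_cover_se(n_img, n_pts):.3f}")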
Conversion of a Capture ELISA to a Luminex xMAP Assay using a Multiplex Antibody Screening Method
Baker, Harold N.; Murphy, Robin; Lopez, Erica; Garcia, Carlos
2012-01-01
The enzyme-linked immunosorbent assay (ELISA) has long been the primary tool for detection of analytes of interest in biological samples for both life science research and clinical diagnostics. However, ELISA has limitations. It is typically performed in a 96-well microplate, and the wells are coated with capture antibody, requiring a relatively large amount of sample to capture an antigen of interest . The large surface area of the wells and the hydrophobic binding of capture antibody can also lead to non-specific binding and increased background. Additionally, most ELISAs rely upon enzyme-mediated amplification of signal in order to achieve reasonable sensitivity. Such amplification is not always linear and can thus skew results. In the past 15 years, a new technology has emerged that offers the benefits of the ELISA, but also enables higher throughput, increased flexibility, reduced sample volume, and lower cost, with a similar workflow 1, 2. Luminex xMAP Technology is a microsphere (bead) array platform enabling both monoplex and multiplex assays that can be applied to both protein and nucleic acid applications 3-5. The beads have the capture antibody covalently immobilized on a smaller surface area, requiring less capture antibody and smaller sample volumes, compared to ELISA, and non-specific binding is significantly reduced. Smaller sample volumes are important when working with limiting samples such as cerebrospinal fluid, synovial fluid, etc. 6. Multiplexing the assay further reduces sample volume requirements, enabling multiple results from a single sample. Recent improvements by Luminex include: the new MAGPIX system, a smaller, less expensive, easier-to-use analyzer; Low-Concentration Magnetic MagPlex Microspheres which eliminate the need for expensive filter plates and come in a working concentration better suited for assay development and low-throughput applications; and the xMAP Antibody Coupling (AbC) Kit, which includes a protocol, reagents, and consumables necessary for coupling beads to the capture antibody of interest. (See Materials section for a detailed list of kit contents.) In this experiment, we convert a pre-optimized ELISA assay for TNF-alpha cytokine to the xMAP platform and compare the performance of the two methods 7-11. TNF-alpha is a biomarker used in the measurement of inflammatory responses in patients with autoimmune disorders. We begin by coupling four candidate capture antibodies to four different microsphere sets or regions. When mixed together, these four sets allow for the simultaneous testing of all four candidates with four separate detection antibodies to determine the best antibody pair, saving reagents, sample and time. Two xMAP assays are then constructed with the two most optimal antibody pairs and their performance is compared to that of the original ELISA assay in regards to signal strength, dynamic range, and sensitivity. PMID:22806215
Lee, Ju Yeon; Kim, Jin Young; Cheon, Mi Hee; Park, Gun Wook; Ahn, Yeong Hee; Moon, Myeong Hee; Yoo, Jong Shin
2014-02-26
A rapid, simple, and reproducible MRM-based validation method for serological glycoprotein biomarkers in clinical use was developed by targeting the nonglycosylated tryptic peptides adjacent to N-glycosylation sites. Since changes in protein glycosylation are known to be associated with a variety of diseases, glycoproteins have been major targets in biomarker discovery. We previously found that nonglycosylated tryptic peptides adjacent to N-glycosylation sites differed in concentration between normal and hepatocellular carcinoma (HCC) plasma due to differences in steric hindrance of the glycan moiety in N-glycoproteins to tryptic digestion (Lee et al., 2011). To increase the feasibility and applicability of clinical validation of biomarker candidates (nonglycosylated tryptic peptides), we developed a method to effectively monitor nonglycosylated tryptic peptides from a large number of plasma samples and to reduce the total analysis time with maximizing the effect of steric hindrance by the glycans during digestion of glycoproteins. The AUC values of targeted nonglycosylated tryptic peptides were excellent (0.955 for GQYCYELDEK, 0.880 for FEDGVLDPDYPR and 0.907 for TEDTIFLR), indicating that these could be effective biomarkers for hepatocellular carcinoma. This method provides the necessary throughput required to validate glycoprotein biomarkers, as well as quantitative accuracy for human plasma analysis, and should be amenable to clinical use. Difficulties in verifying and validating putative protein biomarkers are often caused by complex sample preparation procedures required to determine their concentrations in a large number of plasma samples. To solve the difficulties, we developed MRM-based protein biomarker assays that greatly reduce complex, time-consuming, and less reproducible sample pretreatment steps in plasma for clinical implementation. First, we used undepleted human plasma samples without any enrichment procedures. Using nanoLC/MS/MS, we targeted nonglycosylated tryptic peptides adjacent to N-linked glycosylation sites in N-linked glycoprotein biomarkers, which could be detected in human plasma samples without depleting highly abundant proteins. Second, human plasma proteins were digested with trypsin without reduction and alkylation procedures to minimize sample preparation. Third, trypsin digestion times were shortened so as to obtain reproducible results with maximization of the steric hindrance effect of the glycans during enzyme digestion. Finally, this rapid and simple sample preparation method was applied to validate targeted nonglycosylated tryptic peptides as liver cancer biomarker candidates for diagnosis in 40 normal and 41 hepatocellular carcinoma (HCC) human plasma samples. This strategy provided the necessary throughput required to monitor protein biomarkers, as well as quantitative accuracy in human plasma analysis. From biomarker discovery to clinical implementation, our method will provide a biomarker study platform that is suitable for clinical deployment, and can be applied to high-throughput approaches. Copyright © 2014 Elsevier B.V. All rights reserved.
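As an illustration of the reported AUC values, the Python sketch below computes a ROC AUC for a candidate peptide using the rank-based (Mann-Whitney) formulation; the response values, and the assumption that the marker decreases in disease, are invented for the example.

    # Toy ROC AUC for a candidate marker peptide (Mann-Whitney formulation; invented data).
    import numpy as np

    def auc(case_values, control_values):
        case, ctrl = np.asarray(case_values, float), np.asarray(control_values, float)
        # Fraction of case/control pairs where the case response is lower
        # (assumed direction for this example), counting ties as half.
        diff = case[:, None] - ctrl[None, :]
        return ((diff < 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

    hcc = [0.8, 1.3, 0.6, 0.9, 1.5]       # hypothetical peptide responses, HCC plasma
    normal = [1.6, 1.9, 1.4, 2.1, 1.2]    # hypothetical responses in normal plasma
    print(f"AUC = {auc(hcc, normal):.2f}")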
Evaluation of ultra-low background materials for uranium and thorium using ICP-MS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoppe, E. W.; Overman, N. R.; LaFerriere, B. D.
2013-08-08
An increasing number of physics experiments require low background materials for their construction. The presence of uranium and thorium and their progeny in these materials presents a variety of unwanted background sources for these experiments. The sensitivity of the experiments continues to drive the necessary levels of detection ever lower as well. This requirement for greater sensitivity has rendered direct radioassay impractical in many cases, requiring large quantities of material, frequently many kilograms, and prolonged counting times, often months. Other assay techniques have been employed, such as Neutron Activation Analysis, but this requires access to expensive facilities and instrumentation and can be further complicated and delayed by the formation of unwanted radionuclides. Inductively Coupled Plasma Mass Spectrometry (ICP-MS) is a useful tool, and recent advancements have increased its sensitivity, particularly in the elemental high mass range of U and Th. Unlike direct radioassay, ICP-MS is a destructive technique, since it requires the sample to be in liquid form, which is aspirated into a high-temperature plasma. But it benefits in that it usually requires a very small sample, typically about a gram. This paper discusses how a variety of low background materials such as copper, polymers, and fused silica are made amenable to ICP-MS assay and how the arduous task of maintaining low backgrounds of U and Th is achieved.
Evaluation of Ultra-Low Background Materials for Uranium and Thorium Using ICP-MS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoppe, Eric W.; Overman, Nicole R.; LaFerriere, Brian D.
2013-08-08
An increasing number of physics experiments require low background materials for their construction. The presence of uranium and thorium and their progeny in these materials presents a variety of unwanted background sources for these experiments. The sensitivity of the experiments continues to drive the necessary levels of detection ever lower as well. This requirement for greater sensitivity has rendered direct radioassay impractical in many cases, requiring large quantities of material, frequently many kilograms, and prolonged counting times, often months. Other assay techniques have been employed, such as Neutron Activation Analysis, but this requires access to expensive facilities and instrumentation and can be further complicated and delayed by the formation of unwanted radionuclides. Inductively Coupled Plasma Mass Spectrometry (ICP-MS) is a useful tool, and recent advancements have increased its sensitivity, particularly in the elemental high mass range of U and Th. Unlike direct radioassay, ICP-MS is a destructive technique, since it requires the sample to be in liquid form, which is aspirated into a high-temperature plasma. But it benefits in that it usually requires a very small sample, typically about a gram. Here we will discuss how a variety of low background materials such as copper, polymers, and fused silica are made amenable to ICP-MS assay and how the arduous task of maintaining low backgrounds of U and Th is achieved.
DNA-encoded chemistry: enabling the deeper sampling of chemical space.
Goodnow, Robert A; Dumelin, Christoph E; Keefe, Anthony D
2017-02-01
DNA-encoded chemical library technologies are increasingly being adopted in drug discovery for hit and lead generation. DNA-encoded chemistry enables the exploration of chemical spaces four to five orders of magnitude more deeply than is achievable by traditional high-throughput screening methods. Operation of this technology requires developing a range of capabilities including aqueous synthetic chemistry, building block acquisition, oligonucleotide conjugation, large-scale molecular biological transformations, selection methodologies, PCR, sequencing, sequence data analysis and the analysis of large chemistry spaces. This Review provides an overview of the development and applications of DNA-encoded chemistry, highlighting the challenges and future directions for the use of this technology.
Spatio-temporal optimization of sampling for bluetongue vectors (Culicoides) near grazing livestock
2013-01-01
Background: Estimating the abundance of Culicoides using light traps is influenced by a large variation in abundance in time and place. This study investigates the optimal trapping strategy to estimate the abundance or presence/absence of Culicoides on a field with grazing animals. We used 45 light traps to sample specimens from the Culicoides obsoletus species complex on a 14-hectare field during 16 nights in 2009. Findings: The large number of traps and catch nights enabled us to simulate a series of samples consisting of different numbers of traps (1-15) on each night. We also varied the number of catch nights when simulating the sampling, and sampled with increasing minimum distances between traps. We used resampling to generate a distribution of different mean and median abundance in each sample. Finally, we used the hypergeometric distribution to estimate the probability of falsely detecting absence of vectors on the field. The variation in the estimated abundance decreased steeply when using up to six traps, and was less pronounced when using more traps, although no clear cutoff was found. Conclusions: Despite spatial clustering in vector abundance, we found no effect of increasing the distance between traps. We found that 18 traps were generally required to reach 90% probability of a true positive catch when sampling just one night. But when sampling over two nights the same probability level was obtained with just three traps per night. The results are useful for the design of vector monitoring programmes on fields with grazing animals. PMID:23705770
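The hypergeometric calculation can be sketched in Python (with scipy) as below, treating the field as a finite set of candidate trap positions of which an assumed number would catch at least one midge; the probability of drawing none of them is the chance of falsely concluding absence. The counts are illustrative, not the study's values.

    # Probability of missing all "positive" trap positions when placing a few traps at random.
    from scipy.stats import hypergeom

    n_sites, k_positive = 45, 12           # assumed totals, not the study's actual values
    for n_traps in (3, 6, 18):
        p_false_absence = hypergeom.pmf(0, n_sites, k_positive, n_traps)   # zero positives drawn
        print(f"{n_traps:2d} traps: P(miss all positive positions) = {p_false_absence:.3f}")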
Sample Identification at Scale - Implementing IGSN in a Research Agency
NASA Astrophysics Data System (ADS)
Klump, J. F.; Golodoniuc, P.; Wyborn, L. A.; Devaraju, A.; Fraser, R.
2015-12-01
Earth sciences are largely observational and rely on natural samples, types of which vary significantly between science disciplines. Sharing and referencing of samples in scientific literature and across the Web requires the use of globally unique identifiers, essential for disambiguation. This practice is very common in other fields, e.g. ISBN in publishing, DOI in scientific literature, etc. In Earth sciences, however, this is still often done in an ad-hoc manner without the use of unique identifiers. The International Geo Sample Number (IGSN) system provides a persistent, globally unique label for identifying environmental samples. As an IGSN allocating agency, CSIRO implements the IGSN registration service at the organisational scale with contributions from multiple research groups. The Capricorn Distal Footprints project is one of the first pioneers and early adopters of the technology in Australia. For this project, IGSN provides a mechanism for identification of new and legacy samples, as well as derived sub-samples. It will ensure transparency and reproducibility in various geochemical sampling campaigns that will involve a diversity of sampling methods. Hence, diverse geochemical and isotopic results can be linked back to the parent sample, particularly where multiple children of that sample have also been analysed. The IGSN integration for this project is still in its early stages and requires further consultation on the governance mechanisms that we need to put in place to allow efficient collaboration within CSIRO and with collaborating partners on the project, including naming conventions, service interfaces, etc. In this work, we present the results of the initial implementation of IGSN in the context of the Capricorn Distal Footprints project. This study has so far demonstrated the effectiveness of the proposed approach, while maintaining the flexibility to adapt to various media types, which is critical in the context of a multi-disciplinary project.
Becker, Klaus; Hahn, Christian Markus; Saghafi, Saiedeh; Jährling, Nina; Wanis, Martina; Dodt, Hans-Ulrich
2014-01-01
Tissue clearing allows microscopy of large specimens such as whole mouse brains or embryos. However, lipophilic tissue clearing agents such as dibenzyl ether limit the storage time of GFP-expressing samples to several days and do not prevent them from photobleaching during microscopy. To preserve GFP fluorescence, we developed a transparent solid resin formulation, which maintains the specimens' transparency and provides a constant signal-to-noise ratio even after hours of continuous laser irradiation. If required, high-power illumination or long exposure times can be applied with virtually no loss in signal quality, and samples can be archived for years. PMID:25463047
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Khawaja, M. Sami; Rushton, Josh
Evaluating an energy efficiency program requires assessing the total energy and demand saved through all of the energy efficiency measures provided by the program. For large programs, the direct assessment of savings for each participant would be cost-prohibitive. Even if a program is small enough that a full census could be managed, such an undertaking would almost always be an inefficient use of evaluation resources. The bulk of this chapter describes methods for minimizing and quantifying sampling error. Measurement error and regression error are discussed in various contexts in other chapters.
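One common way to size such a sample, sketched below in Python, combines an assumed coefficient of variation of savings with a target relative precision and confidence level and applies a finite population correction; the 90/10 criterion and the cv value are illustrative assumptions, not requirements stated in this chapter.

    # Sketch of a standard sample-size calculation for estimating program savings
    # (assumed 90% confidence / 10% relative precision and an assumed cv of 0.5).
    from math import ceil
    from statistics import NormalDist

    def required_sample(cv, rel_precision=0.10, confidence=0.90, population=None):
        z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
        n0 = (z * cv / rel_precision) ** 2
        if population:                        # finite population correction
            n0 = n0 / (1 + n0 / population)
        return ceil(n0)

    print(required_sample(cv=0.5))                     # large program
    print(required_sample(cv=0.5, population=200))     # small program of 200 participants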
Wu, Jianfeng; Wang, Yu; Li, Jianqing; Song, Aiguo
2016-01-01
To suppress the crosstalk problem due to wire resistances and contact resistances of the long flexible cables in tactile sensing systems, we present a novel two-wire fast readout approach for the two-dimensional resistive sensor array in a shared row-column fashion. In the approach, two wires are used for every driving electrode and every sampling electrode in the resistive sensor array. Although the approach requires a large number of wires and many sampling channels, it achieves a high readout rate and solves the cable crosstalk problem. We also verified the approach's performance with Multisim simulations and actual experiments. PMID:27213373
Mapping autism risk loci using genetic linkage and chromosomal rearrangements
Szatmari, Peter; Paterson, Andrew; Zwaigenbaum, Lonnie; Roberts, Wendy; Brian, Jessica; Liu, Xiao-Qing; Vincent, John; Skaug, Jennifer; Thompson, Ann; Senman, Lili; Feuk, Lars; Qian, Cheng; Bryson, Susan; Jones, Marshall; Marshall, Christian; Scherer, Stephen; Vieland, Veronica; Bartlett, Christopher; Mangin, La Vonne; Goedken, Rhinda; Segre, Alberto; Pericak-Vance, Margaret; Cuccaro, Michael; Gilbert, John; Wright, Harry; Abramson, Ruth; Betancur, Catalina; Bourgeron, Thomas; Gillberg, Christopher; Leboyer, Marion; Buxbaum, Joseph; Davis, Kenneth; Hollander, Eric; Silverman, Jeremy; Hallmayer, Joachim; Lotspeich, Linda; Sutcliffe, James; Haines, Jonathan; Folstein, Susan; Piven, Joseph; Wassink, Thomas; Sheffield, Val; Geschwind, Daniel; Bucan, Maja; Brown, Ted; Cantor, Rita; Constantino, John; Gilliam, Conrad; Herbert, Martha; Lajonchere, Clara; Ledbetter, David; Lese-Martin, Christa; Miller, Janet; Nelson, Stan; Samango-Sprouse, Carol; Spence, Sarah; State, Matthew; Tanzi, Rudolph; Coon, Hilary; Dawson, Geraldine; Devlin, Bernie; Estes, Annette; Flodman, Pamela; Klei, Lambertus; Mcmahon, William; Minshew, Nancy; Munson, Jeff; Korvatska, Elena; Rodier, Patricia; Schellenberg, Gerard; Smith, Moyra; Spence, Anne; Stodgell, Chris; Tepper, Ping Guo; Wijsman, Ellen; Yu, Chang-En; Rogé, Bernadette; Mantoulan, Carine; Wittemeyer, Kerstin; Poustka, Annemarie; Felder, Bärbel; Klauck, Sabine; Schuster, Claudia; Poustka, Fritz; Bölte, Sven; Feineis-Matthews, Sabine; Herbrecht, Evelyn; Schmötzer, Gabi; Tsiantis, John; Papanikolaou, Katerina; Maestrini, Elena; Bacchelli, Elena; Blasi, Francesca; Carone, Simona; Toma, Claudio; Van Engeland, Herman; De Jonge, Maretha; Kemner, Chantal; Koop, Frederieke; Langemeijer, Marjolein; Hijmans, Channa; Staal, Wouter; Baird, Gillian; Bolton, Patrick; Rutter, Michael; Weisblatt, Emma; Green, Jonathan; Aldred, Catherine; Wilkinson, Julie-Anne; Pickles, Andrew; Le Couteur, Ann; Berney, Tom; Mcconachie, Helen; Bailey, Anthony; Francis, Kostas; Honeyman, Gemma; Hutchinson, Aislinn; Parr, Jeremy; Wallace, Simon; Monaco, Anthony; Barnby, Gabrielle; Kobayashi, Kazuhiro; Lamb, Janine; Sousa, Ines; Sykes, Nuala; Cook, Edwin; Guter, Stephen; Leventhal, Bennett; Salt, Jeff; Lord, Catherine; Corsello, Christina; Hus, Vanessa; Weeks, Daniel; Volkmar, Fred; Tauber, Maïté; Fombonne, Eric; Shih, Andy; Meyer, Kacie
2007-01-01
Autism spectrum disorders (ASD) are common, heritable neurodevelopmental conditions. The genetic architecture of ASD is complex, requiring large samples to overcome heterogeneity. Here we broaden coverage and sample size relative to other studies of ASD by using Affymetrix 10K single nucleotide polymorphism (SNP) arrays and 1168 families with ≥ 2 affected individuals to perform the largest linkage scan to date, while also analyzing copy number variation (CNV) in these families. Linkage and CNV analyses implicate chromosome 11p12-p13 and neurexins, respectively, amongst other candidate loci. Neurexins team with previously-implicated neuroligins for glutamatergic synaptogenesis, highlighting glutamate-related genes as promising candidates for ASD. PMID:17322880
Cox, Nick L J; Cami, Jan; Farhang, Amin; Smoker, Jonathan; Monreal-Ibero, Ana; Lallement, Rosine; Sarre, Peter J; Marshall, Charlotte C M; Smith, Keith T; Evans, Christopher J; Royer, Pierre; Linnartz, Harold; Cordiner, Martin A; Joblin, Christine; van Loon, Jacco Th; Foing, Bernard H; Bhatt, Neil H; Bron, Emeric; Elyajouri, Meriem; de Koter, Alex; Ehrenfreund, Pascale; Javadi, Atefeh; Kaper, Lex; Khosroshadi, Habib G; Laverick, Mike; Le Petit, Franck; Mulas, Giacomo; Roueff, Evelyne; Salama, Farid; Spaans, Marco
2017-10-01
The carriers of the diffuse interstellar bands (DIBs) are largely unidentified molecules ubiquitously present in the interstellar medium (ISM). After decades of study, two strong and possibly three weak near-infrared DIBs have recently been attributed to the C60+ fullerene based on observational and laboratory measurements. There is great promise for the identification of the over 400 other known DIBs, as this result could provide chemical hints towards other possible carriers. In an effort to systematically study the properties of the DIB carriers, we have initiated a new large-scale observational survey: the ESO Diffuse Interstellar Bands Large Exploration Survey (EDIBLES). The main objective is to build on and extend existing DIB surveys to make a major step forward in characterising the physical and chemical conditions for a statistically significant sample of interstellar lines-of-sight, with the goal to reverse-engineer key molecular properties of the DIB carriers. EDIBLES is a filler Large Programme using the Ultraviolet and Visual Echelle Spectrograph at the Very Large Telescope at Paranal, Chile. It is designed to provide an observationally unbiased view of the presence and behaviour of the DIBs towards early-spectral type stars whose lines-of-sight probe the diffuse-to-translucent ISM. Such a complete dataset will provide a deep census of the atomic and molecular content, physical conditions, chemical abundances and elemental depletion levels for each sightline. Achieving these goals requires a homogeneous set of high-quality data in terms of resolution (R ~ 70 000 - 100 000), sensitivity (S/N up to 1000 per resolution element), and spectral coverage (305-1042 nm), as well as a large sample size (100+ sightlines). In this first paper the goals, objectives and methodology of the EDIBLES programme are described and an initial assessment of the data is provided.
NASA Astrophysics Data System (ADS)
Cox, Nick L. J.; Cami, Jan; Farhang, Amin; Smoker, Jonathan; Monreal-Ibero, Ana; Lallement, Rosine; Sarre, Peter J.; Marshall, Charlotte C. M.; Smith, Keith T.; Evans, Christopher J.; Royer, Pierre; Linnartz, Harold; Cordiner, Martin A.; Joblin, Christine; van Loon, Jacco Th.; Foing, Bernard H.; Bhatt, Neil H.; Bron, Emeric; Elyajouri, Meriem; de Koter, Alex; Ehrenfreund, Pascale; Javadi, Atefeh; Kaper, Lex; Khosroshadi, Habib G.; Laverick, Mike; Le Petit, Franck; Mulas, Giacomo; Roueff, Evelyne; Salama, Farid; Spaans, Marco
2017-10-01
The carriers of the diffuse interstellar bands (DIBs) are largely unidentified molecules ubiquitously present in the interstellar medium (ISM). After decades of study, two strong and possibly three weak near-infrared DIBs have recently been attributed to the C60^+ fullerene based on observational and laboratory measurements. There is great promise for the identification of the over 400 other known DIBs, as this result could provide chemical hints towards other possible carriers. In an effort to systematically study the properties of the DIB carriers, we have initiated a new large-scale observational survey: the ESO Diffuse Interstellar Bands Large Exploration Survey (EDIBLES). The main objective is to build on and extend existing DIB surveys to make a major step forward in characterising the physical and chemical conditions for a statistically significant sample of interstellar lines-of-sight, with the goal to reverse-engineer key molecular properties of the DIB carriers. EDIBLES is a filler Large Programme using the Ultraviolet and Visual Echelle Spectrograph at the Very Large Telescope at Paranal, Chile. It is designed to provide an observationally unbiased view of the presence and behaviour of the DIBs towards early-spectral-type stars whose lines-of-sight probe the diffuse-to-translucent ISM. Such a complete dataset will provide a deep census of the atomic and molecular content, physical conditions, chemical abundances and elemental depletion levels for each sightline. Achieving these goals requires a homogeneous set of high-quality data in terms of resolution (R ~ 70 000-100 000), sensitivity (S/N up to 1000 per resolution element), and spectral coverage (305-1042 nm), as well as a large sample size (100+ sightlines). In this first paper the goals, objectives and methodology of the EDIBLES programme are described and an initial assessment of the data is provided.
Cox, Nick L. J.; Cami, Jan; Farhang, Amin; Smoker, Jonathan; Monreal-Ibero, Ana; Lallement, Rosine; Sarre, Peter J.; Marshall, Charlotte C. M.; Smith, Keith T.; Evans, Christopher J.; Royer, Pierre; Linnartz, Harold; Cordiner, Martin A.; Joblin, Christine; van Loon, Jacco Th.; Foing, Bernard H.; Bhatt, Neil H.; Bron, Emeric; Elyajouri, Meriem; de Koter, Alex; Ehrenfreund, Pascale; Javadi, Atefeh; Kaper, Lex; Khosroshadi, Habib G.; Laverick, Mike; Le Petit, Franck; Mulas, Giacomo; Roueff, Evelyne; Salama, Farid; Spaans, Marco
2017-01-01
The carriers of the diffuse interstellar bands (DIBs) are largely unidentified molecules ubiquitously present in the interstellar medium (ISM). After decades of study, two strong and possibly three weak near-infrared DIBs have recently been attributed to the C60+ fullerene based on observational and laboratory measurements. There is great promise for the identification of the over 400 other known DIBs, as this result could provide chemical hints towards other possible carriers. In an effort to systematically study the properties of the DIB carriers, we have initiated a new large-scale observational survey: the ESO Diffuse Interstellar Bands Large Exploration Survey (EDIBLES). The main objective is to build on and extend existing DIB surveys to make a major step forward in characterising the physical and chemical conditions for a statistically significant sample of interstellar lines-of-sight, with the goal to reverse-engineer key molecular properties of the DIB carriers. EDIBLES is a filler Large Programme using the Ultraviolet and Visual Echelle Spectrograph at the Very Large Telescope at Paranal, Chile. It is designed to provide an observationally unbiased view of the presence and behaviour of the DIBs towards early-spectral type stars whose lines-of-sight probe the diffuse-to-translucent ISM. Such a complete dataset will provide a deep census of the atomic and molecular content, physical conditions, chemical abundances and elemental depletion levels for each sightline. Achieving these goals requires a homogeneous set of high-quality data in terms of resolution (R ~ 70 000 – 100 000), sensitivity (S/N up to 1000 per resolution element), and spectral coverage (305–1042 nm), as well as a large sample size (100+ sightlines). In this first paper the goals, objectives and methodology of the EDIBLES programme are described and an initial assessment of the data is provided. PMID:29151608
Kay, Richard G; Challis, Benjamin G; Casey, Ruth T; Roberts, Geoffrey P; Meek, Claire L; Reimann, Frank; Gribble, Fiona M
2018-06-01
Diagnosis of pancreatic neuroendocrine tumours requires the study of patient plasma with multiple immunoassays, using multiple aliquots of plasma. The application of mass spectrometry-based techniques could reduce the cost and the amount of plasma required for diagnosis. Plasma samples from two patients with pancreatic neuroendocrine tumours were extracted using an established acetonitrile-based plasma peptide enrichment strategy. The circulating peptidome was characterised using nano- and high-flow-rate LC/MS analyses. To assess the diagnostic potential of the analytical approach, a large sample batch (68 plasmas) from control subjects, and aliquots from subjects harbouring two different types of pancreatic neuroendocrine tumour (insulinoma and glucagonoma), were analysed using a 10-minute LC/MS peptide screen. The untargeted plasma peptidomics approach identified peptides derived from the glucagon prohormone, chromogranin A, chromogranin B and other peptide hormones and proteins related to the control of peptide secretion. The detected glucagon prohormone-derived peptides were compared against putative peptides identified using multiple antibody pairs against glucagon peptides. Comparison of the plasma samples for relative levels of selected peptides showed clear separation between the glucagonoma samples and the insulinoma and control samples. The combination of the organic solvent extraction methodology with high-flow-rate analysis could potentially be used to aid diagnosis and monitor treatment of patients with functioning pancreatic neuroendocrine tumours. However, significant validation will be required before this approach can be applied clinically.
A double-observer method for reducing bias in faecal pellet surveys of forest ungulates
Jenkins, K.J.; Manly, B.F.J.
2008-01-01
1. Faecal surveys are used widely to study variations in abundance and distribution of forest-dwelling mammals when direct enumeration is not feasible. The utility of faecal indices of abundance is limited, however, by observational bias and variation in faecal disappearance rates that obscure their relationship to population size. We developed methods to reduce variability in faecal surveys and improve reliability of faecal indices. 2. We used double-observer transect sampling to estimate observational bias of faecal surveys of Roosevelt elk Cervus elaphus roosevelti and Columbian black-tailed deer Odocoileus hemionus columbianus in Olympic National Park, Washington, USA. We also modelled differences in counts of faecal groups obtained from paired cleared and uncleared transect segments as a means to adjust standing crop faecal counts for a standard accumulation interval and to reduce bias resulting from variable decay rates. 3. Estimated detection probabilities of faecal groups ranged from <0.2 to 1.0 depending upon the observer, whether the faecal group was from elk or deer, faecal group size, distance of the faecal group from the sampling transect, ground vegetation cover, and the interaction between faecal group size and distance from the transect. 4. Models of plot-clearing effects indicated that standing crop counts of deer faecal groups required 34% reduction on flat terrain and 53% reduction on sloping terrain to represent faeces accumulated over a standard 100-day interval, whereas counts of elk faecal groups required 0% and 46% reductions on flat and sloping terrain, respectively. 5. Synthesis and applications. Double-observer transect sampling provides a cost-effective means of reducing observational bias and variation in faecal decay rates that obscure the interpretation of faecal indices of large mammal abundance. Given the variation we observed in observational bias of faecal surveys and persistence of faeces, we emphasize the need for future researchers to account for these comparatively manageable sources of bias before comparing faecal indices spatially or temporally. Double-observer sampling methods are readily adaptable to study variations in faecal indices of large mammals at the scale of the large forest reserve, natural area, or other forested regions when direct estimation of populations is problematic. © 2008 The Authors.
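A minimal numerical sketch of the two-observer logic underlying such surveys (Lincoln-Petersen-type estimates of observer-specific detection probabilities and of the number of faecal groups present). The published analysis additionally modelled covariates such as distance, group size and vegetation cover, which this toy example omits; all counts below are hypothetical.

```python
# Minimal sketch of two-observer detection-probability estimation
# (Lincoln-Petersen-type logic). The published analysis modeled covariates
# such as distance, group size and vegetation cover, which are omitted here.
# All counts below are hypothetical.

def double_observer_estimates(n1: int, n2: int, both: int):
    """n1, n2: faecal groups detected by observers 1 and 2;
    both: groups detected by both observers."""
    p1 = both / n2                                # detection probability of observer 1
    p2 = both / n1                                # detection probability of observer 2
    p_combined = 1.0 - (1.0 - p1) * (1.0 - p2)    # at least one observer detects
    n_total = n1 * n2 / both                      # Lincoln-Petersen estimate of groups present
    return p1, p2, p_combined, n_total

p1, p2, pc, n_hat = double_observer_estimates(n1=42, n2=37, both=29)
print(f"p1={p1:.2f}  p2={p2:.2f}  combined={pc:.2f}  estimated groups={n_hat:.1f}")
```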
Pikkemaat, M G; Rapallini, M L B A; Karp, M T; Elferink, J W A
2010-08-01
Tetracyclines are extensively used in veterinary medicine. For the detection of tetracycline residues in animal products, a broad array of methods is available. Luminescent bacterial biosensors represent an attractive, inexpensive, simple and fast method for screening large numbers of samples. A previously developed cell-biosensor method was subjected to an evaluation study using over 300 routine poultry samples, and the results were compared with a microbial inhibition test. The cell-biosensor assay yielded many more suspect samples (10.2% versus 2% with the inhibition test), all of which could be confirmed by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Only one sample contained a concentration above the maximum residue limit (MRL) of 100 µg kg(-1), while residue levels in most of the suspect samples were very low (<10 µg kg(-1)). The method appeared to be specific and robust. Using an experimental set-up comprising the analysis of a series of three sample dilutions allowed an appropriate cut-off to be set for confirmatory analysis, limiting the number of samples requiring further analysis to a minimum.
Wieding, Jan; Fritsche, Andreas; Heinl, Peter; Körner, Carolin; Cornelsen, Matthias; Seitz, Hermann; Mittelmeier, Wolfram; Bader, Rainer
2013-12-16
The repair of large segmental bone defects caused by fracture, tumor or infection remains challenging in orthopedic surgery. The capabilities of two different bone scaffold materials, sintered tricalcium phosphate (TCP) and a titanium alloy (Ti6Al4V), were determined by mechanical and biomechanical testing. All scaffolds were fabricated by means of additive manufacturing techniques with identical design and controlled pore geometry. Small-sized sintered TCP scaffolds (10 mm diameter, 21 mm length) were fabricated as dense and open-porous samples and tested in an axial loading procedure. Material properties for the titanium alloy were determined using both tensile (dense) and compressive test samples (open-porous). Furthermore, large-sized open-porous TCP and titanium alloy scaffolds (30 mm in height and diameter, 700 µm pore size) were tested in a biomechanical setup simulating a large segmental bone defect using a composite femur stabilized with an osteosynthesis plate. Static physiologic loads (1.9 kN) were applied within these tests. Ultimate compressive strength of the TCP samples was 11.2 ± 0.7 MPa and 2.2 ± 0.3 MPa, respectively, for the dense and the open-porous samples. Tensile strength of the dense titanium alloy samples was 909.8 ± 4.9 MPa, and ultimate compressive strength of the open-porous titanium alloy samples was 183.3 ± 3.7 MPa. Furthermore, the biomechanical results showed good mechanical stability for the titanium alloy scaffolds. TCP scaffolds failed at 30% of the maximum load. Based on recent data, the 3D-printed TCP scaffolds tested cannot currently be recommended for high load-bearing situations. Scaffolds made of titanium could be optimized by adapting them to the biomechanical requirements.
Filipiak, Wojciech; Filipiak, Anna; Ager, Clemens; Wiesenhofer, Helmut; Amann, Anton
2012-06-01
The approach for breath-VOCs' collection and preconcentration by applying needle traps was developed and optimized. The alveolar air was collected from only a few exhalations under visual control of expired CO(2) into a large gas-tight glass syringe and then warmed up to 45 °C for a short time to avoid condensation. Subsequently, a specially constructed sampling device equipped with Bronkhorst® electronic flow controllers was used for automated adsorption. This sampling device allows time-saving collection of expired/inspired air in parallel onto three different needle traps as well as improvement of sensitivity and reproducibility of NT-GC-MS analysis by collection of relatively large (up to 150 ml) volume of exhaled breath. It was shown that the collection of alveolar air derived from only a few exhalations into a large syringe followed by automated adsorption on needle traps yields better results than manual sorption by up/down cycles with a 1 ml syringe, mostly due to avoided condensation and electronically controlled stable sample flow rate. The optimal profile and composition of needle traps consists of 2 cm Carbopack X and 1 cm Carboxen 1000, allowing highly efficient VOCs' enrichment, while injection by a fast expansive flow technique requires no modifications in instrumentation and fully automated GC-MS analysis can be performed with a commercially available autosampler. This optimized analytical procedure considerably facilitates the collection and enrichment of alveolar air, and is therefore suitable for application at the bedside of critically ill patients in an intensive care unit. Due to its simplicity it can replace the time-consuming sampling of sufficient breath volume by numerous up/down cycles with a 1 ml syringe.
Jeong, Heon-Ho; Lee, Byungjin; Jin, Si Hyung; Jeong, Seong-Geun; Lee, Chang-Soo
2016-04-26
Droplet-based microfluidics enabling exquisite liquid handling has been developed for diagnosis, drug discovery and quantitative biology. Compartmentalization of samples into a large number of tiny droplets is a great approach to perform multiplex assays and to improve reliability and accuracy using a limited volume of samples. Despite significant advances in microfluidic technology, individual droplet handling at picolitre resolution is still a challenge for more efficient and varied multiplex assays. We present a highly addressable static droplet array (SDA) enabling individual digital manipulation of a single droplet using a microvalve system. In a conventional single-layer microvalve system, the number of microvalves required is dictated by the number of operation objects; thus, individual trap-and-release on a large-scale 2D array format is highly challenging. By integrating double-layer microvalves, we achieve a "balloon" valve that preserves the pressure-on state under released pressure; this valve allows the selective releasing and trapping of 7200 multiplexed pico-droplets using only 1 μL of sample without volume loss. This selectivity and addressability made it possible to arrange an array containing only single-cell-encapsulated droplets from a mixture of droplet compositions via repetitive selective trapping and releasing. Thus, it will be useful for efficient handling of minuscule volumes of rare or clinical samples in multiplex or combinatory assays, and for the selective collection of samples.
Outcome-Dependent Sampling with Interval-Censored Failure Time Data
Zhou, Qingning; Cai, Jianwen; Zhou, Haibo
2017-01-01
Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computational difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method work well for practical situations and are more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664
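A minimal sketch of the sampling step of such an outcome-dependent design (a simple random subcohort enriched with additional failure subjects). It does not reproduce the sieve semiparametric maximum empirical likelihood estimator described in the abstract, and all column names, sizes and cut-offs are hypothetical.

```python
# Sketch of the sampling step of an outcome-dependent sampling (ODS) design:
# a simple random subcohort plus an oversample of failure subjects. The sieve
# semiparametric estimator described in the abstract is not reproduced here;
# all column names, sizes and cut-offs are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 5000
cohort = pd.DataFrame({
    "id": np.arange(n),
    "left": rng.uniform(0, 5, n),            # left end of the censoring interval
})
cohort["right"] = cohort["left"] + rng.uniform(0.5, 2.0, n)
cohort["failed"] = rng.random(n) < 0.08      # low event rate

srs = cohort.sample(n=400, random_state=1)   # simple random component
extra_failures = cohort[cohort["failed"] & ~cohort["id"].isin(srs["id"])].sample(
    n=150, random_state=2)                   # enrich with informative failure subjects
ods_sample = pd.concat([srs, extra_failures])
print(len(ods_sample), ods_sample["failed"].mean())   # enriched event fraction
```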
Zhang, Qi; Yang, Xiong; Hu, Qinglei; Bai, Ke; Yin, Fangfang; Li, Ning; Gang, Yadong; Wang, Xiaojun; Zeng, Shaoqun
2017-01-01
Resolving fine structures of biological systems such as neurons requires microscopic imaging with sufficient spatial resolution in three dimensions. With regular optical imaging systems, high lateral resolution is accessible, while high axial resolution is hard to achieve over a large volume. We introduce an imaging system for high-3D-resolution fluorescence imaging of large-volume tissues. Selective plane illumination was adopted to provide high axial resolution. A scientific CMOS camera working in sub-array mode kept the imaging area on the sample surface, which restrained the adverse effect of aberrations caused by inclined illumination. Plastic embedding and precise mechanical sectioning extended the axial range and eliminated distortion during the whole imaging process. The combination of these techniques enabled 3D high-resolution imaging of large tissues. Fluorescent bead imaging showed resolutions of 0.59 μm, 0.47 μm, and 0.59 μm in the x, y, and z directions, respectively. Data acquired from a volume sample of brain tissue demonstrated the applicability of this imaging system. Imaging at different depths showed uniform performance, where details could be recognized in either the near-soma area or the terminal area, and fine structures of neurons could be seen in both the xy and xz sections. PMID:29296503
Laboratory-based x-ray phase-contrast tomography enables 3D virtual histology
NASA Astrophysics Data System (ADS)
Töpperwien, Mareike; Krenkel, Martin; Quade, Felix; Salditt, Tim
2016-09-01
Due to the large penetration depth and small wavelength hard x-rays offer a unique potential for 3D biomedical and biological imaging, combining capabilities of high resolution and large sample volume. However, in classical absorption-based computed tomography, soft tissue only shows a weak contrast, limiting the actual resolution. With the advent of phase-contrast methods, the much stronger phase shift induced by the sample can now be exploited. For high resolution, free space propagation behind the sample is particularly well suited to make the phase shift visible. Contrast formation is based on the self-interference of the transmitted beam, resulting in object-induced intensity modulations in the detector plane. As this method requires a sufficiently high degree of spatial coherence, it was since long perceived as a synchrotron-based imaging technique. In this contribution we show that by combination of high brightness liquid-metal jet microfocus sources and suitable sample preparation techniques, as well as optimized geometry, detection and phase retrieval, excellent three-dimensional image quality can be obtained, revealing the anatomy of a cobweb spider in high detail. This opens up new opportunities for 3D virtual histology of small organisms. Importantly, the image quality is finally augmented to a level accessible to automatic 3D segmentation.
Wu, Chenglin; de Miranda, Noel Fcc; Chen, Longyun; Wasik, Agata M; Mansouri, Larry; Jurczak, Wojciech; Galazka, Krystyna; Dlugosz-Danecka, Monika; Machaczka, Maciej; Zhang, Huilai; Peng, Roujun; Morin, Ryan D; Rosenquist, Richard; Sander, Birgitta; Pan-Hammarström, Qiang
2016-06-21
The genetic mechanisms underlying disease progression, relapse and therapy resistance in mantle cell lymphoma (MCL) remain largely unknown. Whole-exome sequencing was performed in 27 MCL samples from 13 patients, representing the largest analyzed series of consecutive biopsies obtained at diagnosis and/or relapse for this type of lymphoma. Eighteen genes were found to be recurrently mutated in these samples, including known (ATM, MEF2B and MLL2) and novel mutation targets (S1PR1 and CARD11). CARD11, a scaffold protein required for B-cell receptor (BCR)-induced NF-κB activation, was subsequently screened in an additional 173 MCL samples, and mutations were observed in 5.5% of cases. Based on in vitro cell line-based experiments, overexpression of CARD11 mutants was demonstrated to confer resistance to the BCR inhibitor ibrutinib and the NF-κB inhibitor lenalidomide. Genetic alterations acquired in the relapse samples were found to be largely non-recurrent, in line with the branched evolutionary pattern of clonal evolution observed in most cases. In summary, this study highlights the genetic heterogeneity in MCL, in particular at relapse, and provides for the first time genetic evidence of BCR/NF-κB activation in a subset of MCL.
WATER QUALITY MONITORING OF PHARMACEUTICALS ...
The demand on freshwater to sustain the needs of the growing population is of worldwide concern. Often this water is used, treated, and released for reuse by other communities. The anthropogenic contaminants present in this water may include complex mixtures of pesticides, prescription and nonprescription drugs, personal care and common consumer products, industrial and domestic-use materials, and degradation products of these compounds. Although the fate of these pharmaceuticals and personal care products (PPCPs) in wastewater treatment facilities is largely unknown, the limited data that do exist suggest that many of these chemicals survive treatment and some others are returned to their biologically active form via deconjugation of metabolites. Traditional water sampling methods (i.e., grab or composite samples) often require the concentration of large amounts of water to detect trace levels of PPCPs. A passive sampler, the polar organic chemical integrative sampler (POCIS), has been developed to integratively concentrate the trace levels of these chemicals, determine the time-weighted average water concentrations, and provide a method of estimating the potential exposure of aquatic organisms to these complex mixtures of waterborne contaminants. The POCIS (U.S. Patent number 6,478,961) consists of a hydrophilic microporous membrane, acting as a semipermeable barrier, enveloping various solid-phase sorbents that retain the sampled chemicals. Sampling rates f
Response Variability in Commercial MOSFET SEE Qualification
George, J. S.; Clymer, D. A.; Turflinger, T. L.; ...
2016-12-01
Single-event effects (SEE) evaluation of five different part types of next generation, commercial trench MOSFETs indicates large part-to-part variation in determining a safe operating area (SOA) for drain-source voltage (VDS) following a test campaign that exposed >50 samples per part type to heavy ions. These results suggest a determination of a SOA using small sample sizes may fail to capture the full extent of the part-to-part variability. An example method is discussed for establishing a safe operating area using a one-sided statistical tolerance limit based on the number of test samples. Finally, burn-in is shown to be a critical factor in reducing part-to-part variation in part response. Implications for radiation qualification requirements are also explored.
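A brief sketch of a one-sided lower tolerance bound of the kind mentioned above, using the standard normal-based k-factor computed from the noncentral t distribution. Whether this matches the exact procedure used in the paper is an assumption, and the failure voltages below are hypothetical.

```python
# One-sided normal tolerance bound: with confidence `conf`, at least a proportion
# `coverage` of the part population exceeds the bound. Standard k-factor via the
# noncentral t distribution; whether this matches the exact procedure used in the
# paper is an assumption. The failure voltages are hypothetical.
import numpy as np
from scipy.stats import norm, nct

def one_sided_lower_tolerance_bound(x, coverage=0.90, conf=0.90):
    x = np.asarray(x, dtype=float)
    n = x.size
    delta = norm.ppf(coverage) * np.sqrt(n)            # noncentrality parameter
    k = nct.ppf(conf, df=n - 1, nc=delta) / np.sqrt(n) # tolerance k-factor
    return x.mean() - k * x.std(ddof=1)

# Hypothetical V_DS failure voltages (V) from a heavy-ion test campaign
vds_fail = np.array([68, 72, 65, 70, 74, 69, 71, 66, 73, 70], dtype=float)
print(f"SOA lower bound: {one_sided_lower_tolerance_bound(vds_fail):.1f} V")
```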
Enhanced Conformational Sampling Using Replica Exchange with Collective-Variable Tempering
2015-01-01
The computational study of conformational transitions in RNA and proteins with atomistic molecular dynamics often requires suitable enhanced sampling techniques. We here introduce a novel method where concurrent metadynamics are integrated in a Hamiltonian replica-exchange scheme. The ladder of replicas is built with different strengths of the bias potential exploiting the tunability of well-tempered metadynamics. Using this method, free-energy barriers of individual collective variables are significantly reduced compared with simple force-field scaling. The introduced methodology is flexible and allows adaptive bias potentials to be self-consistently constructed for a large number of simple collective variables, such as distances and dihedral angles. The method is tested on alanine dipeptide and applied to the difficult problem of conformational sampling in a tetranucleotide. PMID:25838811
Zhang, Baile; Gao, Lihong; Xie, Yingshuang; Zhou, Wei; Chen, Xiaofeng; Lei, Chunni; Zhang, Huan
2017-07-08
A direct analysis in real time tandem mass spectrometry (DART-MS/MS) method was established for quickly screening five illegally added alkaloids of poppy shell from the hot pot condiment, beef noodle soup and seasoning. The samples were extracted and purified by acetonitrile, and then injected under the conditions of ionization temperature of 300℃, grid electrode voltage of 150 V and sampling rate of 0.8 mm/s using DART in the positive ion mode. The determination was conducted by tandem mass spectrometry in positive ESI mode under multiple reaction monitoring (MRM) mode. The method is simple and rapid, and can meet the requirement of rapid screening and analysis of large quantities of samples.
A Hybrid Semi-supervised Classification Scheme for Mining Multisource Geospatial Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vatsavai, Raju; Bhaduri, Budhendra L
2011-01-01
Supervised learning methods such as Maximum Likelihood (ML) are often used in land cover (thematic) classification of remote sensing imagery. The ML classifier relies exclusively on spectral characteristics of thematic classes whose statistical distributions (class conditional probability densities) are often overlapping. The spectral response distributions of thematic classes are dependent on many factors including elevation, soil types, and ecological zones. A second problem with statistical classifiers is the requirement of a large number of accurate training samples (10 to 30 |dimensions|), which are often costly and time consuming to acquire over large geographic regions. With the increasing availability of geospatial databases, it is possible to exploit the knowledge derived from these ancillary datasets to improve classification accuracies even when the class distributions are highly overlapping. Likewise, newer semi-supervised techniques can be adopted to improve the parameter estimates of the statistical model by utilizing a large number of easily available unlabeled training samples. Unfortunately there is no convenient multivariate statistical model that can be employed for multisource geospatial databases. In this paper we present a hybrid semi-supervised learning algorithm that effectively exploits freely available unlabeled training samples from multispectral remote sensing images and also incorporates ancillary geospatial databases. We have conducted several experiments on real datasets, and our new hybrid approach shows over 25 to 35% improvement in overall classification accuracy over conventional classification schemes.
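A toy sketch of the underlying idea of refining Gaussian class-conditional parameter estimates with unlabeled samples via EM. It is not the authors' multisource hybrid algorithm, and all data are synthetic.

```python
# Toy sketch of semi-supervised estimation of Gaussian class-conditional densities
# with EM: a small labeled set initializes per-class means/covariances, and
# unlabeled pixels then refine them through responsibility-weighted updates.
# This illustrates the general idea only, not the authors' multisource hybrid
# algorithm; all data are synthetic.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
d, n_lab, n_unlab = 4, 30, 2000
true_means = [np.zeros(d), np.full(d, 2.0)]
X_lab = np.vstack([rng.normal(m, 1.0, (n_lab, d)) for m in true_means])
y_lab = np.repeat([0, 1], n_lab)
X_unlab = np.vstack([rng.normal(m, 1.0, (n_unlab // 2, d)) for m in true_means])

# Initialize from the (small) labeled set
priors = np.array([0.5, 0.5])
means = np.array([X_lab[y_lab == k].mean(axis=0) for k in (0, 1)])
covs = np.array([np.cov(X_lab[y_lab == k].T) + 1e-3 * np.eye(d) for k in (0, 1)])

X_all = np.vstack([X_lab, X_unlab])
for _ in range(20):                                   # EM using the unlabeled samples
    # E-step: class responsibilities for each unlabeled pixel
    like = np.column_stack([
        priors[k] * multivariate_normal.pdf(X_unlab, means[k], covs[k])
        for k in (0, 1)])
    resp = like / like.sum(axis=1, keepdims=True)
    # M-step: weighted updates, labeled samples kept at full weight for their class
    for k in (0, 1):
        w = np.concatenate([(y_lab == k).astype(float), resp[:, k]])
        means[k] = (w[:, None] * X_all).sum(axis=0) / w.sum()
        diff = X_all - means[k]
        covs[k] = (w[:, None] * diff).T @ diff / w.sum() + 1e-6 * np.eye(d)
        priors[k] = w.sum()
    priors /= priors.sum()

print(np.round(means, 2))   # refined class means, close to the true centroids
```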
NASA Astrophysics Data System (ADS)
Ferrari, Ulisse
A maximal entropy model provides the least constrained probability distribution that reproduces experimental averages of an observables set. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" Data-Driven algorithm that is fast and by sampling from the parameters posterior avoids both under- and over-fitting along all the directions of the parameters space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a Grant from the Human Brain Project (HBP CLAP).
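A minimal sketch of the plain data-driven learning loop analysed above, i.e. steepest-ascent moment matching for a pairwise Ising model with model averages estimated by Gibbs sampling. The "rectified" parameter-space metric proposed in the abstract is not implemented here, and all data are synthetic.

```python
# Plain moment-matching (steepest ascent on the log-likelihood) for a pairwise
# Ising maximum-entropy model, with model averages estimated by Gibbs sampling.
# This is the baseline dynamics analysed in the abstract; the "rectified" metric
# on parameter space is not implemented. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
N, n_data = 10, 4000
data = np.where(rng.random((n_data, N)) < 0.5, -1, 1)   # fake binary recordings
m_data = data.mean(axis=0)                               # <s_i>_data
C_data = data.T @ data / n_data                          # <s_i s_j>_data

h = np.zeros(N)
J = np.zeros((N, N))

def gibbs_sample(h, J, n_sweeps=2000, burn=200):
    s = np.where(rng.random(N) < 0.5, -1, 1)
    samples = []
    for t in range(n_sweeps):
        for i in range(N):
            field = h[i] + J[i] @ s - J[i, i] * s[i]     # local field on spin i
            p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
            s[i] = 1 if rng.random() < p_up else -1
        if t >= burn:
            samples.append(s.copy())
    return np.array(samples)

eta = 0.1
for step in range(50):                      # learning loop
    model = gibbs_sample(h, J)
    m_model = model.mean(axis=0)
    C_model = model.T @ model / len(model)
    h += eta * (m_data - m_model)           # gradient of the log-likelihood
    dJ = eta * (C_data - C_model)
    np.fill_diagonal(dJ, 0.0)
    J += dJ
print(np.abs(m_data - m_model).max())       # moment mismatch after training
```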
Fernando, Rohan L; Cheng, Hao; Golden, Bruce L; Garrick, Dorian J
2016-12-08
Two types of models have been used for single-step genomic prediction and genome-wide association studies that include phenotypes from both genotyped animals and their non-genotyped relatives. The two types are breeding value models (BVM) that fit breeding values explicitly and marker effects models (MEM) that express the breeding values in terms of the effects of observed or imputed genotypes. MEM can accommodate a wider class of analyses, including variable selection or mixture model analyses. The order of the equations that need to be solved and the inverses required in their construction vary widely, and thus the computational effort required depends upon the size of the pedigree, the number of genotyped animals and the number of loci. We present computational strategies to avoid storing large, dense blocks of the MME that involve imputed genotypes. Furthermore, we present a hybrid model that fits a MEM for animals with observed genotypes and a BVM for those without genotypes. The hybrid model is computationally attractive for pedigree files containing millions of animals with a large proportion of those being genotyped. We demonstrate the practicality on both the original MEM and the hybrid model using real data with 6,179,960 animals in the pedigree with 4,934,101 phenotypes and 31,453 animals genotyped at 40,214 informative loci. To complete a single-trait analysis on a desk-top computer with four graphics cards required about 3 h using the hybrid model to obtain both preconditioned conjugate gradient solutions and 42,000 Markov chain Monte-Carlo (MCMC) samples of breeding values, which allowed making inferences from posterior means, variances and covariances. The MCMC sampling required one quarter of the effort when the hybrid model was used compared to the published MEM. We present a hybrid model that fits a MEM for animals with genotypes and a BVM for those without genotypes. Its practicality and considerable reduction in computing effort was demonstrated. This model can readily be extended to accommodate multiple traits, multiple breeds, maternal effects, and additional random effects such as polygenic residual effects.
Surinova, Silvia; Hüttenhain, Ruth; Chang, Ching-Yun; Espona, Lucia; Vitek, Olga; Aebersold, Ruedi
2013-08-01
Targeted proteomics based on selected reaction monitoring (SRM) mass spectrometry is commonly used for accurate and reproducible quantification of protein analytes in complex biological mixtures. Strictly hypothesis-driven, SRM assays quantify each targeted protein by collecting measurements on its peptide fragment ions, called transitions. To achieve sensitive and accurate quantitative results, experimental design and data analysis must consistently account for the variability of the quantified transitions. This consistency is especially important in large experiments, which increasingly require profiling up to hundreds of proteins over hundreds of samples. Here we describe a robust and automated workflow for the analysis of large quantitative SRM data sets that integrates data processing, statistical protein identification and quantification, and dissemination of the results. The integrated workflow combines three software tools: mProphet for peptide identification via probabilistic scoring; SRMstats for protein significance analysis with linear mixed-effect models; and PASSEL, a public repository for storage, retrieval and query of SRM data. The input requirements for the protocol are files with SRM traces in mzXML format, and a file with a list of transitions in a text tab-separated format. The protocol is especially suited for data with heavy isotope-labeled peptide internal standards. We demonstrate the protocol on a clinical data set in which the abundances of 35 biomarker candidates were profiled in 83 blood plasma samples of subjects with ovarian cancer or benign ovarian tumors. The time frame to realize the protocol is 1-2 weeks, depending on the number of replicates used in the experiment.
Identifying Etiological Agents Causing Diarrhea in Low Income Ecuadorian Communities
Vasco, Gabriela; Trueba, Gabriel; Atherton, Richard; Calvopiña, Manuel; Cevallos, William; Andrade, Thamara; Eguiguren, Martha; Eisenberg, Joseph N. S.
2014-01-01
Continued success in decreasing diarrheal disease burden requires targeted interventions. To develop such interventions, it is crucial to understand which pathogens cause diarrhea. Using a case-control design we tested stool samples, collected in both rural and urban Ecuador, for 15 pathogenic microorganisms. Pathogens were present in 51% of case and 27% of control samples from the urban community, and 62% of case and 18% of control samples collected from the rural community. Rotavirus and Shigellae were associated with diarrhea in the urban community; co-infections were more pathogenic than single infection; Campylobacter and Entamoeba histolytica were found in large numbers in cases and controls; and non-typhi Salmonella and enteropathogenic Escherichia coli were not found in any samples. Consistent with the Global Enteric Multicenter Study, focused in south Asia and sub-Saharan Africa, we found that in Ecuador a small group of pathogens accounted for a significant amount of the diarrheal disease burden. PMID:25048373
A Ground Truthing Method for AVIRIS Overflights Using Canopy Absorption Spectra
NASA Technical Reports Server (NTRS)
Gamon, John A.; Serrano, Lydia; Roberts, Dar A.; Ustin, Susan L.
1996-01-01
Remote sensing for ecological field studies requires ground truthing for accurate interpretation of remote imagery. However, traditional vegetation sampling methods are time consuming and hard to relate to the scale of an AVIRIS scene. The large errors associated with manual field sampling, the contrasting formats of remote and ground data, and problems with coregistration of field sites with AVIRIS pixels can lead to difficulties in interpreting AVIRIS data. As part of a larger study of fire risk in the Santa Monica Mountains of southern California, we explored a ground-based optical method of sampling vegetation using spectrometers mounted both above and below vegetation canopies. The goal was to use optical methods to provide a rapid, consistent, and objective means of "ground truthing" that could be related both to AVIRIS imagery and to conventional ground sampling (e.g., plot harvests and pigment assays).
A posteriori noise estimation in variable data sets. With applications to spectra and light curves
NASA Astrophysics Data System (ADS)
Czesla, S.; Molle, T.; Schmitt, J. H. M. M.
2018-01-01
Most physical data sets contain a stochastic contribution produced by measurement noise or other random sources along with the signal. Usually, neither the signal nor the noise are accurately known prior to the measurement so that both have to be estimated a posteriori. We have studied a procedure to estimate the standard deviation of the stochastic contribution assuming normality and independence, requiring a sufficiently well-sampled data set to yield reliable results. This procedure is based on estimating the standard deviation in a sample of weighted sums of arbitrarily sampled data points and is identical to the so-called DER_SNR algorithm for specific parameter settings. To demonstrate the applicability of our procedure, we present applications to synthetic data, high-resolution spectra, and a large sample of space-based light curves and, finally, give guidelines to apply the procedure in situations not explicitly considered here to promote its adoption in data analysis.
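For reference, a short implementation of the DER_SNR noise estimator, which the abstract identifies as a special case of the proposed procedure for specific parameter settings; the synthetic check below is illustrative only.

```python
# The widely used DER_SNR estimator of the per-pixel noise of a spectrum,
# which the abstract identifies as a special case of the proposed procedure.
# It assumes normally distributed, independent noise and a sufficiently
# well-sampled signal.
import numpy as np

def der_snr_noise(flux):
    """Estimate the noise standard deviation of a 1-D spectrum (DER_SNR)."""
    f = np.asarray(flux, dtype=float)
    f = f[np.isfinite(f)]
    if f.size < 5:
        raise ValueError("need at least 5 valid samples")
    # median absolute second difference over a 5-pixel baseline, rescaled to sigma
    return 1.482602 / np.sqrt(6.0) * np.median(np.abs(2.0 * f[2:-2] - f[:-4] - f[4:]))

# Quick check on synthetic data: smooth signal plus Gaussian noise of known sigma
x = np.linspace(0, 10, 5000)
flux = np.sin(x) + np.random.default_rng(0).normal(0.0, 0.05, x.size)
print(der_snr_noise(flux))   # should be close to 0.05
```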
Richard, David; Speck, Thomas
2018-03-28
We investigate the kinetics and the free energy landscape of the crystallization of hard spheres from a supersaturated metastable liquid through direct simulations and forward flux sampling. In this first paper, we describe and test two different ways to reconstruct the free energy barriers from the sampled steady state probability distribution of cluster sizes without sampling the equilibrium distribution. The first method is based on mean first passage times, and the second method is based on splitting probabilities. We verify both methods for a single particle moving in a double-well potential. For the nucleation of hard spheres, these methods allow us to probe a wide range of supersaturations and to reconstruct the kinetics and the free energy landscape from the same simulation. Results are consistent with the scaling predicted by classical nucleation theory although a quantitative fit requires a rather large effective interfacial tension.
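A toy illustration in the spirit of the single-particle double-well test mentioned above: overdamped Brownian dynamics in V(x) = a(x^2 - 1)^2 with brute-force estimates of the mean first passage time and of a splitting probability. The forward-flux-sampling machinery used for hard spheres is not reproduced, and all parameters are illustrative.

```python
# Toy check in the spirit of the single-particle double-well test: overdamped
# Brownian dynamics in V(x) = a(x^2 - 1)^2 (barrier height a in units of kT),
# estimating (i) the mean first passage time from the left basin to the right
# basin and (ii) the splitting probability from near the barrier top. Brute-force
# illustration only, not the forward-flux-sampling machinery used for hard spheres.
import numpy as np

rng = np.random.default_rng(3)
a, kT, dt = 2.0, 1.0, 1e-3

def force(x):
    return -4.0 * a * x * (x * x - 1.0)        # -dV/dx

def run_until(x0, stop_left, stop_right, max_steps=2_000_000):
    x, noise = x0, np.sqrt(2.0 * kT * dt)
    for step in range(max_steps):
        x += force(x) * dt + noise * rng.normal()
        if x <= stop_left:
            return step * dt, "left"
        if x >= stop_right:
            return step * dt, "right"
    return np.nan, "none"

# Mean first passage time from the left minimum (-1) to the right basin (x >= 1)
mfpt = np.mean([run_until(-1.0, -np.inf, 1.0)[0] for _ in range(30)])

# Splitting probability: fraction of trajectories started near the barrier top
# (x = 0.05) that reach the right basin before falling back into the left one
hits = [run_until(0.05, -1.0, 1.0)[1] for _ in range(200)]
p_B = hits.count("right") / len(hits)
print(f"MFPT ~ {mfpt:.1f}, splitting probability p_B ~ {p_B:.2f}")
```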
Possession experiences in dissociative identity disorder: a preliminary study.
Ross, Colin A
2011-01-01
Dissociative trance disorder, which includes possession experiences, was introduced as a provisional diagnosis requiring further study in the Diagnostic and Statistical Manual of Mental Disorders (4th ed.). Consideration is now being given to including possession experiences within dissociative identity disorder (DID) in the Diagnostic and Statistical Manual of Mental Disorders (5th ed.), which is due to be published in 2013. In order to provide empirical data relevant to the relationship between DID and possession states, I analyzed data on the prevalence of trance, possession states, sleepwalking, and paranormal experiences in 3 large samples: patients with DID from North America; psychiatric outpatients from Shanghai, China; and a general population sample from Winnipeg, Canada. Trance, sleepwalking, paranormal, and possession experiences were much more common in the DID patients than in the 2 comparison samples. The study is preliminary and exploratory in nature because the samples were not matched in any way.
An experimental study on dynamic response for MICP strengthening liquefiable sands
NASA Astrophysics Data System (ADS)
Han, Zhiguang; Cheng, Xiaohui; Ma, Qiang
2016-12-01
The technology of bio-grouting is a new technique for soft-ground improvement. Many researchers have carried out a large number of experiments and studies on this topic. However, few studies have examined the dynamic response of solidified sand samples, such as reducing liquefaction in sand. To study this characteristic of microbially strengthened liquefiable sandy foundations, a microorganism formula and grouting scheme were applied. After grouting, the solidified samples were tested via dynamic triaxial testing to examine their cyclic performance. The results indicate that solidified sand samples with various strengths can be obtained to meet different engineering requirements, the use of bacteria solution and nutritive salt is reduced, and solidification time is shortened to 1-2 days. Most importantly, the study of the dynamic response found that the MICP grouting scheme is effective in improving liquefiable sand characteristics, such as liquefaction resistance.
Hatemi, Peter K.; Medland, Sarah E.; Klemmensen, Robert; Oskarrson, Sven; Littvay, Levente; Dawes, Chris; Verhulst, Brad; McDermott, Rose; Nørgaard, Asbjørn Sonne; Klofstad, Casey; Christensen, Kaare; Johannesson, Magnus; Magnusson, Patrik K.E.; Eaves, Lindon J.; Martin, Nicholas G.
2014-01-01
Almost forty years ago, evidence from large studies of adult twins and their relatives suggested that between 30-60% of the variance in social and political attitudes could be explained by genetic influences. However, these findings have not been widely accepted or incorporated into the dominant paradigms that explain the etiology of political ideology. This has been attributed in part to measurement and sample limitations, as well the relative absence of molecular genetic studies. Here we present results from original analyses of a combined sample of over 12,000 twins pairs, ascertained from nine different studies conducted in five democracies, sampled over the course of four decades. We provide evidence that genetic factors play a role in the formation of political ideology, regardless of how ideology is measured, the era, or the population sampled. The only exception is a question that explicitly uses the phrase “Left-Right”. We then present results from one of the first genome-wide association studies on political ideology using data from three samples: a 1990 Australian sample involving 6,894 individuals from 3,516 families; a 2008 Australian sample of 1,160 related individuals from 635 families and a 2010 Swedish sample involving 3,334 individuals from 2,607 families. No polymorphisms reached genome-wide significance in the meta-analysis. The combined evidence suggests that political ideology constitutes a fundamental aspect of one’s genetically informed psychological disposition, but as Fisher proposed long ago, genetic influences on complex traits will be composed of thousands of markers of very small effects and it will require extremely large samples to have enough power in order to identify specific polymorphisms related to complex social traits. PMID:24569950
Hatemi, Peter K; Medland, Sarah E; Klemmensen, Robert; Oskarsson, Sven; Littvay, Levente; Dawes, Christopher T; Verhulst, Brad; McDermott, Rose; Nørgaard, Asbjørn Sonne; Klofstad, Casey A; Christensen, Kaare; Johannesson, Magnus; Magnusson, Patrik K E; Eaves, Lindon J; Martin, Nicholas G
2014-05-01
Almost 40 years ago, evidence from large studies of adult twins and their relatives suggested that between 30 and 60% of the variance in social and political attitudes could be explained by genetic influences. However, these findings have not been widely accepted or incorporated into the dominant paradigms that explain the etiology of political ideology. This has been attributed in part to measurement and sample limitations, as well the relative absence of molecular genetic studies. Here we present results from original analyses of a combined sample of over 12,000 twins pairs, ascertained from nine different studies conducted in five democracies, sampled over the course of four decades. We provide evidence that genetic factors play a role in the formation of political ideology, regardless of how ideology is measured, the era, or the population sampled. The only exception is a question that explicitly uses the phrase "Left-Right". We then present results from one of the first genome-wide association studies on political ideology using data from three samples: a 1990 Australian sample involving 6,894 individuals from 3,516 families; a 2008 Australian sample of 1,160 related individuals from 635 families and a 2010 Swedish sample involving 3,334 individuals from 2,607 families. No polymorphisms reached genome-wide significance in the meta-analysis. The combined evidence suggests that political ideology constitutes a fundamental aspect of one's genetically informed psychological disposition, but as Fisher proposed long ago, genetic influences on complex traits will be composed of thousands of markers of very small effects and it will require extremely large samples to have enough power in order to identify specific polymorphisms related to complex social traits.
Phylogenomic evidence for ancient hybridization in the genomes of living cats (Felidae)
Li, Gang; Davis, Brian W.; Eizirik, Eduardo; Murphy, William J.
2016-01-01
Inter-species hybridization has been recently recognized as potentially common in wild animals, but the extent to which it shapes modern genomes is still poorly understood. Distinguishing historical hybridization events from other processes leading to phylogenetic discordance among different markers requires a well-resolved species tree that considers all modes of inheritance and overcomes systematic problems due to rapid lineage diversification by sampling large genomic character sets. Here, we assessed genome-wide phylogenetic variation across a diverse mammalian family, Felidae (cats). We combined genotypes from a genome-wide SNP array with additional autosomal, X- and Y-linked variants to sample ∼150 kb of nuclear sequence, in addition to complete mitochondrial genomes generated using light-coverage Illumina sequencing. We present the first robust felid time tree that accounts for unique maternal, paternal, and biparental evolutionary histories. Signatures of phylogenetic discordance were abundant in the genomes of modern cats, in many cases indicating hybridization as the most likely cause. Comparison of big cat whole-genome sequences revealed a substantial reduction of X-linked divergence times across several large recombination cold spots, which were highly enriched for signatures of selection-driven post-divergence hybridization between the ancestors of the snow leopard and lion lineages. These results highlight the mosaic origin of modern felid genomes and the influence of sex chromosomes and sex-biased dispersal in post-speciation gene flow. A complete resolution of the tree of life will require comprehensive genomic sampling of biparental and sex-limited genetic variation to identify and control for phylogenetic conflict caused by ancient admixture and sex-biased differences in genomic transmission. PMID:26518481
NASA Astrophysics Data System (ADS)
Bradac, Marusa; Coe, Dan; Huang, Kuang-Han; Salmon, Brett; Hoag, Austin; Bradley, Larry; Ryan, Russell; Dawson, Will; Zitrin, Adi; Jones, Christine; Sharon, Keren; Trenti, Michele; Stark, Daniel; Bouwens, Rychard; Oesch, Pascal; Lam, Daniel; Carrasco Nunez, Daniela Patricia
2017-04-01
When did galaxies start forming stars? What is the role of distant galaxies in galaxy formation models and the epoch of reionization? Recent observations indicate at least two critical puzzles in these studies. (1) First galaxies might have started forming stars earlier than previously thought (<400 Myr after the Big Bang). (2) It is still unclear what their star formation history is and whether these galaxies can reionize the Universe. Accurate knowledge of stellar masses, ages, and star formation rates at this epoch requires measuring both rest-frame UV and optical light, which only Spitzer and HST can probe at z ~ 6-11 for a large enough sample of typical galaxies. To address this cosmic puzzle, we propose Spitzer imaging of the fields behind the 3 most powerful cosmic telescopes selected using HST, Spitzer, and Planck data from the RELICS and SRELICS programs (Reionization Lensing Cluster Survey; 41 clusters, 190 HST orbits, 390 Spitzer hours). This proposal will be a valuable Legacy complement to the existing IRAC deep surveys, and it will open up a new parameter space by probing the ordinary yet magnified population with much improved sample variance. The program will allow us to study stellar properties of a large number, ~30 galaxies at z ~ 6-11. Deep Spitzer data will be crucial to unambiguously measure their stellar properties (age, SFR, M*). Finally this proposal will establish the presence (or absence) of an unusually early established stellar population, as was recently observed in MACS1149JD at z ~ 9. If confirmed in a larger sample, this result will require a paradigm shift in our understanding of the earliest star formation.
Design of pilot studies to inform the construction of composite outcome measures.
Edland, Steven D; Ard, M Colin; Li, Weiwei; Jiang, Lingjing
2017-06-01
Composite scales have recently been proposed as outcome measures for clinical trials. For example, the Prodromal Alzheimer's Cognitive Composite (PACC) is the sum of z-score normed component measures assessing episodic memory, timed executive function, and global cognition. Alternative methods of calculating composite total scores using the weighted sum of the component measures that maximize signal-to-noise of the resulting composite score have been proposed. Optimal weights can be estimated from pilot data, but it is an open question how large a pilot trial is required to calculate reliably optimal weights. In this manuscript, we describe the calculation of optimal weights, and use large-scale computer simulations to investigate the question of how large a pilot study sample is required to inform the calculation of optimal weights. The simulations are informed by the pattern of decline observed in cognitively normal subjects enrolled in the Alzheimer's Disease Cooperative Study (ADCS) Prevention Instrument cohort study, restricting to n=75 subjects age 75 and over with an ApoE E4 risk allele and therefore likely to have an underlying Alzheimer neurodegenerative process. In the context of secondary prevention trials in Alzheimer's disease, and using the components of the PACC, we found that pilot studies as small as 100 are sufficient to meaningfully inform weighting parameters. Regardless of the pilot study sample size used to inform weights, the optimally weighted PACC consistently outperformed the standard PACC in terms of statistical power to detect treatment effects in a clinical trial. Pilot studies of size 300 produced weights that achieved near-optimal statistical power, and reduced required sample size relative to the standard PACC by more than half. These simulations suggest that modestly sized pilot studies, comparable to that of a phase 2 clinical trial, are sufficient to inform the construction of composite outcome measures. Although these findings apply only to the PACC in the context of prodromal AD, the observation that weights only have to approximate the optimal weights to achieve near-optimal performance should generalize. Performing a pilot study or phase 2 trial to inform the weighting of proposed composite outcome measures is highly cost-effective. The net effect of more efficient outcome measures is that smaller trials will be required to test novel treatments. Alternatively, second generation trials can use prior clinical trial data to inform weighting, so that greater efficiency can be achieved as we move forward.
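A sketch of the closed-form signal-to-noise-optimal weighting commonly used for such composites, w ∝ Σ⁻¹μ with μ the mean component change and Σ its covariance estimated from pilot data. Whether this matches the exact estimator used in the PACC simulations is an assumption, and the pilot data below are synthetic.

```python
# Sketch of signal-to-noise-optimal composite weights estimated from pilot data:
# with mu the mean per-component change and Sigma its covariance, w ∝ Sigma^{-1} mu
# maximizes (w' mu) / sqrt(w' Sigma w). Whether this matches the exact estimator
# used in the PACC simulations is an assumption; the pilot data are synthetic.
import numpy as np

rng = np.random.default_rng(42)
n_pilot, k = 300, 4                                   # pilot subjects, PACC-like components
true_decline = np.array([0.30, 0.20, 0.15, 0.10])     # hypothetical z-score declines
Sigma_true = 0.2 * np.eye(k) + 0.05                   # correlated component noise
changes = rng.multivariate_normal(true_decline, Sigma_true, size=n_pilot)

mu_hat = changes.mean(axis=0)
Sigma_hat = np.cov(changes, rowvar=False)
w = np.linalg.solve(Sigma_hat, mu_hat)
w /= np.abs(w).sum()                                  # scale is arbitrary; normalize for display

def snr(weights):
    return weights @ mu_hat / np.sqrt(weights @ Sigma_hat @ weights)

print("weights:", np.round(w, 3))
print("SNR optimal vs equal-weight:", round(snr(w), 3), round(snr(np.ones(k) / k), 3))
```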
2009-09-10
• Viper plague, a mimic of heartwater, and associated ticks entered the USA in 2002. • VP rickettsia was isolated in viper cells and propagated in turtle... (Viral association with the elusive rickettsia of viper plague from Ghana, West Africa. Annals of the New York Academy of Sciences 1149, 318-321, 2008.) [Figure: centrifuged bovine endothelial cell supernatant showing rickettsia (requires many large
FFT-local gravimetric geoid computation
NASA Technical Reports Server (NTRS)
Nagy, Dezso; Fury, Rudolf J.
1989-01-01
Model computations show that changes of the sampling interval introduce only 0.3 cm changes, whereas zero padding provides an improvement of more than 5 cm in the fast Fourier transform (FFT) generated geoid. For the Global Positioning System (GPS) survey of Franklin County, Ohio, the parameters selected as a result of the model computations allow a large reduction in local data requirements while still retaining centimetre accuracy when tapering and padding are applied. The results are shown in tables.
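A minimal sketch of the tapering and zero-padding step credited with the improvement above, applied to a gridded anomaly field before a 2-D FFT. The grid and values are hypothetical, and the geoid computation itself (e.g., a Stokes-kernel convolution) is omitted.

```python
# Minimal sketch of tapering and zero padding a gridded gravity-anomaly field
# before a 2-D FFT, the step the abstract credits with a >5 cm improvement in the
# FFT-generated geoid (padding suppresses circular-convolution edge effects).
# Grid size and data are hypothetical; the Stokes-kernel convolution is omitted.
import numpy as np

ny, nx = 128, 128
rng = np.random.default_rng(7)
anomaly = rng.normal(0.0, 20.0, (ny, nx))          # fake gravity anomalies (mGal)

# Cosine (Hann) taper applied at the grid edges to reduce spectral leakage
taper = np.outer(np.hanning(ny), np.hanning(nx))
tapered = anomaly * taper

# Zero padding to twice the grid size so the implicit convolution is linear,
# not circular, over the area of interest
padded = np.zeros((2 * ny, 2 * nx))
padded[:ny, :nx] = tapered

spectrum = np.fft.fft2(padded)                     # ready for kernel multiplication
print(padded.shape, spectrum.shape)
```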
Kurt W. Gottschalk
1985-01-01
Optimum light levels for shelterwood cutting to develop the large advance regeneration required were investigated using eight shade-cloth treatments. Seedlings of northern red oak, black oak, black cherry and red maple were grown under these light treatments for 2 years. Height and diameter were measured annually, and samples were harvested for dry weight and leaf...
The Stratigraphy and Evolution of the Lunar Crust
NASA Technical Reports Server (NTRS)
McCallum, I. Stewart
1998-01-01
Reconstruction of stratigraphic relationships in the ancient lunar crust has proved to be a formidable task. The intense bombardment during the first 700 m.y. of lunar history has severely perturbed the original stratigraphy and destroyed the primary textures of all but a few nonmare rocks. However, a knowledge of the crustal stratigraphy as it existed prior to the cataclysmic bombardment about 3.9 Ga is essential to test the major models proposed for crustal origin, i.e., crystal fractionation in a global magmasphere or serial magmatism in a large number of smaller bodies. Despite the large difference in scale implicit in these two models, both require an efficient separation of plagioclase and mafic minerals to form the anorthositic crust and the mafic mantle. Despite the havoc wreaked by the large body impactors, these same impact processes have brought to the lunar surface crystalline samples derived from at least the upper half of the lunar crust, thereby providing an opportunity to reconstruct the stratigraphy in areas sampled by the Apollo missions. As noted, ejecta from the large multiring basins are dominantly, or even exclusively, of crustal origin. Given the most recent determinations of crustal thicknesses, this implies an upper limit to the depth of excavation of about 60 km. Of all the lunar samples studied, a small set has been recognized as "pristine", and within this pristine group, a small fraction have retained some vestiges of primary features formed during the earliest stages of crystallization or recrystallization prior to 4.0 Ga. We have examined a number of these samples that have retained some record of primary crystallization to deduce thermal histories from an analysis of structural, textural, and compositional features in minerals from these samples. Specifically, by quantitative modeling of (1) the growth rate and development of compositional profiles of exsolution lamellae in pyroxenes and (2) the rate of Fe-Mg ordering in orthopyroxenes, we can constrain the cooling rates of appropriate lunar samples. These cooling rates are used to compute depths of burial at the time of crystallization, which enable us to reconstruct parts of the crustal stratigraphy as it existed during the earliest stages of lunar history.
NASA Technical Reports Server (NTRS)
Madsen, Soren; Komar, George (Technical Monitor)
2001-01-01
A GEO-based Synthetic Aperture Radar (SAR) could provide daily coverage of basically all of North and South America with very good temporal coverage within the mapped area. This affords a key capability to disaster management, tectonic mapping and modeling, and vegetation mapping. The fine temporal sampling makes this system particularly useful for disaster management of flooding, hurricanes, and earthquakes. By using a fairly long wavelength, changing water boundaries caused by storms or flooding could be monitored in near real-time. This coverage would also provide revolutionary capabilities in the field of radar interferometry, including the capability to study the interferometric signature immediately before and after an earthquake, thus allowing unprecedented studies of Earth-surface dynamics. Preeruptive volcano dynamics could be studied as well as pre-seismic deformation, one of the most controversial and elusive aspects of earthquakes. Interferometric correlation would similarly allow near real-time mapping of surface changes caused by volcanic eruptions, mud slides, or fires. Finally, a GEO SAR provides an optimum configuration for soil moisture measurement that requires a high temporal sampling rate (1-2 days) with a moderate spatial resolution (1 km or better). From a technological point of view, the largest challenges involved in developing a geosynchronous SAR capability relate to the very large slant range distance from the radar to the mapped area. This leads to requirements for large power or alternatively very large antenna, the ability to steer the mapping area to the left and right of the satellite, and control of the elevation and azimuth angles. The weight of this system is estimated to be 2750 kg and it would require 20 kW of DC-power. Such a system would provide up to a 600 km ground swath in a strip-mapping mode and 4000 km dual-sided mapping in a scan-SAR mode.
Identification of missing variants by combining multiple analytic pipelines.
Ren, Yingxue; Reddy, Joseph S; Pottier, Cyril; Sarangi, Vivekananda; Tian, Shulan; Sinnwell, Jason P; McDonnell, Shannon K; Biernacka, Joanna M; Carrasquillo, Minerva M; Ross, Owen A; Ertekin-Taner, Nilüfer; Rademakers, Rosa; Hudson, Matthew; Mainzer, Liudmila Sergeevna; Asmann, Yan W
2018-04-16
After decades of identifying risk factors using array-based genome-wide association studies (GWAS), genetic research of complex diseases has shifted to sequencing-based rare variants discovery. This requires large sample sizes for statistical power and has brought up questions about whether the current variant calling practices are adequate for large cohorts. It is well-known that there are discrepancies between variants called by different pipelines, and that using a single pipeline always misses true variants exclusively identifiable by other pipelines. Nonetheless, it is common practice today to call variants by one pipeline due to computational cost and assume that false negative calls are a small percent of total. We analyzed 10,000 exomes from the Alzheimer's Disease Sequencing Project (ADSP) using multiple analytic pipelines consisting of different read aligners and variant calling strategies. We compared variants identified by using two aligners in 50, 100, 200, 500, 1000, and 1952 samples; and compared variants identified by adding single-sample genotyping to the default multi-sample joint genotyping in 50, 100, 500, 2000, 5000 and 10,000 samples. We found that using a single pipeline missed increasing numbers of high-quality variants correlated with sample sizes. By combining two read aligners and two variant calling strategies, we rescued 30% of pass-QC variants at sample size of 2000, and 56% at 10,000 samples. The rescued variants had higher proportions of low frequency (minor allele frequency [MAF] 1-5%) and rare (MAF < 1%) variants, which are the very type of variants of interest. In 660 Alzheimer's disease cases with earlier onset ages of ≤65, 4 out of 13 (31%) previously-published rare pathogenic and protective mutations in APP, PSEN1, and PSEN2 genes were undetected by the default one-pipeline approach but recovered by the multi-pipeline approach. Identification of the complete variant set from sequencing data is the prerequisite of genetic association analyses. The current analytic practice of calling genetic variants from sequencing data using a single bioinformatics pipeline is no longer adequate with the increasingly large projects. The number and percentage of quality variants that passed quality filters but are missed by the one-pipeline approach rapidly increased with sample size.
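A brief sketch of the set arithmetic behind combining call sets from two pipelines, keyed by (chrom, pos, ref, alt). File names are hypothetical, and multi-allelic handling and genotype reconciliation are ignored.

```python
# Sketch of combining pass-QC call sets from two pipelines, keyed by
# (chrom, pos, ref, alt). File names are hypothetical, multi-allelic records are
# split naively, and genotype reconciliation is ignored; the point is only the
# set arithmetic that quantifies variants missed by a single pipeline.
import gzip

def load_pass_variants(vcf_path):
    variants = set()
    with gzip.open(vcf_path, "rt") as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            chrom, pos, _id, ref, alt, _qual, flt = line.split("\t")[:7]
            if flt in ("PASS", "."):
                for a in alt.split(","):
                    variants.add((chrom, int(pos), ref, a))
    return variants

pipeline_a = load_pass_variants("cohort.alignerA.vcf.gz")   # hypothetical paths
pipeline_b = load_pass_variants("cohort.alignerB.vcf.gz")

union = pipeline_a | pipeline_b
only_a = pipeline_a - pipeline_b
only_b = pipeline_b - pipeline_a
print(f"union={len(union)}  exclusive to pipeline A={len(only_a)}  "
      f"exclusive to pipeline B={len(only_b)}")
```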
Dynamic measurements of CO diffusing capacity using discrete samples of alveolar gas.
Graham, B L; Mink, J T; Cotton, D J
1983-01-01
It has been shown that measurements of the diffusing capacity of the lung for CO made during a slow exhalation [DLCO(exhaled)] yield information about the distribution of the diffusing capacity in the lung that is not available from the commonly measured single-breath diffusing capacity [DLCO(SB)]. Current techniques of measuring DLCO(exhaled) require the use of a rapid-responding (less than 240 ms, 10-90%) CO meter to measure the CO concentration in the exhaled gas continuously during exhalation. DLCO(exhaled) is then calculated using two sample points in the CO signal. Because DLCO(exhaled) calculations are highly affected by small amounts of noise in the CO signal, filtering techniques have been used to reduce noise. However, these techniques reduce the response time of the system and may introduce other errors into the signal. We have developed an alternate technique in which DLCO(exhaled) can be calculated using the concentration of CO in large discrete samples of the exhaled gas, thus eliminating the requirement of a rapid response time in the CO analyzer. We show theoretically that this method is as accurate as other DLCO(exhaled) methods but is less affected by noise. These findings are verified in comparisons of the discrete-sample method of calculating DLCO(exhaled) to point-sample methods in normal subjects, patients with emphysema, and patients with asthma.
Improving small-angle X-ray scattering data for structural analyses of the RNA world
Rambo, Robert P.; Tainer, John A.
2010-01-01
Defining the shape, conformation, or assembly state of an RNA in solution often requires multiple investigative tools ranging from nucleotide analog interference mapping to X-ray crystallography. A key addition to this toolbox is small-angle X-ray scattering (SAXS). SAXS provides direct structural information regarding the size, shape, and flexibility of the particle in solution and has proven powerful for analyses of RNA structures with minimal requirements for sample concentration and volumes. In principle, SAXS can provide reliable data on small and large RNA molecules. In practice, SAXS investigations of RNA samples can show inconsistencies that suggest limitations in the SAXS experimental analyses or problems with the samples. Here, we show through investigations on the SAM-I riboswitch, the Group I intron P4-P6 domain, 30S ribosomal subunit from Sulfolobus solfataricus (30S), brome mosaic virus tRNA-like structure (BMV TLS), Thermotoga maritima asd lysine riboswitch, the recombinant tRNAval, and yeast tRNAphe that many problems with SAXS experiments on RNA samples derive from heterogeneity of the folded RNA. Furthermore, we propose and test a general approach to reducing these sample limitations for accurate SAXS analyses of RNA. Together our method and results show that SAXS with synchrotron radiation has great potential to provide accurate RNA shapes, conformations, and assembly states in solution that inform RNA biological functions in fundamental ways. PMID:20106957
Dodd, Lori E; Korn, Edward L; Freidlin, Boris; Gu, Wenjuan; Abrams, Jeffrey S; Bushnell, William D; Canetta, Renzo; Doroshow, James H; Gray, Robert J; Sridhara, Rajeshwari
2013-10-01
Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded-independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and a HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.
An epidemiological perspective of personalized medicine: the Estonian experience
Milani, L; Leitsalu, L; Metspalu, A
2015-01-01
Milani L, Leitsalu L, Metspalu A (University of Tartu). An epidemiological perspective of personalized medicine: the Estonian experience (Review). J Intern Med 2015; 277: 188–200. The Estonian Biobank and several other biobanks established over a decade ago are now starting to yield valuable longitudinal follow-up data for large numbers of individuals. These samples have been used in hundreds of different genome-wide association studies, resulting in the identification of reliable disease-associated variants. The focus of genomic research has started to shift from identifying genetic and nongenetic risk factors associated with common complex diseases to understanding the underlying mechanisms of the diseases and suggesting novel targets for therapy. However, translation of findings from genomic research into medical practice is still lagging, mainly due to insufficient evidence of clinical validity and utility. In this review, we examine the different elements required for the implementation of personalized medicine based on genomic information. First, biobanks and genome centres are required and have been established for the high-throughput genomic screening of large numbers of samples. Secondly, the combination of susceptibility alleles into polygenic risk scores has improved risk prediction of cardiovascular disease, breast cancer and several other diseases. Finally, national health information systems are being developed internationally, to combine data from electronic medical records from different sources, and also to gradually incorporate genomic information. We focus on the experience in Estonia, one of several countries with national goals towards more personalized health care based on genomic information, where the unique combination of elements required to accomplish this goal are already in place. PMID:25339628
Pierce, Brandon L; Ahsan, Habibul; Vanderweele, Tyler J
2011-06-01
Mendelian Randomization (MR) studies assess the causality of an exposure-disease association using genetic determinants [i.e. instrumental variables (IVs)] of the exposure. Power and IV strength requirements for MR studies using multiple genetic variants have not been explored. We simulated cohort data sets consisting of a normally distributed disease trait, a normally distributed exposure that affects this trait, and a biallelic genetic variant that affects the exposure. We estimated power to detect an effect of exposure on disease for varying allele frequencies, effect sizes and sample sizes (using two-stage least squares regression on 10,000 data sets; Stage 1 is a regression of exposure on the variant, and Stage 2 is a regression of disease on the fitted exposure). Similar analyses were conducted using multiple genetic variants (5, 10, 20) as independent or combined IVs. We assessed IV strength using the first-stage F statistic. Simulations of realistic scenarios indicate that MR studies will require large (n > 1000), often very large (n > 10,000), sample sizes. In many cases, so-called 'weak IV' problems arise when using multiple variants as independent IVs (even with as few as five), resulting in biased effect estimates. Combining genetic factors into fewer IVs results in modest power decreases, but alleviates weak IV problems. Ideal methods for combining genetic factors depend upon knowledge of the genetic architecture underlying the exposure. The feasibility of well-powered, unbiased MR studies will depend upon the amount of variance in the exposure that can be explained by known genetic factors and the 'strength' of the IV set derived from these genetic factors.
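A minimal simulation of the two-stage least squares procedure described above can be written directly with numpy: simulate a biallelic instrument, regress exposure on genotype, then regress disease on the fitted exposure, and report the first-stage F statistic used to judge instrument strength. The effect sizes, allele frequency, and sample size below are illustrative choices, not the paper's parameter grid.

    import numpy as np

    rng = np.random.default_rng(0)

    def mr_2sls(n=5000, maf=0.3, beta_gx=0.3, beta_xy=0.2):
        """Simulate one cohort and run two-stage least squares with a single
        biallelic instrument. Effect sizes and n are illustrative only."""
        g = rng.binomial(2, maf, n).astype(float)          # genotype (0/1/2)
        x = beta_gx * g + rng.normal(size=n)               # exposure
        y = beta_xy * x + rng.normal(size=n)               # disease trait

        # Stage 1: exposure ~ genotype
        G = np.column_stack([np.ones(n), g])
        b1, *_ = np.linalg.lstsq(G, x, rcond=None)
        x_hat = G @ b1
        r2 = 1 - np.sum((x - x_hat) ** 2) / np.sum((x - x.mean()) ** 2)
        f_stat = r2 / (1 - r2) * (n - 2)                   # first-stage F (1 instrument)

        # Stage 2: disease ~ fitted exposure
        X = np.column_stack([np.ones(n), x_hat])
        b2, *_ = np.linalg.lstsq(X, y, rcond=None)
        return b2[1], f_stat

    est, f = mr_2sls()
    print(f"2SLS causal estimate ~ {est:.3f}, first-stage F ~ {f:.1f}")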
NASA Astrophysics Data System (ADS)
Webb, N.; Chappell, A.; Van Zee, J.; Toledo, D.; Duniway, M.; Billings, B.; Tedela, N.
2017-12-01
Anthropogenic land use and land cover change (LULCC) influence global rates of wind erosion and dust emission, yet our understanding of the magnitude of the responses remains poor. Field measurements and monitoring provide essential data to resolve aeolian sediment transport patterns and assess the impacts of human land use and management intensity. Data collected in the field are also required for dust model calibration and testing, as models have become the primary tool for assessing LULCC-dust cycle interactions. However, there is considerable uncertainty in estimates of dust emission due to the spatial variability of sediment transport. Field sampling designs are currently rudimentary and considerable opportunities are available to reduce the uncertainty. Establishing the minimum detectable change is critical for measuring spatial and temporal patterns of sediment transport, detecting potential impacts of LULCC and land management, and for quantifying the uncertainty of dust model estimates. Here, we evaluate the effectiveness of common sampling designs (e.g., simple random sampling, systematic sampling) used to measure and monitor aeolian sediment transport rates. Using data from the US National Wind Erosion Research Network across diverse rangeland and cropland cover types, we demonstrate how only large changes in sediment mass flux (of the order 200% to 800%) can be detected when small sample sizes are used, crude sampling designs are implemented, or when the spatial variation is large. We then show how statistical rigour and the straightforward application of a sampling design can reduce the uncertainty and detect change in sediment transport over time and between land use and land cover types.
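A rough sense of how sample size drives the minimum detectable change can be had from a normal-approximation power calculation for a two-sample comparison of mean mass flux. The sketch below is not the Network's design analysis; the coefficient of variation is an assumed value used only to show the scaling.

    from math import sqrt
    from scipy.stats import norm

    def minimum_detectable_change(cv, n, alpha=0.05, power=0.8):
        """Normal-approximation sketch: smallest relative change in mean sediment
        mass flux detectable by a two-sample comparison, given the coefficient of
        variation (cv, as a fraction) among sampling locations and n samplers per
        survey. Illustrative only, not the Network's exact design analysis."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return z * cv * sqrt(2.0 / n)

    for n in (3, 10, 30):
        mdc = minimum_detectable_change(cv=1.5, n=n)   # a CV of 150% is assumed
        print(f"n = {n:2d}: detectable change ~ {100 * mdc:.0f}% of the mean")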
NASA Astrophysics Data System (ADS)
Rufino, Marta M.; Baptista, Paulo; Pereira, Fábio; Gaspar, Miguel B.
2018-01-01
In the current work we propose a new method to sample surface sediment during bivalve fishing surveys. Fishing institutes all around the world carry out regular surveys with the aim of monitoring the stocks of commercial species. These surveys often comprise more than one hundred sampling stations and cover large geographical areas. Although superficial sediment grain sizes are among the main drivers of benthic communities and provide crucial information for studies on coastal dynamics, there is overall a strong lack of this type of data, possibly because traditional surface sediment sampling methods use grabs, which require considerable time and effort to be deployed on a regular basis or over large areas. In view of these aspects, we developed an easy and inexpensive method to sample superficial sediments during bivalve fisheries monitoring surveys without increasing survey time or human resources. The method was successfully evaluated and validated during a typical bivalve survey carried out on the northwest coast of Portugal, confirming that it did not interfere with the survey objectives. Furthermore, the method was validated by collecting samples with a traditional Van Veen grab (traditional method), which showed a grain size composition similar to that of the samples collected by the new method at the same localities. We recommend that the procedure be implemented in regular bivalve fishing surveys, together with an image analysis system to analyse the collected samples. The new method will provide a substantial quantity of data on surface sediment in coastal areas in an inexpensive and efficient manner, with high potential for application in different fields of research.
Calvani, Nichola Eliza Davies; Windsor, Peter Andrew; Bush, Russell David; Šlapeta, Jan
2017-09-01
Fasciolosis, due to Fasciola hepatica and Fasciola gigantica, is a re-emerging zoonotic parasitic disease of worldwide importance. Human and animal infections are commonly diagnosed by the traditional sedimentation and faecal egg-counting technique. However, this technique is time-consuming and prone to sensitivity errors when a large number of samples must be processed or if the operator lacks sufficient experience. Additionally, diagnosis can only be made once the 12-week pre-patent period has passed. Recently, a commercially available coprological antigen ELISA has enabled detection of F. hepatica prior to the completion of the pre-patent period, providing earlier diagnosis and increased throughput, although species differentiation is not possible in areas of parasite sympatry. Real-time PCR offers the combined benefits of highly sensitive species differentiation for medium to large sample sizes. However, no molecular diagnostic workflow currently exists for the identification of Fasciola spp. in faecal samples. A new molecular diagnostic workflow for the highly-sensitive detection and quantification of Fasciola spp. in faecal samples was developed. The technique involves sedimenting and pelleting the samples prior to DNA isolation in order to concentrate the eggs, followed by disruption by bead-beating in a benchtop homogeniser to ensure access to DNA. Although both the new molecular workflow and the traditional sedimentation technique were sensitive and specific, the new molecular workflow enabled faster sample throughput in medium to large epidemiological studies, and provided the additional benefit of speciation. Further, good correlation (R2 = 0.74-0.76) was observed between the real-time PCR values and the faecal egg count (FEC) using the new molecular workflow for all herds and sampling periods. Finally, no effect of storage in 70% ethanol was detected on sedimentation and DNA isolation outcomes; enabling transport of samples from endemic to non-endemic countries without the requirement of a complete cold chain. The commercially-available ELISA displayed poorer sensitivity, even after adjustment of the positive threshold (65-88%), compared to the sensitivity (91-100%) of the new molecular diagnostic workflow. Species-specific assays for sensitive detection of Fasciola spp. enable ante-mortem diagnosis in both human and animal settings. This includes Southeast Asia where there are potentially many undocumented human cases and where post-mortem examination of production animals can be difficult. The new molecular workflow provides a sensitive and quantitative diagnostic approach for the rapid testing of medium to large sample sizes, potentially superseding the traditional sedimentation and FEC technique and enabling surveillance programs in locations where animal and human health funding is limited.
Foreign body detection in food materials using compton scattered x-rays
NASA Astrophysics Data System (ADS)
McFarlane, Nigel James Bruce
This thesis investigated the application of X-ray Compton scattering to the problem of foreign body detection in food. The methods used were analytical modelling, simulation and experiment. A criterion was defined for detectability, and a model was developed for predicting the minimum time required for detection. The model was used to predict the smallest detectable cubes of air, glass, plastic and steel. Simulations and experiments were performed on voids and glass in polystyrene phantoms, water, coffee and muesli. Backscatter was used to detect bones in chicken meat. The effects of geometry and multiple scatter on contrast, signal-to-noise, and detection time were simulated. Compton scatter was compared with transmission, and the effect of inhomogeneity was modelled. Spectral shape was investigated as a means of foreign body detection. A signal-to-noise ratio of 7.4 was required for foreign body detection in food. A 0.46 cm cube of glass or a 1.19 cm cube of polystyrene was detectable in a 10 cm cube of water in one second. The minimum time to scan a whole sample varied as the 7th power of the foreign body size and the 5th power of the sample size. Compton scatter inspection produced higher contrasts than transmission, but required longer measurement times because of the low number of photon counts. Compton scatter inspection of whole samples was very slow compared to production line speeds in the food industry. There was potential for Compton scatter in applications which did not require whole-sample scanning, such as surface inspection. There was also potential in the inspection of inhomogeneous samples. The multiple scatter fraction varied from 25% to 55% for 2 to 10 cm cubes of water, but did not have a large effect on the detection time. The spectral shape gave good contrasts and signal-to-noise ratios in the detection of chicken bones.
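The counting-statistics reasoning behind a required signal-to-noise ratio can be sketched as follows: if the scatter count rates with and without a foreign body differ by ΔR and the noise is dominated by Poisson fluctuations of the background counts, the measurement time needed to reach a target SNR scales as SNR²·R_bg/ΔR². The count rates in the example are hypothetical, not values from the thesis.

    def required_time(rate_bg, rate_fb, snr_target=7.4):
        """Poisson-counting sketch: time needed for the contrast between scatter
        count rates with (rate_fb) and without (rate_bg) a foreign body, in
        counts/s, to reach the target signal-to-noise ratio. Assumes the noise
        is dominated by counting statistics of the background signal."""
        delta = abs(rate_fb - rate_bg)
        return snr_target ** 2 * rate_bg / delta ** 2

    # Hypothetical count rates, not values from the thesis:
    print(f"{required_time(rate_bg=10_000.0, rate_fb=9_500.0):.2f} s")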
Evaluating cost-efficiency and accuracy of hunter harvest survey designs
Lukacs, P.M.; Gude, J.A.; Russell, R.E.; Ackerman, B.B.
2011-01-01
Effective management of harvested wildlife often requires accurate estimates of the number of animals harvested annually by hunters. A variety of techniques exist to obtain harvest data, such as hunter surveys, check stations, mandatory reporting requirements, and voluntary reporting of harvest. Agencies responsible for managing harvested wildlife such as deer (Odocoileus spp.), elk (Cervus elaphus), and pronghorn (Antilocapra americana) are challenged with balancing the cost of data collection versus the value of the information obtained. We compared precision, bias, and relative cost of several common strategies, including hunter self-reporting and random sampling, for estimating hunter harvest using a realistic set of simulations. Self-reporting with a follow-up survey of hunters who did not report produces the best estimate of harvest in terms of precision and bias, but it is also, by far, the most expensive technique. Self-reporting with no follow-up survey risks very large bias in harvest estimates, and the cost increases with increased response rate. Probability-based sampling provides a substantial cost savings, though accuracy can be affected by nonresponse bias. We recommend stratified random sampling with a calibration estimator used to reweight the sample based on the proportions of hunters responding in each covariate category as the best option for balancing cost and accuracy. © 2011 The Wildlife Society.
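The calibration idea recommended above amounts to reweighting respondents so that each covariate category is represented in proportion to the licence file. A minimal post-stratification sketch is given below; all counts are hypothetical.

    # Minimal sketch of a calibration (post-stratification) estimate of total
    # harvest: respondents are reweighted so that each covariate category (here,
    # licence type) is represented in proportion to the full licence file.
    # All counts below are hypothetical.
    licence_file = {"resident": 40_000, "nonresident": 10_000}   # hunters per category
    respondents  = {"resident": 1_200,  "nonresident": 600}      # survey responses
    harvest_sum  = {"resident": 300,    "nonresident": 240}      # animals reported

    total_estimate = sum(
        licence_file[c] / respondents[c] * harvest_sum[c] for c in licence_file
    )
    print(f"calibrated harvest estimate: {total_estimate:,.0f} animals")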
Alexandrou, Lydon D; Spencer, Michelle J S; Morrison, Paul D; Meehan, Barry J; Jones, Oliver A H
2015-04-15
Solid phase extraction is one of the most commonly used pre-concentration and cleanup steps in environmental science. However, traditional methods need electrically powered pumps, can use large volumes of solvent (if multiple samples are run), and require several hours to filter a sample. Additionally, if the cartridge is open to the air, volatile compounds may be lost and sample integrity compromised. In contrast, micro-cartridge-based solid phase extraction can be completed in less than 2 min by hand, uses only microlitres of solvent and provides comparable concentration factors to established methods. It is also an enclosed system, so volatile components are not lost. The sample can also be eluted directly into a detector (e.g. a mass spectrometer) if required. However, the technology is new and has not been much used for environmental analysis. In this study we compare traditional (macro) and the new micro solid phase extraction for the analysis of four common volatile trihalomethanes (trichloromethane, bromodichloromethane, dibromochloromethane and tribromomethane). The results demonstrate that micro solid phase extraction is faster and cheaper than traditional methods with similar recovery rates for the target compounds. This method shows potential for further development in a range of applications. Copyright © 2015 Elsevier B.V. All rights reserved.
Al-Chokhachy, R.; Budy, P.; Conner, M.
2009-01-01
Using empirical field data for bull trout (Salvelinus confluentus), we evaluated the trade-off between power and sampling effort-cost using Monte Carlo simulations of commonly collected mark-recapture-resight and count data, and we estimated the power to detect changes in abundance across different time intervals. We also evaluated the effects of monitoring different components of a population and stratification methods on the precision of each method. Our results illustrate substantial variability in the relative precision, cost, and information gained from each approach. While grouping estimates by age or stage class substantially increased the precision of estimates, spatial stratification of sampling units resulted in limited increases in precision. Although mark-resight methods allowed for estimates of abundance versus indices of abundance, our results suggest snorkel surveys may be a more affordable monitoring approach across large spatial scales. Detecting a 25% decline in abundance after 5 years was not possible, regardless of technique (power = 0.80), without high sampling effort (48% of study site). Detecting a 25% decline was possible after 15 years, but still required high sampling efforts. Our results suggest detecting moderate changes in abundance of freshwater salmonids requires considerable resource and temporal commitments and highlight the difficulties of using abundance measures for monitoring bull trout populations.
Krõlov, Katrin; Frolova, Jekaterina; Tudoran, Oana; Suhorutsenko, Julia; Lehto, Taavi; Sibul, Hiljar; Mäger, Imre; Laanpere, Made; Tulp, Indrek; Langel, Ülo
2014-01-01
Chlamydia trachomatis is the most common sexually transmitted human pathogen. Infection results in minimal to no symptoms in approximately two-thirds of women and therefore often goes undiagnosed. C. trachomatis infections are a major public health concern because of the potential severe long-term consequences, including an increased risk of ectopic pregnancy, chronic pelvic pain, and infertility. To date, several point-of-care tests have been developed for C. trachomatis diagnostics. Although many of them are fast and specific, they lack the required sensitivity for large-scale application. We describe a rapid and sensitive form of detection directly from urine samples. The assay uses recombinase polymerase amplification and has a minimum detection limit of 5 to 12 pathogens per test. Furthermore, it enables detection within 20 minutes directly from urine samples without DNA purification before the amplification reaction. Initial analysis of the assay from clinical patient samples had a specificity of 100% (95% CI, 92%-100%) and a sensitivity of 83% (95% CI, 51%-97%). The whole procedure is fairly simple and does not require specific machinery, making it potentially applicable in point-of-care settings. Copyright © 2014 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Grazziotin, Maria Celestina Bonzanini; Grazziotin, Ana Laura; Vidal, Newton Medeiros; Freire, Marcia Helena de Souza; da Silva, Regina Paula Guimarães Vieira Cavalcante
2016-08-01
Milk safety is an important concern in neonatal units and human milk banks. Therefore, evidence-based recommendations regarding raw milk handling and storage are needed to safely promote supplying hospitalized infants with their mother's own milk. To evaluate raw human milk storage methods according to Brazilian milk management regulations by investigating the effects of refrigeration (5°C) for 12 hours and freezing (-20°C) for 15 days on the acidity and energy content in a large number of raw milk samples. Expressed milk samples from 100 distinct donors were collected in glass bottles. Each sample was separated into 3 equal portions that were analyzed at room temperature and after either 12 hours of refrigeration or 15 days of freezing. Milk acidity and energy content were determined by Dornic titration and creamatocrit technique, respectively. All samples showed Dornic acidity values within the established acceptable limit (≤ 8°D), as required by Brazilian regulations. In addition, energy content did not significantly differ among fresh, refrigerated and frozen milk samples (median of ~50 kcal/100 mL for each). Most samples tested (> 80%) were considered top quality milk (< 4°D) based on acidity values, and milk energy content was preserved after storage. We conclude that the storage methods required by Brazilian regulations are suitable to ensure milk safety and energy content of stored milk when supplied to neonates. © The Author(s) 2016.
Development of aptamers against unpurified proteins.
Goto, Shinichi; Tsukakoshi, Kaori; Ikebukuro, Kazunori
2017-12-01
SELEX (Systematic Evolution of Ligands by EXponential enrichment) has been widely used for the generation of aptamers against target proteins. However, its requirement for pure target proteins remains a major problem in aptamer selection, as procedures for protein purification from crude bio-samples are not only complicated but also time and labor consuming. This is because native proteins can be found in a large number of diverse forms because of posttranslational modifications and their complicated molecular conformations. Moreover, several proteins are difficult to purify owing to their chemical fragility and/or rarity in native samples. An alternative route is the use of recombinant proteins for aptamer selection, because they are homogenous and easily purified. However, aptamers generated against recombinant proteins produced in prokaryotic cells may not interact with the same proteins expressed in eukaryotic cells because of posttranslational modifications. Moreover, to date recombinant proteins have been constructed for only a fraction of proteins expressed in the human body. Therefore, the demand for advanced SELEX methods not relying on complicated purification processes from native samples or recombinant proteins is growing. This review article describes several such techniques that allow researchers to directly develop an aptamer from various unpurified samples, such as whole cells, tissues, serum, and cell lysates. The key advantages of advanced SELEX are that it does not require a purification process from a crude bio-sample, maintains the functional states of target proteins, and facilitates the development of aptamers against unidentified and uncharacterized proteins in unpurified biological samples. © 2017 Wiley Periodicals, Inc.
Optimal probes for withdrawal of uncontaminated fluid samples
NASA Astrophysics Data System (ADS)
Sherwood, J. D.
2005-08-01
Withdrawal of fluid by a composite probe pushed against the face z = 0 of a porous half-space z > 0 is modeled assuming incompressible Darcy flow. The probe is circular, of radius a, with an inner sampling section of radius αa and a concentric outer guard probe αa
NASA Astrophysics Data System (ADS)
Yuan, Chao; Chareyre, Bruno; Darve, Félix
2016-09-01
A pore-scale model is introduced for two-phase flow in dense packings of polydisperse spheres. The model is developed as a component of a more general hydromechanical coupling framework based on the discrete element method, which will be elaborated in future papers and will apply to various processes of interest in soil science, in geomechanics and in oil and gas production. Here the emphasis is on the generation of a network of pores mapping the void space between spherical grains, and the definition of local criteria governing the primary drainage process. The pore space is decomposed by Regular Triangulation, from which a set of pores connected by throats are identified. A local entry capillary pressure is evaluated for each throat, based on the balance of capillary pressure and surface tension at equilibrium. The model reflects the possible entrapment of disconnected patches of the receding wetting phase. It is validated by a comparison with drainage experiments. In the last part of the paper, a series of simulations are reported to illustrate size and boundary effects, key questions when studying small samples made of spherical particles, be it in simulations or experiments. Repeated tests on samples of different sizes give water-content evolutions that are not only scattered but also strongly biased for small sample sizes. More than 20,000 spheres are needed to reduce the bias on saturation below 0.02. Additional statistics are generated by subsampling a large sample of 64,000 spheres. They suggest that the minimal sampling volume for evaluating saturation is one hundred times greater than the sampling volume needed for measuring porosity with the same accuracy. This requirement in terms of sample size induces a need for efficient computer codes. The method described herein has a low algorithmic complexity in order to satisfy this requirement. It will be well suited to further developments toward coupled flow-deformation problems in which evolution of the microstructure requires frequent updates of the pore network.
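The local entry criterion can be illustrated with a simplified Young-Laplace form for a throat of effective radius r: p_c = 2γ cosθ / r. The paper evaluates the balance of capillary pressure and surface tension on the actual triangulated throat geometry, so the circular-throat expression below is only an illustrative stand-in.

    from math import cos

    def entry_capillary_pressure(r_throat, gamma=0.0728, theta=0.0):
        """Simplified Young-Laplace sketch of a local entry capillary pressure
        for a throat of effective radius r_throat (m), with surface tension
        gamma (N/m, water-air at 20 C) and contact angle theta (rad). The
        paper's criterion uses the actual throat geometry; this circular-throat
        form is an illustration only."""
        return 2.0 * gamma * cos(theta) / r_throat

    print(f"{entry_capillary_pressure(50e-6):.0f} Pa")   # 50-micron throat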
Márquez Neila, Pablo; Baumela, Luis; González-Soriano, Juncal; Rodríguez, Jose-Rodrigo; DeFelipe, Javier; Merchán-Pérez, Ángel
2016-04-01
Recent electron microscopy (EM) imaging techniques permit the automatic acquisition of a large number of serial sections from brain samples. Manual segmentation of these images is tedious, time-consuming and requires a high degree of user expertise. Therefore, there is considerable interest in developing automatic segmentation methods. However, currently available methods are computationally demanding in terms of computer time and memory usage, and to work properly many of them require image stacks to be isotropic, that is, voxels must have the same size in the X, Y and Z axes. We present a method that works with anisotropic voxels and that is computationally efficient allowing the segmentation of large image stacks. Our approach involves anisotropy-aware regularization via conditional random field inference and surface smoothing techniques to improve the segmentation and visualization. We have focused on the segmentation of mitochondria and synaptic junctions in EM stacks from the cerebral cortex, and have compared the results to those obtained by other methods. Our method is faster than other methods with similar segmentation results. Our image regularization procedure introduces high-level knowledge about the structure of labels. We have also reduced memory requirements with the introduction of energy optimization in overlapping partitions, which permits the regularization of very large image stacks. Finally, the surface smoothing step improves the appearance of three-dimensional renderings of the segmented volumes.
Omics for Precious Rare Biosamples: Characterization of Ancient Human Hair by a Proteomic Approach.
Fresnais, Margaux; Richardin, Pascale; Sepúlveda, Marcela; Leize-Wagner, Emmanuelle; Charrié-Duhaut, Armelle
2017-07-01
Omics technologies have far-reaching applications beyond clinical medicine. A case in point is the analysis of ancient hair samples. Indeed, hair is an important biological indicator that has become a material of choice in archeometry to study ancient civilizations and their environment. Current characterization of ancient hair is based on elemental and structural analyses, but only a few studies have focused on the molecular aspects of ancient hair proteins (keratins) and their conservation state. In such cases, applied extraction protocols require large amounts of raw hair, from 30 to 100 mg. In the present study, we report an optimized new proteomic approach to accurately identify archeological hair proteins, and assess their preservation state, while using a minimum of raw material. Testing and adaptation of three protocols and of nano liquid chromatography-tandem mass spectrometry (nanoLC-MS/MS) parameters were performed on modern hair. On the basis of mass spectrometry data quality, and of the required initial sample amount, the most promising workflow was selected and applied to an ancient archeological sample, dated to about 3880 years before present. Finally, and importantly, we were able to identify 11 ancient hair proteins and to visualize the preservation state of the mummy's hair from only 500 μg of raw material. The results presented here pave the way for new insights into the understanding of hair protein alteration processes such as those due to aging and ecological exposures. This work could enable omics scientists to apply a proteomic approach to precious and rare samples, not only in the context of archeometrical studies but also for future applications that would require the use of very small amounts of sample.
NASA Astrophysics Data System (ADS)
Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin
2017-06-01
Monte Carlo simulation (MCS) is a useful tool for computing the probability of failure in reliability analysis. However, the large number of random samples required makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm combining importance sampling, a class of MCS, with RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts and a proposed two-step updating rule for the design point. This part finishes after a small number of samples have been generated. RSM then takes over using Bucher's experimental design, with the last design point and a proposed effective length serving as the center point and radius of Bucher's approach, respectively. Illustrative numerical examples demonstrate the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules.
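The importance-sampling building block of such an algorithm can be sketched in a few lines: draw samples from a normal density centered at a design point, and weight the failure indicator by the ratio of the true standard normal density to the sampling density. The limit-state function and design point below are textbook illustrations, not the paper's examples, and the sketch omits the RSM stage and the updating rules.

    import numpy as np
    from scipy.stats import multivariate_normal as mvn

    rng = np.random.default_rng(1)

    def g(x):
        """Illustrative limit-state function in standard normal space:
        failure when g(x) < 0."""
        return 3.0 - (x[:, 0] + x[:, 1]) / np.sqrt(2.0)

    # Importance sampling density h centered at an (assumed) design point.
    design_point = np.array([3.0 / np.sqrt(2.0), 3.0 / np.sqrt(2.0)])
    n = 10_000
    samples = rng.multivariate_normal(design_point, np.eye(2), size=n)

    f = mvn(mean=np.zeros(2), cov=np.eye(2)).pdf(samples)      # true density
    h = mvn(mean=design_point, cov=np.eye(2)).pdf(samples)     # sampling density
    indicator = (g(samples) < 0).astype(float)

    pf = np.mean(indicator * f / h)
    print(f"estimated probability of failure ~ {pf:.2e}")      # exact: Phi(-3) ~ 1.35e-3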
Microplastic pollution in the Northeast Atlantic Ocean: validated and opportunistic sampling.
Lusher, Amy L; Burke, Ann; O'Connor, Ian; Officer, Rick
2014-11-15
Levels of marine debris, including microplastics, are largely undocumented in the Northeast Atlantic Ocean. Broad-scale monitoring efforts are required to understand the distribution, abundance and ecological implications of microplastic pollution. A method of continuous sampling was developed to be conducted in conjunction with a wide range of vessel operations to maximise vessel time. Transects covering a total of 12,700 km were sampled through continuous monitoring of open ocean sub-surface water, resulting in 470 samples. Items classified as potential plastics were identified in 94% of samples. A total of 2315 particles were identified; 89% were less than 5 mm in length, classifying them as microplastics. Average plastic abundance in the Northeast Atlantic was calculated as 2.46 particles m⁻³. This is the first report to demonstrate the ubiquitous nature of microplastic pollution in the Northeast Atlantic Ocean and to present a potential method for standardised monitoring of microplastic pollution. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Paniagua, J.; Powell, J. R.; Maise, G.
2002-01-01
We have conducted studies of a revolutionary new concept for conducting a Europa Sample Return Mission. Robotic spacecraft exploration of the Solar System has been severely constrained by the large energy requirements of interplanetary trajectories and the inherent delta V limitations of chemical rockets. Current missions use gravitational assists from intermediate planets to achieve these high-energy trajectories, restricting payload size and increasing flight times. We propose a 6-year Europa Sample Return mission with very modest launch requirements enabled by MITEE. A new nuclear thermal propulsion engine design, termed MITEE (MIniature reacTor EnginE), has over twice the delta V capability of H2/O2 rockets (and much greater when refueled with H2 propellant from indigenous extraterrestrial resources), enabling unique missions that are not feasible with chemical propulsion. The MITEE engine is a compact, ultra-lightweight, thermal nuclear rocket that uses hydrogen as the propellant. MITEE, with its small size (50 cm O.D.), low mass (200 kg), and high specific impulse (~1000 sec), can provide a quantum leap in the capability for space science and exploration missions. The Robotic Europa Explorer (REE) spacecraft has a two-year outbound direct trajectory and lands on the satellite surface for an approximately 9-month stay. During this time, the vehicle is refueled with H2 propellant derived from Europa ice by the Autonomous Propellant Producer (APP), while collecting samples and searching for life. A small nuclear-heated submarine probe, the Autonomous Submarine Vehicle (ASV), based on MITEE technology, would melt through the ice and explore the undersea realm. The spacecraft has an approximately three-year return to Earth after departure from Europa with samples onboard. Spacecraft payload is 430 kg at the start of the mission and can be launched with a single, conventional medium-sized Delta III booster. The spacecraft can bring back 25 kg of samples from Europa. Europa, in the Jovian system, is a high-priority target for an outer Solar System exploration mission. More than a decade ago the Voyager spacecraft revealed Europa as a world swathed in ice and geologically young. NASA's Galileo spacecraft passed approximately 500 miles above the surface and provided detailed images of Europa's terrain marked by a dynamic topology that appeared to be remnants of ice volcanoes or geysers. The surface temperature averages a chilly -200 °C. The pictures appear to show a relatively young surface of ice, possibly only 1 km thick in some places. Internal heating of Europa from Jupiter's tidal pull could form an ocean of liquid water beneath the surface. More recently, Ganymede and Callisto are believed to be ocean-bearing Jovian moons based on magnetometer measurements from the Galileo spacecraft. If liquid water exists, life may also. NASA plans to send an orbiting spacecraft to Europa to measure the thickness of the ice and to detect if an underlying liquid ocean exists. This mission would precede the proposed Europa Sample Return mission, which includes dispatching an autonomous submarine-like vehicle that could melt through the ice and explore the undersea realm. Because of the large energy requirements typical of these ambitious solar system science missions, use of chemical rockets results in interplanetary spacecraft that are prohibitive in terms of Initial Mass in Low-Earth Orbit (IMLEO) and cost.
For example, using chemical rockets to return samples from Europa appears to be technically impractical, as it would require large delta V and launch vehicle capabilities. On the other hand, use of nuclear thermal rockets will significantly reduce IMLEO and, subsequently, costs. Moreover, nuclear thermal rockets can utilize extraterrestrial resources as propellants, an option not practical with chemical rockets. This "refueling" capability would enable nuclear rockets to carry out very high-energy missions, such as the return of large amounts of extraterrestrial material to Earth. The Europa missions considered in this proposal will be restricted to starting from LEO only after being placed in a stable orbit by a launch vehicle. This simplifies and eases the safety issues and mitigates political concerns. High propulsive efficiency of the MITEE engine yields the benefits of reduced transit time and a smaller launch vehicle.
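The roughly twofold delta-V advantage quoted for MITEE follows directly from the Tsiolkovsky rocket equation with the ~1000 s specific impulse stated above versus the ~450 s typical of H2/O2 engines; the propellant mass ratio in the sketch below is an assumed value for illustration.

    from math import log

    G0 = 9.80665  # standard gravity, m/s^2

    def delta_v(isp_s, mass_ratio):
        """Tsiolkovsky rocket equation: delta-v for a given specific impulse (s)
        and initial-to-final mass ratio."""
        return isp_s * G0 * log(mass_ratio)

    mass_ratio = 4.0   # assumed propellant loading, for illustration only
    dv_nuclear = delta_v(1000.0, mass_ratio)   # MITEE-class Isp from the abstract
    dv_chemical = delta_v(450.0, mass_ratio)   # typical H2/O2 upper stage (assumed)

    print(f"nuclear: {dv_nuclear/1e3:.1f} km/s, chemical: {dv_chemical/1e3:.1f} km/s "
          f"(ratio {dv_nuclear/dv_chemical:.2f})")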
Massive Sorghum Collection Genotyped with SSR Markers to Enhance Use of Global Genetic Resources
Bouchet, Sophie; Chantereau, Jacques; Deu, Monique; Gardes, Laetitia; Noyer, Jean-Louis; Rami, Jean-François; Rivallan, Ronan; Li, Yu; Lu, Ping; Wang, Tianyu; Folkertsma, Rolf T.; Arnaud, Elizabeth; Upadhyaya, Hari D.; Glaszmann, Jean-Christophe; Hash, C. Thomas
2013-01-01
Large ex situ collections require approaches for sampling manageable amounts of germplasm for in-depth characterization and use. We present here a large diversity survey in sorghum with 3367 accessions and 41 reference nuclear SSR markers. Of 19 alleles on average per locus, the largest numbers of alleles were concentrated in central and eastern Africa. Cultivated sorghum appeared structured according to geographic regions and race within region. A total of 13 groups of variable size were distinguished. The peripheral groups in western Africa, southern Africa and eastern Asia were the most homogeneous and clearly differentiated. Except for Kafir, there was little correspondence between races and marker-based groups. Bicolor, Caudatum, Durra and Guinea types were each dispersed in three groups or more. Races should therefore better be referred to as morphotypes. Wild and weedy accessions were very diverse and scattered among cultivated samples, reinforcing the idea that large gene-flow exists between the different compartments. Our study provides an entry to global sorghum germplasm collections. Our reference marker kit can serve to aggregate additional studies and enhance international collaboration. We propose a core reference set in order to facilitate integrated phenotyping experiments towards refined functional understanding of sorghum diversity. PMID:23565161
High-efficiency wavefunction updates for large scale Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed
Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude speedups can be obtained on both multi-core CPUs and GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
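The rank-1 Sherman-Morrison step that the proposed rank-k scheme batches together can be written compactly: replacing one row of the Slater matrix yields the determinant ratio needed for the Metropolis test and an O(N²) update of the inverse. The sketch below is a generic linear-algebra illustration, not code from the authors' implementation.

    import numpy as np

    def sherman_morrison_row_update(a_inv, new_row, k):
        """Rank-1 update of the inverse when row k of A (one electron's orbital
        values) is replaced by new_row. Returns the determinant ratio
        det(A')/det(A), the quantity needed for the acceptance test, and the
        updated inverse."""
        ratio = new_row @ a_inv[:, k]              # matrix determinant lemma
        w = new_row @ a_inv                        # new_row . A^{-1}
        w[k] -= 1.0                                # equals (new_row - old_row) . A^{-1}
        a_inv_new = a_inv - np.outer(a_inv[:, k], w) / ratio
        return ratio, a_inv_new

    # Self-check against explicit recomputation on a small random matrix.
    rng = np.random.default_rng(2)
    A = rng.normal(size=(4, 4))
    A_inv = np.linalg.inv(A)
    new_row = rng.normal(size=4)

    ratio, A_inv_new = sherman_morrison_row_update(A_inv, new_row, k=2)
    A_new = A.copy()
    A_new[2] = new_row
    assert np.isclose(ratio, np.linalg.det(A_new) / np.linalg.det(A))
    assert np.allclose(A_inv_new, np.linalg.inv(A_new))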
Large-Scale Biomonitoring of Remote and Threatened Ecosystems via High-Throughput Sequencing
Gibson, Joel F.; Shokralla, Shadi; Curry, Colin; Baird, Donald J.; Monk, Wendy A.; King, Ian; Hajibabaei, Mehrdad
2015-01-01
Biodiversity metrics are critical for assessment and monitoring of ecosystems threatened by anthropogenic stressors. Existing sorting and identification methods are too expensive and labour-intensive to be scaled up to meet management needs. Alternately, a high-throughput DNA sequencing approach could be used to determine biodiversity metrics from bulk environmental samples collected as part of a large-scale biomonitoring program. Here we show that both morphological and DNA sequence-based analyses are suitable for recovery of individual taxonomic richness, estimation of proportional abundance, and calculation of biodiversity metrics using a set of 24 benthic samples collected in the Peace-Athabasca Delta region of Canada. The high-throughput sequencing approach was able to recover all metrics with a higher degree of taxonomic resolution than morphological analysis. The reduced cost and increased capacity of DNA sequence-based approaches will finally allow environmental monitoring programs to operate at the geographical and temporal scale required by industrial and regulatory end-users. PMID:26488407
Housworth, E A; Martins, E P
2001-01-01
Statistical randomization tests in evolutionary biology often require a set of random, computer-generated trees. For example, earlier studies have shown how large numbers of computer-generated trees can be used to conduct phylogenetic comparative analyses even when the phylogeny is uncertain or unknown. These methods were limited, however, in that (in the absence of molecular sequence or other data) they allowed users to assume that no phylogenetic information was available or that all possible trees were known. Intermediate situations where only a taxonomy or other limited phylogenetic information (e.g., polytomies) are available are technically more difficult. The current study describes a procedure for generating random samples of phylogenies while incorporating limited phylogenetic information (e.g., four taxa belong together in a subclade). The procedure can be used to conduct comparative analyses when the phylogeny is only partially resolved or can be used in other randomization tests in which large numbers of possible phylogenies are needed.
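One simple way to realize such constrained sampling is to resolve the constrained subclade first and then treat it as a single composite tip while randomly joining the remaining lineages. The sketch below draws topologies by random pairwise joining (a coalescent-style draw rather than a uniform sample over all labeled topologies) and is an illustration of the idea, not the authors' algorithm.

    import random

    def random_topology(tips, rng):
        """Build a random rooted topology by repeatedly joining two random
        lineages (nested-tuple representation)."""
        lineages = list(tips)
        while len(lineages) > 1:
            a, b = rng.sample(range(len(lineages)), 2)
            joined = (lineages[a], lineages[b])
            lineages = [x for i, x in enumerate(lineages) if i not in (a, b)]
            lineages.append(joined)
        return lineages[0]

    def random_tree_with_clade(all_tips, clade_tips, rng):
        """Random topology in which clade_tips are constrained to form a
        subclade: resolve the subclade first, then treat it as one tip."""
        subclade = random_topology(clade_tips, rng)
        rest = [t for t in all_tips if t not in clade_tips]
        return random_topology(rest + [subclade], rng)

    rng = random.Random(0)
    tree = random_tree_with_clade(list("ABCDEFGH"), list("ABCD"), rng)
    print(tree)   # nested-tuple topology; 'A'-'D' always form one clade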
Autonomous learning in gesture recognition by using lobe component analysis
NASA Astrophysics Data System (ADS)
Lu, Jian; Weng, Juyang
2007-02-01
Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). In order to assure robot safety when gestures are used in robot control, the interface must be implemented reliably and accurately. As in other PR applications, 1) feature selection (or model establishment) and 2) training from samples largely determine the performance of gesture recognition. For 1), a simple model with 6 feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures, and the movement of the arms is not considered. These restrictions are intended to reduce misrecognition and are not unreasonable. For 2), a new biological network method, called lobe component analysis (LCA), is used for unsupervised learning. Lobe components, corresponding to high concentrations in the probability of the neuronal input, are orientation-selective cells that follow the Hebbian rule and lateral inhibition. Because of the advantage of the LCA method in balancing learning between global and local features, large numbers of samples can be used efficiently in learning.
Rapid underway profiling of water quality in Queensland estuaries.
Hodge, Jonathan; Longstaff, Ben; Steven, Andy; Thornton, Phillip; Ellis, Peter; McKelvie, Ian
2005-01-01
We present an overview of a portable underway water quality monitoring system (RUM-Rapid Underway Monitoring), developed by integrating several off-the-shelf water quality instruments to provide rapid, comprehensive, and spatially referenced 'snapshots' of water quality conditions. We demonstrate the utility of the system from studies in the Northern Great Barrier Reef (Daintree River) and the Moreton Bay region. The Brisbane dataset highlights RUM's utility in characterising plumes as well as its ability to identify the smaller scale structure of large areas. RUM is shown to be particularly useful when measuring indicators with large small-scale variability such as turbidity and chlorophyll-a. Additionally, the Daintree dataset shows the ability to integrate other technologies, resulting in a more comprehensive analysis, whilst sampling offshore highlights some of the analytical issues required for sampling low concentration data. RUM is a low cost, highly flexible solution that can be modified for use in any water type, on most vessels and is only limited by the available monitoring technologies.
Ehama, Makoto; Hashihama, Fuminori; Kinouchi, Shinko; Kanda, Jota; Saito, Hiroaki
2016-06-01
Determining the total particulate phosphorus (TPP) and particulate inorganic phosphorus (PIP) in oligotrophic oceanic water generally requires the filtration of a large amount of water sample. This paper describes methods that require small filtration volumes for determining the TPP and PIP concentrations. The methods were devised by validating or improving conventional sample processing and by applying highly sensitive liquid waveguide spectrophotometry to the measurements of oxidized or acid-extracted phosphate from TPP and PIP, respectively. The oxidation of TPP was performed by a chemical wet oxidation method using 3% potassium persulfate. The acid extraction of PIP was initially carried out based on the conventional extraction methodology, which requires 1 M HCl, followed by the procedure for decreasing acidity. While the conventional procedure for acid removal requires a ten-fold dilution of the 1 M HCl extract with purified water, the improved procedure proposed in this study uses 8 M NaOH solution for neutralizing 1 M HCl extract in order to reduce the dilution effect. An experiment for comparing the absorbances of the phosphate standard dissolved in 0.1 M HCl and of that dissolved in a neutralized solution [1 M HCl : 8 M NaOH = 8:1 (v:v)] exhibited a higher absorbance in the neutralized solution. This indicated that the improved procedure completely removed the acid effect, which reduces the sensitivity of the phosphate measurement. Application to an ultraoligotrophic water sample showed that the TPP concentration in a 1075 mL-filtered sample was 8.4 nM with a coefficient of variation (CV) of 4.3% and the PIP concentration in a 2300 mL-filtered sample was 1.3 nM with a CV of 6.1%. Based on the detection limit (3 nM) of the sensitive phosphate measurement and the ambient TPP and PIP concentrations of the ultraoligotrophic water, the minimum filtration volumes required for the detection of TPP and PIP were estimated to be 15 and 52 mL, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
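The advantage of the 8:1 (v:v) neutralization over the conventional ten-fold water dilution is a simple mole-balance result, checked in the sketch below.

    # Mole balance for the improved acid-removal step: 8 volumes of 1 M HCl are
    # neutralized by 1 volume of 8 M NaOH (equal moles of acid and base), so the
    # extract is only diluted by a factor of 9/8 instead of the ten-fold dilution
    # of the conventional water-dilution procedure.
    v_hcl, c_hcl = 8.0, 1.0      # mL, mol/L
    v_naoh, c_naoh = 1.0, 8.0    # mL, mol/L

    assert abs(v_hcl * c_hcl - v_naoh * c_naoh) < 1e-9   # 8 mmol acid vs 8 mmol base
    dilution_neutralized = (v_hcl + v_naoh) / v_hcl      # 1.125x
    dilution_conventional = 10.0                         # ten-fold with purified water
    print(dilution_neutralized, dilution_conventional)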
NASA Astrophysics Data System (ADS)
Nishimura, Mitsugu; Baker, Earl W.
1987-06-01
Five recent sediment samples from a variety of North American continental shelves were analyzed for fatty acids (FAs) in the solvent-extractable (SOLEX) lipids as well as four types of non-solvent extractable (NONEX) lipids. The NONEX lipids were operationally defined by the succession of extraction procedures required to recover them. The complete procedure included (i) very mild acid treatment, (ii) HF digestion and (iii) saponification of the sediment residue following exhaustive solvent extraction. The distribution pattern and various compositional parameters of SOLEX FAs in the five sediments were divided into three different groups, indicating differences in biological sources as well as in diagenetic factors and processes among the three groups of samples. Nevertheless, the compositions of the corresponding NONEX FAs after acid treatment were surprisingly similar. This was also true for the remaining NONEX FA groups in the five sediment samples. The findings implied that most of the NONEX FAs reported here are derived directly from living organisms. It is also concluded that a large part of the NONEX FAs is much more resistant to biodegradation than previously thought, so that they can form a large percentage of total lipids with increasing depth of water and sediments.
An elutriation apparatus for assessing settleability of combined sewer overflows (CSOs).
Marsalek, J; Krishnappan, B G; Exall, K; Rochfort, Q; Stephens, R P
2006-01-01
An elutriation apparatus was proposed for testing the settleability of combined sewer overflows (CSOs) and applied to 12 CSO samples. In this apparatus, solids settling is measured under dynamic conditions created by flow through a series of settling chambers of varying diameters and upward flow velocities. Such a procedure reproduces turbulent settling in CSO tanks better than conventional settling columns, and facilitates testing coagulant additions under dynamic conditions. Among the limitations, one could name the relatively large size of the apparatus and samples (60 L), and inadequate handling of floatables. Settleability results obtained for the elutriation apparatus and a conventional settling column indicate large inter-event variation in CSO settleability. Under such circumstances, settling tanks need to be designed for "average" conditions and, within some limits, the differences in test results produced by various settleability testing apparatuses and procedures may be acceptable. Further development of the elutriation apparatus is under way, focusing on reducing flow velocities in the tubing connecting settling chambers and reducing the number of settling chambers employed. The first measure would reduce the risk of floc breakage in the connecting tubing and the second one would reduce the required sample size.
H I-SELECTED GALAXIES IN THE SLOAN DIGITAL SKY SURVEY. II. THE COLORS OF GAS-RICH GALAXIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, Andrew A.; Garcia-Appadoo, Diego A.; Dalcanton, Julianne J.
2009-09-15
We utilize color information for an H I-selected sample of 195 galaxies to explore the star formation histories and physical conditions that produce the observed colors. We show that the H I selection creates a significant offset toward bluer colors that can be explained by enhanced recent bursts of star formation. There is also no obvious color bimodality, because the H I selection restricts the sample to bluer, actively star-forming systems, diminishing the importance of the red sequence. Rising star formation rates are still required to explain the colors of galaxies bluer than g - r < 0.3. We also demonstrate that the colors of the bluest galaxies in our sample are dominated by emission lines and that stellar population synthesis models alone (without emission lines) are not adequate for reproducing many of the galaxy colors. These emission lines produce large changes in the r - i colors but leave the g - r color largely unchanged. In addition, we find an increase in the dispersion of galaxy colors at low masses that may be the result of a change in the star formation process in low-mass galaxies.
Schüttler, C; Buschhüter, N; Döllinger, C; Ebert, L; Hummel, M; Linde, J; Prokosch, H-U; Proynova, R; Lablans, M
2018-04-24
The large number of biobanks within Germany results in a high degree of heterogeneity with regard to the IT components used at the respective locations. Within the German Biobank Alliance (GBA), 13 biobanks implement harmonized processes for the provision of biomaterial and accompanying data. The networking of the individual biobanks and the associated harmonisation of the IT infrastructure should facilitate access to biomaterial and related clinical data. For this purpose, the relevant target groups were first identified in order to determine, in a workshop, their requirements for the IT solutions to be developed. Of the seven identified interest groups, three were initially invited to a first round of discussions. The input expressed by these stakeholders resulted in a catalogue of requirements with regard to IT support for (i) a sample and data request, (ii) the handling of patient consent and inclusion, and (iii) the subsequent evaluation of the sample and data request. The next step is to design the IT solutions as prototypes based on these requirements. In parallel, further user groups are being surveyed in order to further concretise the specifications for development.
Recent development in software and automation tools for high-throughput discovery bioanalysis.
Shou, Wilson Z; Zhang, Jun
2012-05-01
Bioanalysis with LC-MS/MS has been established as the method of choice for quantitative determination of drug candidates in biological matrices in drug discovery and development. The LC-MS/MS bioanalytical support for drug discovery, especially for early discovery, often requires high-throughput (HT) analysis of large numbers of samples (hundreds to thousands per day) generated from many structurally diverse compounds (tens to hundreds per day) with a very quick turnaround time, in order to provide important activity and liability data to move discovery projects forward. Another important consideration for discovery bioanalysis is its fit-for-purpose quality requirement depending on the particular experiments being conducted at this stage, and it is usually not as stringent as those required in bioanalysis supporting drug development. These aforementioned attributes of HT discovery bioanalysis made it an ideal candidate for using software and automation tools to eliminate manual steps, remove bottlenecks, improve efficiency and reduce turnaround time while maintaining adequate quality. In this article we will review various recent developments that facilitate automation of individual bioanalytical procedures, such as sample preparation, MS/MS method development, sample analysis and data review, as well as fully integrated software tools that manage the entire bioanalytical workflow in HT discovery bioanalysis. In addition, software tools supporting the emerging high-resolution accurate MS bioanalytical approach are also discussed.
Power analysis to detect treatment effects in longitudinal clinical trials for Alzheimer's disease.
Huang, Zhiyue; Muniz-Terrera, Graciela; Tom, Brian D M
2017-09-01
Assessing cognitive and functional changes at the early stage of Alzheimer's disease (AD) and detecting treatment effects in clinical trials for early AD are challenging. Under the assumption that transformed versions of the Mini-Mental State Examination, the Clinical Dementia Rating Scale-Sum of Boxes, and the Alzheimer's Disease Assessment Scale-Cognitive Subscale tests'/components' scores are from a multivariate linear mixed-effects model, we calculated the sample sizes required to detect treatment effects on the annual rates of change in these three components in clinical trials for participants with mild cognitive impairment. Our results suggest that a large number of participants would be required to detect a clinically meaningful treatment effect in a population with preclinical or prodromal Alzheimer's disease. We found that the transformed Mini-Mental State Examination is more sensitive for detecting treatment effects in early AD than the transformed Clinical Dementia Rating Scale-Sum of Boxes and Alzheimer's Disease Assessment Scale-Cognitive Subscale. The use of optimal weights to construct powerful test statistics or sensitive composite scores/endpoints can reduce the required sample sizes needed for clinical trials. Consideration of the multivariate/joint distribution of components' scores rather than the distribution of a single composite score when designing clinical trials can lead to an increase in power and reduced sample sizes for detecting treatment effects in clinical trials for early AD.
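As a rough illustration of how such sample-size figures arise, the sketch below applies a standard two-sample normal approximation to a difference in annual rates of change; it is not the paper's multivariate mixed-model calculation, and the effect size and slope standard deviation are hypothetical values chosen only for the example.

```python
from scipy.stats import norm

def n_per_arm(delta, sd_slope, alpha=0.05, power=0.80):
    """Per-arm sample size to detect a difference `delta` in annual rate of
    change between two arms, given a between-subject slope SD (normal approx.)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd_slope / delta) ** 2

# Hypothetical scenario: a treatment slows a 1.5-points/year decline by 25%,
# with a between-subject slope SD of 2.0 points/year.
print(round(n_per_arm(delta=0.25 * 1.5, sd_slope=2.0)))   # several hundred per arm
```

Optimally weighted composites reduce the effective ratio of slope variability to detectable effect, which is how the weighting strategy described above lowers the required sample size.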
Orth, Patrick; Zurakowski, David; Alini, Mauro; Cucchiarini, Magali
2013-01-01
Advanced tissue engineering approaches for articular cartilage repair in the knee joint rely on translational animal models. In these investigations, cartilage defects may be established either in one joint (unilateral design) or in both joints of the same animal (bilateral design). We hypothesized that a lower intraindividual variability following the bilateral strategy would reduce the number of required joints. Standardized osteochondral defects were created in the trochlear groove of 18 rabbits. In 12 animals, defects were produced unilaterally (unilateral design; n=12 defects), while defects were created bilaterally in 6 animals (bilateral design; n=12 defects). After 3 weeks, osteochondral repair was evaluated histologically by applying an established grading system. Based on intra- and interindividual variabilities, required sample sizes for the detection of discrete differences in the histological score were determined for both study designs (α=0.05, β=0.20). Coefficients of variation (%CV) of the total histological score values were 1.9-fold higher for the unilateral design than for the bilateral approach (26 versus 14%CV). The resulting numbers of joints required were always higher for the unilateral design, resulting in an up to 3.9-fold increase in the required number of experimental animals. This effect was most pronounced when detecting small effect sizes and when large standard deviations were assumed. The data underline the possible benefit of bilateral study designs for the decrease of sample size requirements for certain investigations in articular cartilage research. These findings might also be transferred to other scoring systems, defect types, or translational animal models in the field of cartilage tissue engineering. PMID:23510128
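The link between the reported coefficients of variation and sample size can be sketched with the usual two-sample approximation below; the 26% and 14% CVs come from the abstract, while the 20% detectable difference is a hypothetical target, so the printed numbers are illustrative rather than the study's exact values.

```python
from scipy.stats import norm

def joints_per_group(cv, rel_diff, alpha=0.05, power=0.80):
    """Joints per group needed to detect a relative difference in the mean
    histological score, given a coefficient of variation (normal approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * cv / rel_diff) ** 2

for design, cv in [("unilateral", 0.26), ("bilateral", 0.14)]:
    # hypothetical goal: detect a 20% difference in the total score
    print(design, round(joints_per_group(cv, rel_diff=0.20)))
```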
Ground-water resources of Monmouth County, New Jersey
Jablonski, Leo A.
1968-01-01
Aquifers in the Raritan and Magothy Formations and the Englishtown Formation supplied 76 percent of the ground water used in 1958. These aquifers, in conjunction with the Wenonah Formation and Mount Laurel Sand of Late Cretaceous age, are capable of providing relatively large yields to wells. The average yield of 63 large-diameter wells tapping these aquifers is 580 gpm, at depths ranging from 100 to 1,140 feet. In general, the concentrations of chemical constituents in water from the aquifers would not restrict the use of the water for most purposes. High concentrations of iron do occur and require treatment. The concentrations of dissolved solids in 39 of the 41 samples were 160 ppm (parts per million) or less.
Puls, Robert W.; Eychaner, James H.; Powell, Robert M.
1996-01-01
Investigations at Pinal Creek, Arizona, evaluated routine sampling procedures for determination of aqueous inorganic geochemistry and assessment of contaminant transport by colloidal mobility. Sampling variables included pump type and flow rate, collection under air or nitrogen, and filter pore diameter. During well purging and sample collection, suspended particle size and number as well as dissolved oxygen, temperature, specific conductance, pH, and redox potential were monitored. Laboratory analyses of both unfiltered samples and the filtrates were performed by inductively coupled argon plasma, atomic absorption with graphite furnace, and ion chromatography. Scanning electron microscopy with energy-dispersive X-ray analysis was also used for analysis of filter particulates. Suspended particle counts consistently required approximately twice as long as the other field-monitored indicators to stabilize. High-flow-rate pumps entrained normally nonmobile particles. Differences in elemental concentrations obtained with different filter-pore sizes were generally not large, with only two wells showing differences greater than 10 percent. Similar differences (>10%) were observed for some wells when samples were collected under nitrogen rather than in air. Fe2+/Fe3+ ratios for air-collected samples were smaller than for samples collected under a nitrogen atmosphere, reflecting sampling-induced oxidation.
A Self-Directed Method for Cell-Type Identification and Separation of Gene Expression Microarrays
Zuckerman, Neta S.; Noam, Yair; Goldsmith, Andrea J.; Lee, Peter P.
2013-01-01
Gene expression analysis is generally performed on heterogeneous tissue samples consisting of multiple cell types. Current methods developed to separate heterogeneous gene expression rely on prior knowledge of the cell-type composition and/or signatures - these are not available in most public datasets. We present a novel method to identify the cell-type composition, signatures and proportions per sample without the need for a priori information. The method was successfully tested on controlled and semi-controlled datasets and performed as accurately as current methods that do require additional information. As such, this method enables the analysis of cell-type specific gene expression using existing large pools of publicly available microarray datasets. PMID:23990767
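The authors' self-directed algorithm is described in the paper itself; purely as an illustration of blind separation of mixed expression profiles, the sketch below factorizes a simulated samples-by-genes matrix into cell-type signatures and per-sample proportions with non-negative matrix factorization (all names and numbers are invented for the example).

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Simulated data: 3 hidden cell types, 40 heterogeneous samples, 500 genes.
signatures = rng.gamma(2.0, 1.0, size=(3, 500))             # cell-type expression profiles
proportions = rng.dirichlet(np.ones(3), size=40)             # mixing fractions per sample
expression = proportions @ signatures + rng.normal(0, 0.05, (40, 500)).clip(0)

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(expression)                           # ~ proportions (unscaled)
H = model.components_                                         # ~ cell-type signatures
est_proportions = W / W.sum(axis=1, keepdims=True)            # rows sum to one
print(est_proportions[:3].round(2))
```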
Ito, Shinya; Tsukada, Katsuo
2002-01-11
An evaluation of the feasibility of liquid chromatography-mass spectrometry (LC-MS) with atmospheric pressure ionization was made for quantitation of four diarrhetic shellfish poisoning toxins, okadaic acid, dinophysistoxin-1, pectenotoxin-6 and yessotoxin, in scallops. When LC-MS was applied to the analysis of scallop extracts, large signal suppressions were observed due to coeluting substances from the column. To compensate for these matrix signal suppressions, the standard addition method was applied. First, the sample is analyzed, and then the same sample spiked with calibration standards is analyzed. Although this method requires two LC-MS runs per analysis, effective correction of the quantitative errors was achieved.
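Standard addition works because the analyte and the added standard experience the same matrix suppression, so the unknown concentration follows directly from the two runs; a single-point version with purely hypothetical numbers is sketched below.

```python
def standard_addition(signal_sample, signal_spiked, c_added):
    """Single-point standard addition: analyte concentration in the extract,
    assuming the detector response stays linear over the spiked range."""
    return signal_sample * c_added / (signal_spiked - signal_sample)

# Hypothetical peak areas for okadaic acid before and after spiking 10 ng/mL.
print(standard_addition(signal_sample=1.2e5, signal_spiked=3.0e5, c_added=10.0))
# -> estimated concentration in the extract, in ng/mL
```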
Wright, John J; Salvadori, Enrico; Bridges, Hannah R; Hirst, Judy; Roessler, Maxie M
2016-09-01
EPR-based potentiometric titrations are a well-established method for determining the reduction potentials of cofactors in large and complex proteins with at least one EPR-active state. However, such titrations require large amounts of protein. Here, we report a new method that requires an order of magnitude less protein than previously described methods, and that provides EPR samples suitable for measurements at both X- and Q-band microwave frequencies. We demonstrate our method by determining the reduction potential of the terminal [4Fe-4S] cluster (N2) in the intramolecular electron-transfer relay in mammalian respiratory complex I. The value determined by our method, Em7 = -158 mV, is precise, reproducible, and consistent with previously reported values. Our small-volume potentiometric titration method will facilitate detailed investigations of EPR-active centres in non-abundant and refractory proteins that can only be prepared in small quantities. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
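Reduction potentials in such titrations are typically obtained by fitting the potential dependence of the EPR signal with a one-electron Nernst curve; a minimal fitting sketch on simulated data is given below (the simulated points are not the paper's measurements).

```python
import numpy as np
from scipy.optimize import curve_fit

def nernst(E, Em, n=1, T=298.15):
    """Fraction of centres reduced for an n-electron couple at potential E (volts)."""
    F, R = 96485.0, 8.314
    return 1.0 / (1.0 + np.exp(n * F * (E - Em) / (R * T)))

E = np.linspace(-0.30, 0.00, 13)                              # poised potentials (V)
rng = np.random.default_rng(1)
frac = nernst(E, Em=-0.158) + rng.normal(0, 0.02, E.size)     # simulated EPR amplitudes

(Em_fit,), _ = curve_fit(lambda E, Em: nernst(E, Em), E, frac, p0=[-0.1])
print(f"fitted Em7 = {Em_fit * 1000:.0f} mV")
```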
Development of reaction-sintered SiC mirror for space-borne optics
NASA Astrophysics Data System (ADS)
Yui, Yukari Y.; Kimura, Toshiyoshi; Tange, Yoshio
2017-11-01
We are developing a high-strength reaction-sintered silicon carbide (RS-SiC) mirror as one of the promising new candidates for large-diameter space-borne optics. In order to observe the earth's surface or atmosphere with high spatial resolution from geostationary orbit, larger-diameter primary mirrors of 1-2 m are required. One of the difficult problems to be solved to realize such an optical system is to obtain as flat a mirror surface as possible to ensure imaging performance in the infrared-visible-ultraviolet wavelength region. This means that homogeneous nano-order surface flatness/roughness is required for the mirror. The high-strength RS-SiC developed and manufactured by TOSHIBA is one of the most excellent and feasible candidates for this purpose. Small RS-SiC plane sample mirrors have been manufactured, and their basic physical parameters and optical performance have been measured. We show the current state of the art of the RS-SiC mirror and the feasibility of a large-diameter RS-SiC mirror for space-borne optics.
2009-10-01
parameters for a large number of species. These authors provide many sample calculations with the JCZS database incorporated in CHEETAH 2.0, including...
A Summary of the Naval Postgraduate School Research Program and Recent Publications
1990-09-01
...principles to divide the spectrum of a wide-band spread-spectrum signal into sub-bands, implemented as a MATLAB computer program on a 386-type computer. Because of the high rf ... a large data sample was required, and an extended version of MATLAB was used. ... Effects due to the fiber optic pickup array ... applications such as orbital mechanics and weather prediction. Professor Gragg has also developed numerous MATLAB programs for linear programming problems
Optimization of Composting for Explosives Contaminated Soil
1991-09-30
undesirable and essentially economically unfeasible for the remediation of small sites due to the large expenditures required for the mobilization and... mm, 5 micron. * Detector: UV absorbance at 250 nm. * Mobile phase: 52% methanol/48% water. * Flow rate: 1.5 mL/min. * Injection volume: 50 µL. The... and 10x calibration standards. Samples were diluted with mobile phase as necessary to bring target analytes into the
United States planetary rover status: 1989
NASA Technical Reports Server (NTRS)
Pivirotto, Donna L. S.; Dias, William C.
1990-01-01
A spectrum of concepts for planetary rovers and rover missions is covered. Rovers studied range from tiny micro rovers to large and highly automated vehicles capable of traveling hundreds of kilometers and performing complex tasks. Rover concepts are addressed both for the Moon and Mars, including a Lunar/Mars common rover capable of supporting either program with relatively small modifications. Mission requirements considered include both Science and Human Exploration. Studies include a range of autonomy in rovers, from interactive teleoperated systems to those requiring an onboard System Executive making very high-level decisions. Both high and low technology rover options are addressed. Subsystems are described for a representative selection of these rovers, including: Mobility, Sample Acquisition, Science, Vehicle Control, Thermal Control, Local Navigation, Computation and Communications. System descriptions of rover concepts include diagrams, technology levels, system characteristics, and performance measurement in terms of distance covered, samples collected, and area surveyed for specific representative missions. Rover development schedules and costs are addressed for Lunar and Mars exploration initiatives.
Martin, Brigitte E.; Jia, Kun; Sun, Hailiang; Ye, Jianqiang; Hall, Crystal; Ware, Daphne; Wan, Xiu-Feng
2016-01-01
Identification of antigenic variants is the key to a successful influenza vaccination program. The empirical serological methods to determine influenza antigenic properties require viral propagation. Here a novel quantitative PCR-based antigenic characterization method using polyclonal antibodies and proximity ligation assays (so-called polyPLA) was developed and validated. This method can detect a viral titer of less than 1000 TCID50/mL. Not only can this method differentiate between different HA subtypes of influenza viruses, but it can also effectively identify antigenic drift events within the same HA subtype. Applications to H3N2 seasonal influenza data showed that the results from this novel method are consistent with those from the conventional serological assays. This method is not limited to the detection of antigenic variants in influenza; it can also be applied to other pathogens. It has the potential to be applied through a large-scale platform in disease surveillance requiring minimal biosafety and directly using clinical samples. PMID:25546251
Diez-Martin, J; Moreno-Ortega, M; Bagney, A; Rodriguez-Jimenez, R; Padilla-Torres, D; Sanchez-Morla, E M; Santos, J L; Palomo, T; Jimenez-Arriero, M A
2014-01-01
To assess insight in a large sample of patients with schizophrenia and to study its relationship with set shifting as an executive function. The insight of a sample of 161 clinically stable, community-dwelling patients with schizophrenia was evaluated by means of the Scale to Assess Unawareness of Mental Disorder (SUMD). Set shifting was measured using the Trail-Making Test time required to complete part B minus the time required to complete part A (TMT B-A). Linear regression analyses were performed to investigate the relationships of TMT B-A with different dimensions of general insight. Regression analyses revealed a significant association between TMT B-A and two of the SUMD general components: 'awareness of mental disorder' and 'awareness of the efficacy of treatment'. The 'awareness of social consequences' component was not significantly associated with set shifting. Our results show a significant relation between set shifting and insight, but not in the same manner for the different components of the SUMD general score. Copyright © 2013 S. Karger AG, Basel.
Non-Contact Temperature Requirements (NCTM) for drop and bubble physics
NASA Technical Reports Server (NTRS)
Hmelo, Anthony B.; Wang, Taylor G.
1989-01-01
Many of the materials research experiments to be conducted in the Space Processing program require a non-contaminating method of manipulating and controlling weightless molten materials. In these experiments, the melt is positioned and formed within a container without physically contacting the container's wall. An acoustic method, which was developed by Professor Taylor G. Wang before coming to Vanderbilt University from the Jet Propulsion Laboratory, has demonstrated the capability of positioning and manipulating room temperature samples. This was accomplished in an earth-based laboratory with a zero-gravity environment of short duration. However, many important facets of high temperature containerless processing technology have not been established yet, nor can they be established from the room temperature studies, because the details of the interaction between an acoustic field and a molten sample are largely unknown. Drop dynamics, bubble dynamics, coalescence behavior of drops and bubbles, electromagnetic and acoustic levitation methods applied to molten metals, and thermal streaming are among the topics discussed.
Polyphenols excreted in urine as biomarkers of total polyphenol intake.
Medina-Remón, Alexander; Tresserra-Rimbau, Anna; Arranz, Sara; Estruch, Ramón; Lamuela-Raventos, Rosa M
2012-11-01
Nutritional biomarkers have several advantages in acquiring data for epidemiological and clinical studies over traditional dietary assessment tools, such as food frequency questionnaires. While food frequency questionnaires constitute a subjective methodology, biomarkers can provide a less biased and more accurate measure of specific nutritional intake. A precise estimation of polyphenol consumption requires blood or urine sample biomarkers, although their association is usually highly complex. This article reviews recent research on urinary polyphenols as potential biomarkers of polyphenol intake, focusing on clinical and epidemiological studies. We also report a potentially useful methodology to assess total polyphenols in urine samples, which allows a rapid, simultaneous determination of total phenols in a large number of samples. This methodology can be applied in studies evaluating the utility of urinary polyphenols as markers of polyphenol intake, bioavailability and accumulation in the body.
Mass decomposition of galaxies using DECA software package
NASA Astrophysics Data System (ADS)
Mosenkov, A. V.
2014-01-01
The new DECA software package, which is designed to perform photometric analysis of the images of disk and elliptical galaxies having a regular structure, is presented. DECA is written in Python interpreted language and combines the capabilities of several widely used packages for astronomical data processing such as IRAF, SExtractor, and the GALFIT code used to perform two-dimensional decomposition of galaxy images into several photometric components (bulge+disk). DECA has the advantage that it can be applied to large samples of galaxies with different orientations with respect to the line of sight (including edge-on galaxies) and requires minimum human intervention. Examples of using the package to study a sample of simulated galaxy images and a sample of real objects are shown to demonstrate that DECA can be a reliable tool for the study of the structure of galaxies.
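Decomposition of the kind DECA automates through GALFIT fits parametric surface-brightness components to each image; purely as a one-dimensional illustration (not the package's actual two-dimensional procedure), the sketch below fits a Sérsic bulge plus an exponential disk to a synthetic radial profile.

```python
import numpy as np
from scipy.optimize import curve_fit

def bulge_disk(r, Ie, Re, n, I0, h):
    """Sersic bulge plus exponential disk surface-brightness profile."""
    bn = 2.0 * n - 1.0 / 3.0                       # common approximation for b_n
    bulge = Ie * np.exp(-bn * ((r / Re) ** (1.0 / n) - 1.0))
    disk = I0 * np.exp(-r / h)
    return bulge + disk

r = np.linspace(0.5, 30.0, 60)                     # radius in arcsec (synthetic)
profile = bulge_disk(r, Ie=50.0, Re=2.0, n=2.5, I0=20.0, h=8.0)
popt, _ = curve_fit(bulge_disk, r, profile,
                    p0=[40.0, 3.0, 2.0, 15.0, 10.0], maxfev=10000)
print(dict(zip(["Ie", "Re", "n", "I0", "h"], popt.round(2))))
```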
Paquet, Victor; Joseph, Caroline; D'Souza, Clive
2012-01-01
Anthropometric studies typically require a large number of individuals selected so that the demographic characteristics that impact body size and function are proportionally representative of a user population. This sampling approach does not allow for an efficient characterization of the distribution of body sizes and functions of sub-groups within a population, and the demographic characteristics of user populations can often change with time, limiting the application of the anthropometric data in design. The objective of this study is to demonstrate how demographically representative user populations can be developed from samples that are not proportionally representative in order to improve the application of anthropometric data in design. An engineering anthropometry problem of door width and clear floor space width is used to illustrate the value of the approach.
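One common way to build a demographically representative virtual population from a non-proportional sample is post-stratification: each respondent is weighted by the ratio of the population share to the sample share of their demographic cell. The sketch below is a generic illustration under that assumption, not necessarily the authors' exact procedure; the census shares and measurements are invented.

```python
import pandas as pd

# Hypothetical anthropometric sample with demographic cells sampled unevenly.
sample = pd.DataFrame({
    "sex": ["F", "F", "F", "M", "M", "F", "M", "M"],
    "age": ["<40", "<40", "<40", "<40", ">=40", ">=40", "<40", ">=40"],
    "hip_breadth_mm": [385, 390, 402, 360, 375, 410, 355, 380],
})
population_share = {("F", "<40"): 0.24, ("F", ">=40"): 0.27,
                    ("M", "<40"): 0.23, ("M", ">=40"): 0.26}   # assumed census shares

sample_share = sample.groupby(["sex", "age"]).size() / len(sample)
sample["weight"] = [population_share[cell] / sample_share[cell]
                    for cell in zip(sample["sex"], sample["age"])]

weighted_mean = (sample["hip_breadth_mm"] * sample["weight"]).sum() / sample["weight"].sum()
print(round(weighted_mean, 1))    # population-weighted estimate for design use
```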
The costs and effectiveness of large Phase III pre-licensure vaccine clinical trials.
Black, Steven
2015-01-01
Prior to the 1980s, most vaccines were licensed based upon safety and effectiveness studies in several hundred individuals. Beginning with the evaluation of Haemophilus influenzae type b conjugate vaccines, much larger pre-licensure trials became common. The pre-licensure trial for Haemophilus influenzae oligosaccharide conjugate vaccine had more than 60,000 children and that of the seven-valent pneumococcal conjugate vaccine included almost 38,000 children. Although trial sizes for both of these studies were driven by the sample size required to demonstrate efficacy, the sample size requirements for safety evaluations of other vaccines have subsequently increased. With the demonstration of an increased risk of intussusception following the Rotashield brand rotavirus vaccine, this trend has continued. However, routinely requiring safety studies of 20,000-50,000 or more participants has two major downsides. First, the cost of performing large safety trials routinely prior to licensure of a vaccine is very large, with some estimates as high as US$200 million for one vaccine. This high financial cost engenders an opportunity cost whereby the number of vaccines that a company is willing or able to develop to meet public health needs becomes limited by this financial barrier. The second downside is that in the pre-licensure setting, such studies are very time consuming and delay the availability of a beneficial vaccine substantially. One might argue that in some situations this financial commitment is warranted, such as for evaluations of the risk of intussusception following newer rotavirus vaccines. However, it must be noted that while an increased risk of intussusception was not identified in large pre-licensure studies of these newer vaccines, an increased risk of this outcome has been identified in post-marketing evaluations. Thus, even the extensive pre-licensure evaluations conducted did not identify an associated risk. The limitations of large Phase III trials have also been demonstrated in efficacy trials. Notably, pre-licensure trials of pneumococcal conjugate vaccine severely underestimated its true effect and cost-effectiveness. In fact, in discussions prior to vaccine introduction in the USA for PCV7, the vaccine was said to be not cost-effective and some counseled against its introduction. In reality, following introduction, PCV7 has been shown to be highly cost-effective. In the last decade, new methods have been identified using large linked databases, such as the Vaccine Safety Datalink in the USA, that allow identification of an increased risk of an event within a few months of vaccine introduction and that can screen for unanticipated very rare events as well. In addition, the availability of electronic medical records and hospital discharge data in many settings allows for accurate assessment of vaccine effectiveness. Given the high financial and opportunity cost of requiring large pre-licensure safety studies, consideration could be given to 'conditional licensure' of vaccines whose delivery system is well characterized in a setting where sophisticated pharmacovigilance systems exist, on the condition that such licensure would incorporate a requirement for rapid-cycle and other real-time evaluations of safety and effectiveness following introduction. This would allow for a more complete and timely evaluation of vaccines, lower the financial barrier to the development of new vaccines and thus allow a broader portfolio of vaccines to be developed and successfully introduced.
Chen, Changjun
2016-03-31
The free energy landscape is the most important information in the study of the reaction mechanisms of molecules. However, it is difficult to calculate: in a large collective variable space, a molecule requires a long simulation time to obtain sufficient sampling. To reduce the computational cost, it is necessary in practice to restrict the sampling region and construct a local free energy landscape. However, the restricted region in the collective variable space may have an irregular shape, and simply restricting one or more collective variables of the molecule cannot satisfy this requirement. In this paper, we propose a modified tomographic method to perform the simulation. First, it divides the restricted region with a set of hyperplanes and connects the centers of the hyperplanes by a curve. Second, it forces the molecule to sample on the curve and on the hyperplanes during the simulation and calculates the free energy data on them. Finally, all the free energy data are combined to form the local free energy landscape. Because the area outside the restricted region is not considered, the free energy calculation is more efficient. With this method, one can also quickly optimize the path in the collective variable space.
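Confining sampling to a hyperplane in collective-variable space is often done with a harmonic restraint on the signed distance to the plane; the sketch below is a generic illustration of that idea, not the paper's actual implementation.

```python
import numpy as np

def hyperplane_restraint(cv, center, normal, k=500.0):
    """Harmonic bias keeping a point in collective-variable space on the
    hyperplane through `center` with normal `normal`; returns (energy, force)."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    d = np.dot(np.asarray(cv, dtype=float) - center, n)   # signed distance to the plane
    return 0.5 * k * d**2, -k * d * n                      # bias energy and force on the CVs

# Example: restrain a 3-D collective variable near the plane through (1, 0, 2)
# whose normal points along the first CV direction.
energy, force = hyperplane_restraint(cv=[1.3, -0.2, 2.1],
                                     center=[1.0, 0.0, 2.0],
                                     normal=[1.0, 0.0, 0.0])
print(energy, force)
```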
NASA Technical Reports Server (NTRS)
Sherry, Lance; Feary, Michael; Polson, Peter; Fennell, Karl
2003-01-01
The Flight Management Computer (FMC) and its interface, the Multi-function Control and Display Unit (MCDU), have been identified by researchers and airlines as difficult to train and use. Specifically, airline pilots have described the "drinking from the fire-hose" effect during training. Previous research has identified memorized action sequences as a major factor in a user's ability to learn and operate complex devices. This paper discusses the use of a method to examine the quantity of memorized action sequences required to perform a sample of 102 tasks, using features of the Boeing 777 Flight Management Computer Interface. The analysis identified a large number of memorized action sequences that must be learned during training and then recalled during line operations. Seventy-five percent of the tasks examined require recall of at least one memorized action sequence. Forty-five percent of the tasks require recall of a memorized action sequence and occur infrequently. The large number of memorized action sequences may provide an explanation for the difficulties in training and usage of the automation. Based on these findings, implications for training and the design of new user interfaces are discussed.
Elmer-Dixon, Margaret M; Bowler, Bruce E
2018-05-19
A novel approach to quantify mixed lipid systems is described. Traditional approaches to lipid vesicle quantification are time-consuming, require large amounts of material and are destructive. We extend our recently described method for quantification of pure lipid systems to mixed lipid systems. The method only requires a UV-Vis spectrometer and does not destroy the sample. Mie scattering data from absorbance measurements are used as input into a Matlab program to calculate the total vesicle concentration and the concentrations of each lipid in the mixed lipid system. The technique is fast and accurate, which is essential for analytical lipid binding experiments. Copyright © 2018. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Barney, Rich; Bauman, Jill; Feinberg, Lee; Mcleese, Dan; Singh, Upendra
2011-01-01
In August 2010, the NASA Office of Chief Technologist (OCT) commissioned an assessment of 15 different technology areas of importance to the future of NASA. Technology assessment #8 (TA8) was Science Instruments, Observatories and Sensor Systems (SIOSS). SIOSS assessed the needs for optical technology ranging from detectors to lasers, x-ray mirrors to microwave antennas, and in-situ spectrographs for on-surface planetary sample characterization to large space telescopes. The needs assessment looked across the entirety of NASA and not just the Science Mission Directorate. This paper reviews the optical manufacturing and testing technologies identified by SIOSS that require development in order to enable future NASA high-priority missions.
3D hyperpolarized C-13 EPI with calibrationless parallel imaging
NASA Astrophysics Data System (ADS)
Gordon, Jeremy W.; Hansen, Rie B.; Shin, Peter J.; Feng, Yesu; Vigneron, Daniel B.; Larson, Peder E. Z.
2018-04-01
With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and temporal resolution. Calibrationless parallel imaging approaches are well-suited for this application because they eliminate the need to acquire coil profile maps or auto-calibration data. In this work, we explored the utility of a calibrationless parallel imaging method (SAKE) and corresponding sampling strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated its application in a human study of [1-13C]pyruvate metabolism.
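Calibrationless reconstruction removes the need for coil maps, but the undersampling pattern still has to be chosen; below is a minimal sketch of a variable-density random mask for the ky-kz plane of a 3D acquisition, with parameters that are arbitrary placeholders rather than those used in the study.

```python
import numpy as np

def vd_mask(ny, nz, accel=4.0, full_radius=0.08, power=2.0, seed=0):
    """Variable-density Bernoulli undersampling mask for the ky-kz plane."""
    rng = np.random.default_rng(seed)
    ky, kz = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nz), indexing="ij")
    r = np.sqrt(ky**2 + kz**2)
    prob = (1.0 - r.clip(0, 1)) ** power           # sample more densely near the centre
    prob[r < full_radius] = 1.0                    # fully sample the k-space centre
    prob *= (ny * nz / accel) / prob.sum()         # rescale toward the target acceleration
    return rng.random((ny, nz)) < prob.clip(0, 1)

mask = vd_mask(48, 32, accel=4.0)
print(f"acquired fraction: {mask.mean():.2f}")     # roughly 1/accel
```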
Multidimensional Normalization to Minimize Plate Effects of Suspension Bead Array Data.
Hong, Mun-Gwan; Lee, Woojoo; Nilsson, Peter; Pawitan, Yudi; Schwenk, Jochen M
2016-10-07
Enhanced by the growing number of biobanks, biomarker studies can now be performed with reasonable statistical power by using large sets of samples. Antibody-based proteomics by means of suspension bead arrays offers one attractive approach to analyze serum, plasma, or CSF samples for such studies in microtiter plates. To expand measurements beyond single batches, with either 96 or 384 samples per plate, suitable normalization methods are required to minimize the variation between plates. Here we propose two normalization approaches utilizing MA coordinates. The multidimensional MA (multi-MA) and MA-loess both consider all samples of a microtiter plate per suspension bead array assay and thus do not require any external reference samples. We demonstrate the performance of the two MA normalization methods with data obtained from the analysis of 384 samples including both serum and plasma. Samples were randomized across 96-well sample plates, processed, and analyzed in assay plates, respectively. Using principal component analysis (PCA), we could show that plate-wise clusters found in the first two components were eliminated by multi-MA normalization as compared with other normalization methods. Furthermore, we studied the correlation profiles between random pairs of antibodies and found that both MA normalization methods substantially reduced the inflated correlation introduced by plate effects. Normalization approaches using multi-MA and MA-loess minimized batch effects arising from the analysis of several assay plates with antibody suspension bead arrays. In a simulated biomarker study, multi-MA restored associations lost due to plate effects. Our normalization approaches, which are available as R package MDimNormn, could also be useful in studies using other types of high-throughput assay data.
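The full multi-MA and MA-loess procedures are implemented in the authors' MDimNormn R package; the toy sketch below only illustrates the underlying MA idea for two plates, removing a plate-wide bias by centering the M (log-ratio) values, with simulated intensities standing in for real bead-array data.

```python
import numpy as np

def ma_center(plate_a, plate_b):
    """Toy two-plate illustration of the MA idea: M = log-ratio between plates,
    A = average log-intensity. The plate bias is removed by subtracting the median M;
    the published multi-MA / MA-loess methods generalize this to many plates and
    fit M as a function of A with loess instead of using a single median."""
    la, lb = np.log2(plate_a), np.log2(plate_b)
    M = la - lb
    A = 0.5 * (la + lb)          # used by the loess variant; unused by the median shift
    shift = np.median(M)
    return plate_a / 2 ** (shift / 2), plate_b * 2 ** (shift / 2)

rng = np.random.default_rng(2)
true = rng.lognormal(8, 0.5, 300)                    # per-antibody signal intensities
plate1 = true * rng.lognormal(0, 0.05, 300)          # plate 1: measurement noise only
plate2 = true * 1.3 * rng.lognormal(0, 0.05, 300)    # plate 2 carries a 30% batch effect
norm1, norm2 = ma_center(plate1, plate2)
print(np.median(plate1 / plate2), np.median(norm1 / norm2))   # ~0.77 before, ~1 after
```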
Campi-Azevedo, Ana Carolina; Peruhype-Magalhães, Vanessa; Coelho-Dos-Reis, Jordana Grazziela; Costa-Pereira, Christiane; Yamamura, Anna Yoshida; Lima, Sheila Maria Barbosa de; Simões, Marisol; Campos, Fernanda Magalhães Freire; de Castro Zacche Tonini, Aline; Lemos, Elenice Moreira; Brum, Ricardo Cristiano; de Noronha, Tatiana Guimarães; Freire, Marcos Silva; Maia, Maria de Lourdes Sousa; Camacho, Luiz Antônio Bastos; Rios, Maria; Chancey, Caren; Romano, Alessandro; Domingues, Carla Magda; Teixeira-Carvalho, Andréa; Martins-Filho, Olindo Assis
2017-09-01
Technological innovations in vaccinology have recently contributed to bring about novel insights for the vaccine-induced immune response. While the current protocols that use peripheral blood samples may provide abundant data, a range of distinct components of whole blood samples are required and the different anticoagulant systems employed may impair some properties of the biological sample and interfere with functional assays. Although the interference of heparin in functional assays for viral neutralizing antibodies such as the functional plaque-reduction neutralization test (PRNT), considered the gold-standard method to assess and monitor the protective immunity induced by the Yellow fever virus (YFV) vaccine, has been well characterized, the development of pre-analytical treatments is still required for the establishment of optimized protocols. The present study intended to optimize and evaluate the performance of pre-analytical treatment of heparin-collected blood samples with ecteola-cellulose (ECT) to provide accurate measurement of anti-YFV neutralizing antibodies, by PRNT. The study was designed in three steps, including: I. Problem statement; II. Pre-analytical steps; III. Analytical steps. Data confirmed the interference of heparin on PRNT reactivity in a dose-responsive fashion. Distinct sets of conditions for ECT pre-treatment were tested to optimize the heparin removal. The optimized protocol was pre-validated to determine the effectiveness of heparin plasma:ECT treatment to restore the PRNT titers as compared to serum samples. The validation and comparative performance was carried out by using a large range of serum vs heparin plasma:ECT 1:2 paired samples obtained from unvaccinated and 17DD-YFV primary vaccinated subjects. Altogether, the findings support the use of heparin plasma:ECT samples for accurate measurement of anti-YFV neutralizing antibodies. Copyright © 2017 Elsevier B.V. All rights reserved.
Benthic Foraminifera Clumped Isotope Calibration
NASA Astrophysics Data System (ADS)
Piasecki, A.; Marchitto, T. M., Jr.; Bernasconi, S. M.; Grauel, A. L.; Tisserand, A. A.; Meckler, N.
2017-12-01
Due to the widespread spatial and temporal distribution of benthic foraminifera within ocean sediments, they are commonly used for reconstructing past ocean temperatures and environmental conditions. Many foraminifera-based proxies, however, require calibration schemes that are species specific, which becomes complicated in deep time due to extinct species. Furthermore, calibrations often depend on seawater chemistry being stable and/or constrained, which is not always the case over significant climate state changes like the Eocene-Oligocene Transition. Here we study the effect of varying benthic foraminifera species using the clumped isotope proxy for temperature. The benefit of this proxy is that it is independent of seawater chemistry, whereas the downside is that it requires relatively large sample amounts. Due to recent advancements in sample processing that reduce the sample weight by a factor of 10, clumped isotopes can now be applied to a range of paleoceanographic questions. First, however, we need to prove that, unlike for other proxies, there are no interspecies differences with clumped isotopes, as is predicted by first-principles modeling. We used a range of surface sediment samples covering a temperature range of 1-20°C from the Pacific, Mediterranean, Bahamas, and the Atlantic, and measured the clumped isotope composition of 11 different species of benthic foraminifera. We find that there are indeed no discernible species-specific differences within the sample set. In addition, the samples have the same temperature response to the proxy as inorganic carbonate samples over the same temperature range. As a result, we can now apply this proxy to a wide range of samples and foraminifera species from different ocean basins with different ocean chemistry and be confident that observed signals reflect variations in temperature.
NASA Technical Reports Server (NTRS)
Goyet, Catherine; Davis, Daniel; Peltzer, Edward T.; Brewer, Peter G.
1995-01-01
Large-scale ocean observing programs such as the Joint Global Ocean Flux Study (JGOFS) and the World Ocean Circulation Experiment (WOCE) today must face the problem of designing an adequate sampling strategy. For ocean chemical variables, the goals and observing technologies are quite different from those for ocean physical variables (temperature, salinity, pressure). We have recently acquired data on the ocean CO2 properties on WOCE cruises P16c and P17c that are sufficiently dense to test for sampling redundancy. We use linear and quadratic interpolation methods on the sampled field to investigate the minimum number of samples required to define the deep ocean total inorganic carbon (TCO2) field within the limits of experimental accuracy (+/- 4 micromol/kg). Within the limits of current measurements, these lines were oversampled in the deep ocean. Should the precision of the measurement be improved, a denser sampling pattern may be desirable in the future. This approach rationalizes the efficient use of resources for field work and for estimating gridded TCO2 fields needed to constrain geochemical models.
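The redundancy test amounts to thinning a densely sampled profile, interpolating back, and checking the worst-case error against the +/- 4 micromol/kg accuracy; the sketch below reproduces that logic on an invented smooth TCO2 profile rather than the WOCE data.

```python
import numpy as np

depth = np.arange(0, 5001, 50.0)                            # dense "observed" grid (m)
tco2 = 2050 + 250 * (1 - np.exp(-depth / 800.0))            # synthetic deep TCO2 (umol/kg)

def max_interp_error(stride):
    """Keep every `stride`-th sample, linearly interpolate, return the worst error."""
    sub = slice(None, None, stride)
    recon = np.interp(depth, depth[sub], tco2[sub])
    return np.abs(recon - tco2).max()

for stride in (2, 4, 8, 16):
    err = max_interp_error(stride)
    print(f"samples every {stride * 50:4.0f} m: max error {err:5.2f} umol/kg, "
          f"within +/-4 umol/kg: {err <= 4.0}")
```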
Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance
Marchal, Sophie; Bregeras, Olivier; Puaux, Didier; Gervais, Rémi; Ferry, Barbara
2016-01-01
Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs’ greater olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method largely depend on rigor in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scents presented in the sample is similar to that presented in the lineups, and specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Also, our data should convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately. PMID:26863620
Automated Classification and Analysis of Non-metallic Inclusion Data Sets
NASA Astrophysics Data System (ADS)
Abdulsalam, Mohammad; Zhang, Tongsheng; Tan, Jia; Webler, Bryan A.
2018-05-01
The aim of this study is to utilize principal component analysis (PCA), clustering methods, and correlation analysis to condense and examine large, multivariate data sets produced from automated analysis of non-metallic inclusions. Non-metallic inclusions play a major role in defining the properties of steel, and their examination has been greatly aided by automated analysis in scanning electron microscopes equipped with energy dispersive X-ray spectroscopy. The methods were applied to analyze inclusions on two sets of samples: two laboratory-scale samples and four industrial samples from near-finished 4140 alloy steel components with varying machinability. The laboratory samples had well-defined inclusion chemistries, composed of MgO-Al2O3-CaO, spinel (MgO-Al2O3), and calcium aluminate inclusions. The industrial samples contained MnS inclusions as well as (Ca,Mn)S + calcium aluminate oxide inclusions. PCA could be used to reduce inclusion chemistry variables to a 2D plot, which revealed inclusion chemistry groupings in the samples. Clustering methods were used to automatically classify inclusion chemistry measurements into groups, i.e., no user-defined rules were required.
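A minimal version of this PCA-plus-clustering workflow can be assembled with scikit-learn; the inclusion compositions below are synthetic stand-ins for the automated SEM/EDS measurements, so the cluster counts are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Synthetic (MgO, Al2O3, CaO, MnS) fractions for three inclusion populations.
centers = np.array([[0.30, 0.55, 0.15, 0.00],    # spinel-like
                    [0.05, 0.45, 0.50, 0.00],    # calcium aluminate
                    [0.00, 0.05, 0.05, 0.90]])   # MnS-rich
X = np.vstack([c + rng.normal(0, 0.03, (200, 4)) for c in centers]).clip(0)
X /= X.sum(axis=1, keepdims=True)                # renormalize rows to compositions

scores = PCA(n_components=2).fit_transform(X)    # 2-D chemistry map
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
for k in range(3):
    print(f"cluster {k}: {(labels == k).sum()} inclusions")
```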
Calcium kinetics with microgram stable isotope doses and saliva sampling
NASA Technical Reports Server (NTRS)
Smith, S. M.; Wastney, M. E.; Nyquist, L. E.; Shih, C. Y.; Wiesmann, H.; Nillen, J. L.; Lane, H. W.
1996-01-01
Studies of calcium kinetics require administration of tracer doses of calcium and subsequent repeated sampling of biological fluids. This study was designed to develop techniques that would allow estimation of calcium kinetics by using small (micrograms) doses of isotopes instead of the more common large (mg) doses to minimize tracer perturbation of the system and reduce cost, and to explore the use of saliva sampling as an alternative to blood sampling. Subjects received an oral dose (133 micrograms) of 43Ca and an i.v. dose (7.7 micrograms) of 46Ca. Isotopic enrichment in blood, urine, saliva and feces was well above thermal ionization mass spectrometry measurement precision up to 170 h after dosing. Fractional calcium absorptions determined from isotopic ratios in blood, urine and saliva were similar. Compartmental modeling revealed that kinetic parameters determined from serum or saliva data were similar, decreasing the necessity for blood samples. It is concluded from these results that calcium kinetics can be assessed with micrograms doses of stable isotopes, thereby reducing tracer costs and with saliva samples, thereby reducing the amount of blood needed.
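With simultaneous oral and intravenous tracers, fractional absorption reduces to a dose-normalized ratio of the two enrichments measured in the same fluid; the worked example below uses the study's microgram doses but purely hypothetical enrichment values.

```python
def fractional_absorption(oral_enrich, iv_enrich, oral_dose_ug, iv_dose_ug):
    """Dual-tracer estimate: dose-normalized oral/IV enrichment ratio."""
    return (oral_enrich / oral_dose_ug) / (iv_enrich / iv_dose_ug)

# Hypothetical saliva enrichments 24 h after dosing (arbitrary units), with the
# 133 ug oral 43Ca and 7.7 ug i.v. 46Ca doses described in the study.
fa = fractional_absorption(oral_enrich=0.80, iv_enrich=0.21,
                           oral_dose_ug=133.0, iv_dose_ug=7.7)
print(f"fractional calcium absorption ~ {fa:.2f}")
```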
High-Throughput Analysis and Automation for Glycomics Studies.
Shubhakar, Archana; Reiding, Karli R; Gardner, Richard A; Spencer, Daniel I R; Fernandes, Daryl L; Wuhrer, Manfred
This review covers advances in analytical technologies for high-throughput (HTP) glycomics. Our focus is on structural studies of glycoprotein glycosylation to support biopharmaceutical realization and the discovery of glycan biomarkers for human disease. For biopharmaceuticals, there is increasing use of glycomics in Quality by Design studies to help optimize glycan profiles of drugs with a view to improving their clinical performance. Glycomics is also used in comparability studies to ensure consistency of glycosylation both throughout product development and between biosimilars and innovator drugs. In clinical studies there is also an expanding interest in the use of glycomics, for example in Genome Wide Association Studies, to follow changes in glycosylation patterns of biological tissues and fluids with the progress of certain diseases. These include cancers, neurodegenerative disorders and inflammatory conditions. Despite rising activity in this field, there are significant challenges in performing large scale glycomics studies. The requirement is accurate identification and quantitation of individual glycan structures. However, glycoconjugate samples are often very complex and heterogeneous and contain many diverse branched glycan structures. In this article we cover HTP sample preparation and derivatization methods, sample purification, robotization, optimized glycan profiling by UHPLC, MS and multiplexed CE, as well as hyphenated techniques and automated data analysis tools. Throughout, we summarize the advantages and challenges with each of these technologies. The issues considered include reliability of the methods for glycan identification and quantitation, sample throughput, labor intensity, and affordability for large sample numbers.
Capozzi, Vittorio; Yener, Sine; Khomenko, Iuliia; Farneti, Brian; Cappellin, Luca; Gasperi, Flavia; Scampicchio, Matteo; Biasioli, Franco
2017-05-11
Proton Transfer Reaction (PTR), combined with a Time-of-Flight (ToF) Mass Spectrometer (MS), is an analytical approach based on chemical ionization that belongs to the Direct-Injection Mass Spectrometric (DIMS) technologies. These techniques allow the rapid determination of volatile organic compounds (VOCs), assuring high sensitivity and accuracy. In general, PTR-MS requires neither sample preparation nor sample destruction, allowing real-time and non-invasive analysis of samples. PTR-MS is exploited in many fields, from environmental and atmospheric chemistry to medical and biological sciences. More recently, we developed a methodology based on coupling PTR-ToF-MS with an automated sampler and tailored data analysis tools, to increase the degree of automation and, consequently, to enhance the potential of the technique. This approach allowed us to monitor bioprocesses (e.g. enzymatic oxidation, alcoholic fermentation), to screen large sample sets (e.g. different origins, entire germoplasms) and to analyze several experimental modes (e.g. different concentrations of a given ingredient, different intensities of a specific technological parameter) in terms of VOC content. Here, we report experimental protocols exemplifying different possible applications of our methodology: the detection of VOCs released during lactic acid fermentation of yogurt (on-line bioprocess monitoring), the monitoring of VOCs associated with different apple cultivars (large-scale screening), and the in vivo study of retronasal VOC release during coffee drinking (nosespace analysis).
Ahrens, Brian D; Kucherova, Yulia; Butch, Anthony W
2016-01-01
Sports drug testing laboratories are required to detect several classes of compounds that are prohibited at all times, which include anabolic agents, peptide hormones, growth factors, beta-2 agonists, hormones and metabolic modulators, and diuretics/masking agents. Other classes of compounds such as stimulants, narcotics, cannabinoids, and glucocorticoids are also prohibited, but only when an athlete is in competition. A single class of compounds can contain a large number of prohibited substances, and all of the compounds should be detected by the testing procedure. Since there are almost 70 stimulants on the prohibited list, it can be a challenge to develop a single screening method that will optimally detect all the compounds. We describe a combined liquid chromatography-tandem mass spectrometry (LC-MS/MS) and gas chromatography-mass spectrometry (GC-MS) testing method for detection of all the stimulants and narcotics on the World Anti-Doping Agency prohibited list. Urine samples for LC-MS/MS testing do not require pretreatment and are analyzed by a direct dilute-and-shoot method. Urine samples for the GC-MS method require a liquid-liquid extraction followed by derivatization with trifluoroacetic anhydride.
Metabolomic profiling in perinatal asphyxia: a promising new field.
Denihan, Niamh M; Boylan, Geraldine B; Murray, Deirdre M
2015-01-01
Metabolomics, the latest "omic" technology, is defined as the comprehensive study of all low molecular weight biochemicals, "metabolites" present in an organism. As a systems biology approach, metabolomics has huge potential to progress our understanding of perinatal asphyxia and neonatal hypoxic-ischaemic encephalopathy, by uniquely detecting rapid biochemical pathway alterations in response to the hypoxic environment. The study of metabolomic biomarkers in the immediate neonatal period is not a trivial task and requires a number of specific considerations, unique to this disease and population. Recruiting a clearly defined cohort requires standardised multicentre recruitment with broad inclusion criteria and the participation of a range of multidisciplinary staff. Minimally invasive biospecimen collection is a priority for biomarker discovery. Umbilical cord blood presents an ideal medium as large volumes can be easily extracted and stored and the sample is not confounded by postnatal disease progression. Pristine biobanking and phenotyping are essential to ensure the validity of metabolomic findings. This paper provides an overview of the current state of the art in the field of metabolomics in perinatal asphyxia and neonatal hypoxic-ischaemic encephalopathy. We detail the considerations required to ensure high quality sampling and analysis, to support scientific progression in this important field.
Booij, Petra; Sjollema, Sascha B; Leonards, Pim E G; de Voogt, Pim; Stroomberg, Gerard J; Vethaak, A Dick; Lamoree, Marja H
2013-09-01
The extent to which chemical stressors affect primary producers in estuarine and coastal waters is largely unknown. However, given the large number of legacy pollutants and chemicals of emerging concern present in the environment, this is an important and relevant issue that requires further study. The purpose of our study was to extract and identify compounds which are inhibitors of photosystem II activity in microalgae from estuarine and coastal waters. Field sampling was conducted in the Western Scheldt estuary (Hansweert, The Netherlands). We compared four different commonly used extraction methods: passive sampling with silicone rubber sheets, polar organic integrative samplers (POCIS) and spot water sampling using two different solid phase extraction (SPE) cartridges. Toxic effects of extracts prepared from spot water samples and passive samplers were determined in the Pulse Amplitude Modulation (PAM) fluorometry bioassay. With target chemical analysis using LC-MS and GC-MS, a set of PAHs, PCBs and pesticides was determined in field samples. These compound classes are listed as priority substances for the marine environment by the OSPAR convention. In addition, recovery experiments with both SPE cartridges were performed to evaluate the extraction suitability of these methods. Passive sampling using silicone rubber sheets and POCIS can be applied to determine compounds with different structures and polarities for further identification and determination of toxic pressure on primary producers. The added value of SPE lies in its suitability for quantitative analysis; calibration of passive samplers still needs further investigation for quantification of field concentrations of contaminants. Copyright © 2013 Elsevier Ltd. All rights reserved.
Rapid Analysis of Deoxynivalenol in Durum Wheat by FT-NIR Spectroscopy
De Girolamo, Annalisa; Cervellieri, Salvatore; Visconti, Angelo; Pascale, Michelangelo
2014-01-01
Fourier-transform-near infrared (FT-NIR) spectroscopy has been used to develop quantitative and classification models for the prediction of deoxynivalenol (DON) levels in durum wheat samples. Partial least-squares (PLS) regression analysis was used to determine DON in wheat samples in the range of <50–16,000 µg/kg DON. The model displayed a large root mean square error of prediction value (1,977 µg/kg) as compared to the EU maximum limit for DON in unprocessed durum wheat (i.e., 1,750 µg/kg), thus making the PLS approach unsuitable for quantitative prediction of DON in durum wheat. Linear discriminant analysis (LDA) was successfully used to differentiate wheat samples based on their DON content. A first approach used LDA to group wheat samples into three classes: A (DON ≤ 1,000 µg/kg), B (1,000 < DON ≤ 2,500 µg/kg), and C (DON > 2,500 µg/kg) (LDA I). A second approach was used to discriminate highly contaminated wheat samples based on three different cut-off limits, namely 1,000 (LDA II), 1,200 (LDA III) and 1,400 µg/kg DON (LDA IV). The overall classification and false compliant rates for the three models were 75%–90% and 3%–7%, respectively, with model LDA IV using a cut-off of 1,400 µg/kg fulfilling the requirement of the European official guidelines for screening methods. These findings confirmed the suitability of FT-NIR to screen a large number of wheat samples for DON contamination and to verify the compliance with EU regulation. PMID:25384107
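Cut-off-based screening of this kind is easy to prototype with a discriminant classifier; the sketch below trains scikit-learn's LDA on simulated spectral features and reports a false-compliant rate for the 1,400 µg/kg cut-off (all data are synthetic, not the paper's FT-NIR spectra).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 300
don = rng.lognormal(mean=7.0, sigma=1.0, size=n)            # synthetic DON levels, ug/kg
# Two synthetic spectral features loosely correlated with DON level.
X = np.column_stack([np.log(don) + rng.normal(0, 0.3, n),
                     0.5 * np.log(don) + rng.normal(0, 0.3, n)])
y = (don > 1400).astype(int)                                # 1 = above the 1,400 ug/kg cut-off

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pred = LinearDiscriminantAnalysis().fit(X_tr, y_tr).predict(X_te)
accuracy = (pred == y_te).mean()
false_compliant = (pred[y_te == 1] == 0).mean()             # contaminated wheat called compliant
print(f"accuracy = {accuracy:.2f}, false compliant rate = {false_compliant:.2f}")
```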
Kraemer, D; Chen, G
2014-02-01
Accurate measurements of thermal conductivity are of great importance for materials research and development. Steady-state methods determine thermal conductivity directly from the proportionality between heat flow and an applied temperature difference (Fourier Law). Although theoretically simple, in practice, achieving high accuracies with steady-state methods is challenging and requires rather complex experimental setups due to temperature sensor uncertainties and parasitic heat loss. We developed a simple differential steady-state method in which the sample is mounted between an electric heater and a temperature-controlled heat sink. Our method calibrates for parasitic heat losses from the electric heater during the measurement by maintaining a constant heater temperature close to the environmental temperature while varying the heat sink temperature. This enables a large signal-to-noise ratio which permits accurate measurements of samples with small thermal conductance values without an additional heater calibration measurement or sophisticated heater guards to eliminate parasitic heater losses. Additionally, the differential nature of the method largely eliminates the uncertainties of the temperature sensors, permitting measurements with small temperature differences, which is advantageous for samples with high thermal conductance values and/or with strongly temperature-dependent thermal conductivities. In order to accelerate measurements of more than one sample, the proposed method allows for measuring several samples consecutively at each temperature measurement point without adding significant error. We demonstrate the method by performing thermal conductivity measurements on commercial bulk thermoelectric Bi2Te3 samples in the temperature range of 30-150 °C with an error below 3%.
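In the differential scheme described, the heater is held at a fixed temperature near ambient while the sink temperature is swept, so the parasitic heater loss stays nearly constant and separates from the conduction term in a linear fit of heater power against the temperature difference; the sketch below reduces simulated data that way, with an illustrative geometry.

```python
import numpy as np

# Illustrative sample geometry: 10 mm x 10 mm cross-section, 2 mm thick.
area, thickness = 10e-3 * 10e-3, 2e-3                        # m^2, m

delta_T = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])           # T_heater - T_sink (K)
heater_power = 8.0e-3 + 0.075 * delta_T                      # W: constant loss + conduction

conductance, parasitic_loss = np.polyfit(delta_T, heater_power, 1)
k = conductance * thickness / area                           # W m^-1 K^-1
print(f"conductance = {conductance * 1e3:.1f} mW/K, "
      f"parasitic loss = {parasitic_loss * 1e3:.1f} mW, k = {k:.2f} W/(m K)")
```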
Coorevits, L; Heytens, S; Boelens, J; Claeys, G
2017-04-01
The workup and interpretation of urine cultures is not always clear-cut, especially for midstream samples contaminated with commensals. Standard urine culture (SUC) protocols are designed in favor of growth of uropathogens at the expense of commensals. In selected clinical situations, however, it is essential to trace fastidious or new uropathogens by expanding the urine culture conditions (EUC). The aim of our study was to map the microflora in midstream urine specimens from healthy controls by means of EUC, in view of the interpretation of bacterial culture results in symptomatic patients. Midstream urine specimens from 101 healthy controls (86 females and 15 males) were examined using both SUC and EUC. Whilst 73% of samples examined by SUC showed no growth at 10^3 colony-forming units (CFU)/mL, 91% of samples examined by EUC grew bacterial species in large numbers (≥10^4 CFU/mL). Asymptomatic bacteriuria, as defined by the European guidelines for urinalysis, was detected in six samples with both protocols. EUC revealed 98 different species, mostly Lactobacillus, Staphylococcus, Streptococcus, and Corynebacterium. None of the samples grew Staphylococcus saprophyticus, Corynebacterium urealyticum, or Aerococcus urinae. Samples from females contained higher bacterial loads and showed higher bacterial diversity compared to males. Midstream urine of healthy controls contains large communities of living bacteria that comprise a resident microflora, only revealed by EUC. Hence, the use of EUC instead of SUC in a routine setting would result in more sensitive but less specific results, requiring critical interpretation. In our view, EUC should be reserved for limited indications.
Cautionary Notes on Cosmogenic W-182 and Other Nuclei in Lunar Samples
NASA Technical Reports Server (NTRS)
Yin, Qingzhu; Jacobsen, Stein B.; Wasserburg, G. J.
2003-01-01
Leya et al. (2000) showed that neutron capture on Ta-181 results in a production rate of Ta-182 (which decays with a half-life of 114 days to W-182) sufficiently high to cause significant shifts in W-182 abundances, considering the neutron fluences due to the cosmic ray cascade that were known to occur near the lunar surface. Leya et al. concluded that this cosmogenic production of W-182 may explain the large positive ε(¹⁸²W) values that Lee et al. (1997) had reported in some lunar samples, rather than their being produced from decay of now-extinct Hf-182 (mean life τ̄ = 13 × 10⁶ yr). If the large range in ε(¹⁸²W) of lunar samples (0 to +11 in whole-rock samples) was due to decay of now-extinct Hf-182, it would require a very early time of formation and differentiation of the lunar crust-mantle system (with high Hf/W ratios) during the earliest stages of Earth's accretion. This result was both surprising and difficult to understand. The ability to explain these results by a more plausible mechanism is therefore very attractive. In a recent report, Lee et al. (2002) showed that there were excesses of W-182 and that ε(¹⁸²W) was correlated with the Ta/W ratios in the mineral phases of individual lunar rock samples. This is in accord with the W-182 variations in lunar samples being produced by cosmic-ray-induced neutron capture on Ta-181.
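For orientation, the ε(¹⁸²W) notation used above follows the conventional parts-per-10⁴ definition, and the cosmogenic pathway under discussion is the standard neutron-capture chain; both are quoted here generically rather than from the paper itself:

$$
\varepsilon(^{182}\mathrm{W}) = \left[\frac{(^{182}\mathrm{W}/^{184}\mathrm{W})_{\text{sample}}}{(^{182}\mathrm{W}/^{184}\mathrm{W})_{\text{standard}}} - 1\right] \times 10^{4},
\qquad
{}^{181}\mathrm{Ta}(n,\gamma)\,{}^{182}\mathrm{Ta} \xrightarrow{\ \beta^{-},\ t_{1/2} = 114\ \text{d}\ } {}^{182}\mathrm{W}.
$$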
Scheibe, Andrea; Krantz, Lars; Gleixner, Gerd
2012-01-30
We assessed the accuracy and utility of a modified high-performance liquid chromatography/isotope ratio mass spectrometry (HPLC/IRMS) system for measuring the amount and stable carbon isotope signature of dissolved organic matter (DOM) <1 µm. Using a range of standard compounds as well as soil solutions sampled in the field, we compared the results of the HPLC/IRMS analysis with those from other methods for determining carbon and ¹³C content. The conversion efficiency of the in-line wet oxidation of the HPLC/IRMS averaged 99.3% for a range of standard compounds. The agreement between HPLC/IRMS and other methods in the amount and isotopic signature of both standard compounds and soil water samples was excellent. For DOM concentrations below 10 mg C L⁻¹ (250 ng C total), pre-concentration or large-volume injections are recommended in order to prevent background interferences. We were able to detect large differences in the ¹³C signatures of soil solution DOM sampled at 10 cm depth in plots with either C3 or C4 vegetation and in two different parent materials. These measurements also revealed changes in the ¹³C signature that indicate rapid loss of plant-derived C with depth. Overall, the modified HPLC/IRMS system has the advantages of rapid sample preparation, small required sample volume and high sample throughput, while showing comparable performance with other methods for measuring the amount and isotopic signature of DOM. Copyright © 2011 John Wiley & Sons, Ltd.
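The "¹³C signature" referred to above is conventionally reported as δ¹³C relative to the VPDB standard; the definition below is the standard one and is included only for orientation:

$$
\delta^{13}\mathrm{C} = \left[\frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{VPDB}}} - 1\right] \times 1000\ \text{‰}.
$$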
Lu, David; Graf, Ryon P.; Harvey, Melissa; Madan, Ravi A.; Heery, Christopher; Marte, Jennifer; Beasley, Sharon; Tsang, Kwong Y.; Krupa, Rachel; Louw, Jessica; Wahl, Justin; Bales, Natalee; Landers, Mark; Marrinucci, Dena; Schlom, Jeffrey; Gulley, James L.; Dittamore, Ryan
2015-01-01
Retrospective analysis of patient tumour samples is a cornerstone of clinical research. CTC biomarker characterization offers a non-invasive method to analyse patient samples. However, current CTC technologies require prospective blood collection, thereby reducing the ability to utilize archived clinical cohorts with long-term outcome data. We sought to investigate CTC recovery from frozen, archived patient PBMC pellets. Matched samples from both mCRPC patients and mock samples, which were prepared by spiking healthy donor blood with cultured prostate cancer cell line cells, were processed “fresh” via the Epic CTC Platform or from “frozen” PBMC pellets. Samples were analysed for CTC enumeration and biomarker characterization via immunofluorescent (IF) biomarkers, fluorescence in situ hybridization (FISH) and CTC morphology. In the frozen patient PBMC samples, the median CTC recovery was 18%, compared to the freshly processed blood. However, abundance and localization of cytokeratin (CK) and androgen receptor (AR) protein, as measured by IF, were largely concordant between the fresh and frozen CTCs. Furthermore, a FISH analysis of PTEN loss showed high concordance between fresh and frozen samples. The observed data indicate that CTC biomarker characterization from frozen archival samples is feasible and representative of prospectively collected samples. PMID:28936240
Influence of sampling rate on the calculated fidelity of an aircraft simulation
NASA Technical Reports Server (NTRS)
Howard, J. C.
1983-01-01
One of the factors that influences the fidelity of an aircraft digital simulation is the sampling rate. As the sampling rate is increased, the calculated response of the discrete representation tends to coincide with the response of the corresponding continuous system. Because of computer limitations, however, the sampling rate cannot be increased indefinitely. Moreover, real-time simulation requirements demand that a finite sampling rate be adopted. In view of these restrictions, a study was undertaken to determine the influence of sampling rate on the response characteristics of a simulated aircraft describing short-period oscillations. Changes in the calculated response characteristics of the simulated aircraft degrade the fidelity of the simulation. In the present context, fidelity degradation is defined as the percentage change in those characteristics that have the greatest influence on pilot opinion: short-period frequency ω, short-period damping ratio ζ, and the product ωζ. To determine the influence of the sampling period on these characteristics, the equations describing the response of a DC-8 aircraft to elevator control inputs were used. The results indicate that if the sampling period is too large, the fidelity of the simulation can be degraded.
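A minimal numpy sketch of the effect being quantified, assuming a generic second-order short-period model integrated with forward Euler (the parameters omega_n and zeta are invented placeholders, not the DC-8 values from the report):

```python
# Discretize a generic short-period model at several sampling periods and
# compare the recovered frequency and damping ratio with the continuous values.
import numpy as np

omega_n, zeta = 2.0, 0.5                        # assumed continuous parameters (rad/s, -)
A = np.array([[0.0, 1.0],
              [-omega_n**2, -2.0 * zeta * omega_n]])

for T in [0.01, 0.05, 0.10, 0.20]:              # sampling periods (s)
    Ad = np.eye(2) + T * A                      # forward-Euler state-transition matrix
    eig = np.linalg.eigvals(Ad)
    z = eig[np.argmax(eig.imag)]                # pole with positive imaginary part
    s = np.log(z) / T                           # equivalent continuous-time pole
    w, zt = abs(s), -s.real / abs(s)            # recovered omega and zeta
    print(f"T={T:4.2f} s  omega={w:.3f}  zeta={zt:.3f}  "
          f"d_omega={100*(w - omega_n)/omega_n:+5.1f}%  d_zeta={100*(zt - zeta)/zeta:+5.1f}%")
```

As the sampling period T grows, the recovered ω and ζ drift away from the continuous values, which is the kind of fidelity degradation the study quantifies.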
Intensity of Territorial Marking Predicts Wolf Reproduction: Implications for Wolf Monitoring
García, Emilio J.
2014-01-01
Background The implementation of intensive and complex approaches to monitor large carnivores is resource demanding and restricted to endangered species, small populations, or small distribution ranges. Wolf monitoring over large spatial scales is difficult, but the management of such a contentious species requires regular estimations of abundance to guide decision-makers. The integration of wolf marking behaviour with simple sign counts may offer a cost-effective alternative for monitoring the status of wolf populations over large spatial scales. Methodology/Principal Findings We used a multi-sampling approach, based on the collection of visual and scent wolf marks (faeces and ground scratching) and the assessment of wolf reproduction using howling and observation points, to test whether the intensity of marking behaviour around the pup-rearing period (summer-autumn) could reflect wolf reproduction. Between 1994 and 2007 we collected 1,964 wolf marks over a total of 1,877 km surveyed and searched for the presence of pups (1,497 howling and 307 observation points) in 42 sampling sites with a regular presence of wolves (out of 120 sampling sites surveyed per year). The number of wolf marks was ca. 3 times higher in sites with a confirmed presence of pups (20.3 vs. 7.2 marks). We found a significant relationship between the number of wolf marks (mean and maximum relative abundance index) and the probability of wolf reproduction. Conclusions/Significance This research establishes a real-time relationship between the intensity of wolf marking behaviour and wolf reproduction. We suggest a conservative cut-off of 0.60 for the probability of wolf reproduction to monitor wolves on a regional scale, combined with the use of the mean relative abundance index of wolf marks in a given area. We show how the integration of wolf behaviour with simple sampling procedures permits rapid, real-time, and cost-effective assessments of the breeding status of wolf packs, with substantial implications for monitoring wolves at large spatial scales. PMID:24663068
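A minimal sketch, with invented placeholder data rather than the study's surveys, of how a relative abundance index of marks might be mapped to a probability of reproduction and thresholded at the suggested conservative 0.60 cut-off:

```python
# Toy logistic regression: marks index -> probability of confirmed reproduction.
import numpy as np
from sklearn.linear_model import LogisticRegression

marks = np.array([2.1, 4.0, 5.5, 7.2, 9.8, 12.5, 15.0, 18.3, 20.3, 25.1]).reshape(-1, 1)
pups  = np.array([0,   0,   0,   0,   1,   0,    1,    1,    1,    1])   # reproduction confirmed?

model = LogisticRegression().fit(marks, pups)
p_repro = model.predict_proba(marks)[:, 1]
likely_breeding = p_repro >= 0.60               # conservative cut-off from the abstract
for m, p, flag in zip(marks.ravel(), p_repro, likely_breeding):
    print(f"marks index {m:5.1f}  P(reproduction)={p:.2f}  breeding likely: {flag}")
```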
Online Low-Rank Representation Learning for Joint Multi-subspace Recovery and Clustering.
Li, Bo; Liu, Risheng; Cao, Junjie; Zhang, Jie; Lai, Yu-Kun; Liu, Xiuping
2017-10-06
Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization for the entire data set whenever new samples are added, making it prohibitively expensive. Existing attempts at online LRR either take a stochastic approach or build the representation purely from a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, and the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online low-rank representation subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the low-rank representation matrix can also be incrementally solved by an efficient online singular value decomposition (SVD) algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further perform theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably to or better than batch methods, including batch LRR, and significantly outperforms state-of-the-art online methods.
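As a rough illustration of the dynamic-updating stage, the sketch below implements a generic Brand-style column-append SVD update; it is not the authors' algorithm, and all names and data are ours:

```python
import numpy as np

def incremental_svd(U, S, Vt, C, rank):
    """Update a thin SVD X ~ U @ diag(S) @ Vt when new columns C arrive."""
    L = U.T @ C                                  # part of C inside the current subspace
    H = C - U @ L                                # residual outside the subspace
    Q, R = np.linalg.qr(H)
    K = np.block([[np.diag(S), L],
                  [np.zeros((R.shape[0], S.size)), R]])
    Uk, Sk, Vkt = np.linalg.svd(K, full_matrices=False)
    U_new = np.hstack([U, Q]) @ Uk
    V_old = np.block([[Vt.T, np.zeros((Vt.shape[1], C.shape[1]))],
                      [np.zeros((C.shape[1], Vt.shape[0])), np.eye(C.shape[1])]])
    Vt_new = (V_old @ Vkt.T).T
    r = min(rank, Sk.size)
    return U_new[:, :r], Sk[:r], Vt_new[:r, :]

# usage: start from an SVD of an initial batch, then fold in new samples as they arrive
rng = np.random.default_rng(0)
X0 = rng.standard_normal((50, 20))
U, S, Vt = np.linalg.svd(X0, full_matrices=False)
U, S, Vt = U[:, :5], S[:5], Vt[:5, :]            # keep a rank-5 model
U, S, Vt = incremental_svd(U, S, Vt, rng.standard_normal((50, 4)), rank=5)
print(U.shape, S.shape, Vt.shape)                # (50, 5) (5,) (5, 24)
```

Each update costs far less than recomputing the SVD of the full, growing data matrix, which is the point of an online formulation.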
Development and experimental study of large size composite plasma immersion ion implantation device
NASA Astrophysics Data System (ADS)
Falun, SONG; Fei, LI; Mingdong, ZHU; Langping, WANG; Beizhen, ZHANG; Haitao, GONG; Yanqing, GAN; Xiao, JIN
2018-01-01
Plasma immersion ion implantation (PIII) overcomes the line-of-sight limitation of traditional beam-line ion implantation and is suitable for the treatment of large, complex workpieces. PIII technology is often used for surface modification of metals, plastics and ceramics. Based on the requirements of surface modification of large-size insulating materials, a composite omnidirectional PIII device based on an RF plasma source and a metal plasma source is developed in this paper. The device can perform gas ion implantation, metal ion implantation, and combined gas-metal ion implantation. It has two metal plasma sources, and each metal source contains three cathodes. The cathodes can be switched freely without breaking vacuum. The volume of the vacuum chamber is about 0.94 m³, and the base pressure is about 5 × 10⁻⁴ Pa. The RF plasma density in the homogeneous region is about 10⁹ cm⁻³, and the plasma density in the ion implantation region is about 10¹⁰ cm⁻³. The device can be used for PIII treatment of large-size sample materials, with sample diameters up to 400 mm. The experimental results show that the plasma discharge in the device is stable and can run for a long time. It is suitable for surface treatment of insulating materials.
Measurement of super large radius optics in the detection of gravitational waves
NASA Astrophysics Data System (ADS)
Yang, Cheng; Han, Sen; Wu, Quanying; Liang, Binming; Hou, Changlun
2015-10-01
The existence of gravitational waves (GWs) is one of the great predictions of Einstein's theory of relativity. It plays an important part in radiation theory, black hole theory, space exploration and so on. GW detection has become an important aspect of modern physics. As the research proceeds further, many challenges remain for the interferometer, which is the key instrument in GW detection, especially the measurement of super-large-radius optics. To solve this problem, one solution, Fizeau interferometry, for measuring super-large radii is presented. We depart from the convention that a curved surface must be measured against a standard curved reference surface. We use a flat mirror as the reference flat, which greatly lowers both the cost and the test requirements. We selected a concave mirror with a radius of 1600 mm as a sample. After precision measurement and analysis, the experimental results show that the relative error of the radius is better than 3%, which fully meets the requirements for the measurement of super-large-radius optics. When calibrating the pixel scale with a standard cylinder, the edges are not sharp because of diffraction and other effects; we therefore detect the edge and calculate the diameter of the cylinder automatically, which improves the precision considerably. In general, this method is simple, fast, non-contact, and highly precise, and it provides a new approach to the measurement of super-large-radius optics.
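For orientation, when a long-radius concave surface is tested against a flat reference, the radius can be recovered from the measured sag s over the aperture diameter D via the standard sagitta relation (quoted generically; the paper's exact data reduction may differ):

$$
R = \frac{D^{2}}{8s} + \frac{s}{2} \;\approx\; \frac{D^{2}}{8s} \quad (s \ll D).
$$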
Body mass estimates of hominin fossils and the evolution of human body size.
Grabowski, Mark; Hatala, Kevin G; Jungers, William L; Richmond, Brian G
2015-08-01
Body size directly influences an animal's place in the natural world, including its energy requirements, home range size, relative brain size, locomotion, diet, life history, and behavior. Thus, an understanding of the biology of extinct organisms, including species in our own lineage, requires accurate estimates of body size. Since the last major review of hominin body size based on postcranial morphology over 20 years ago, new fossils have been discovered, species attributions have been clarified, and methods improved. Here, we present the most comprehensive and thoroughly vetted set of individual fossil hominin body mass predictions to date, and estimation equations based on a large (n = 220) sample of modern humans of known body masses. We also present species averages based exclusively on fossils with reliable taxonomic attributions, estimates of species averages by sex, and a metric for levels of sexual dimorphism. Finally, we identify individual traits that appear to be the most reliable for mass estimation for each fossil species, for use when only one measurement is available for a fossil. Our results show that many early hominins were generally smaller-bodied than previously thought, an outcome likely due to larger estimates in previous studies resulting from the use of large-bodied modern human reference samples. Current evidence indicates that modern human-like large size first appeared by at least 3-3.5 Ma in some Australopithecus afarensis individuals. Our results challenge an evolutionary model arguing that body size increased from Australopithecus to early Homo. Instead, we show that there is no reliable evidence that the body size of non-erectus early Homo differed from that of australopiths, and confirm that Homo erectus evolved larger average body size than earlier hominins. Copyright © 2015 Elsevier Ltd. All rights reserved.
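A minimal sketch, using invented numbers rather than the article's n = 220 reference sample, of the kind of log-log regression from which such mass-estimation equations are typically built (femoral head breadth is chosen here only as an example predictor):

```python
# Toy log-log OLS regression of body mass on a postcranial dimension.
import numpy as np

femoral_head_mm = np.array([38.0, 40.5, 42.0, 44.5, 46.0, 48.5, 50.0, 52.5])
body_mass_kg    = np.array([48.0, 53.0, 57.0, 63.0, 66.0, 73.0, 77.0, 84.0])

slope, intercept = np.polyfit(np.log(femoral_head_mm), np.log(body_mass_kg), 1)

def predict_mass(fh_mm):
    return float(np.exp(intercept + slope * np.log(fh_mm)))

print(f"log-log slope={slope:.2f}, intercept={intercept:.2f}")
print(f"predicted mass for a 36 mm femoral head: {predict_mass(36.0):.1f} kg")
```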
Winwood, Peter C; Lushington, Kurt
2006-12-01
This paper reports a study to determine if different types of work strain experienced by nurses, particularly those of an essentially psychological nature, such as emotional demand, mental effort and problems with peers and/or supervisors, have a differential impact on sleep quality and overall recovery from work strain, compared with physical work strains, and lead to higher maladaptive chronic fatigue outcomes. Various studies have shown that the dominant work-demand strain associated with nursing work can vary between different areas of nursing. For example, whereas emotional strain is reported to be the principal strain associated with work in areas such as oncology, haematology and renal units, medical and surgical unit nurses report work pace and staffing issues as the dominant work strain. Purely physical strain seems to be less commonly reported as a concern. A large sample (n = 760) of Australian nurses working in a large metropolitan hospital completed questionnaires on their work demands, sleep quality, fatigue, and recovery between shifts in January 2004. A high work pace exacerbates the psychological rather than the physical strain demands of nursing. Psychological strain affects sleep quality and impairs recovery from overall work strain between shifts. This combination is highly predictive of serious maladaptive stress/fatigue outcomes among nurses. Coping with psychological stressors adequately is an important requirement for nurses in order to avoid adverse health effects and maintain a long-term career in nursing. Appropriate training of undergraduate nursing students in managing the stresses they are likely to encounter would seem to be an essential requirement for the 21st century. Such training might constitute an important long-term component in overcoming the chronic nurse shortages evident in many countries.
Interpreting the Clustering of Distant Red Galaxies
NASA Astrophysics Data System (ADS)
Tinker, Jeremy L.; Wechsler, Risa H.; Zheng, Zheng
2010-01-01
We analyze the angular clustering of z ~ 2.3 distant red galaxies (DRGs) measured by Quadri et al. We find that, with robust estimates of the measurement errors and realistic halo occupation distribution modeling, the measured clustering can be well fit within standard halo occupation models, in contrast to previous results. However, in order to fit the strong break in w(θ) at θ = 10'', nearly all satellite galaxies in the DRG luminosity range are required to be DRGs. Within this luminosity-threshold sample, the fraction of galaxies that are DRGs is ~44%, implying that the formation of DRGs is more efficient for satellite galaxies than for central galaxies. Despite the evolved stellar populations contained within DRGs at z = 2.3, 90% of satellite galaxies in the DRG luminosity range have been accreted within the last 500 Myr. Thus, satellite DRGs must have known they would become satellites well before the time of their accretion. This implies that the formation of DRGs correlates with large-scale environment at fixed halo mass, although the large-scale bias of DRGs can be well fit without such assumptions. Further data are required to resolve this issue. Using the observational estimate that ~30% of DRGs have no ongoing star formation, we infer a star formation quenching timescale for satellite galaxies of 450 Myr, although the uncertainty on this number is large. However, unless all non-star-forming satellite DRGs were quenched before accretion, the quenching timescale is significantly shorter than z ~ 0 estimates. Down to the completeness limit of the Quadri et al. sample, we find that the halo masses of central DRGs are ~50% higher than those of non-DRGs in the same luminosity range, but at the highest halo masses the central galaxies are DRGs only ~2/3 of the time.
Quantification of Protozoa and Viruses from Small Water Volumes
Bonilla, J. Alfredo; Bonilla, Tonya D.; Abdelzaher, Amir M.; Scott, Troy M.; Lukasik, Jerzy; Solo-Gabriele, Helena M.; Palmer, Carol J.
2015-01-01
Large sample volumes are traditionally required for the analysis of waterborne pathogens. The need for large volumes greatly limits the number of samples that can be processed. The goals of this study were to compare extraction and detection procedures for quantifying protozoan parasites and viruses from small volumes of marine water. The intent was to evaluate a logistically simpler method of sample collection and processing that would facilitate direct pathogen measures as part of routine monitoring programs. Samples were collected simultaneously using a bilayer device with protozoa capture by size (top filter) and viruses capture by charge (bottom filter). Protozoan detection technologies utilized for recovery of Cryptosporidium spp. and Giardia spp. were qPCR and the more traditional immunomagnetic separation—IFA-microscopy, while virus (poliovirus) detection was based upon qPCR versus plaque assay. Filters were eluted using reagents consistent with the downstream detection technologies. Results showed higher mean recoveries using traditional detection methods over qPCR for Cryptosporidium (91% vs. 45%) and poliovirus (67% vs. 55%) whereas for Giardia the qPCR-based methods were characterized by higher mean recoveries (41% vs. 28%). Overall mean recoveries are considered high for all detection technologies. Results suggest that simultaneous filtration may be suitable for isolating different classes of pathogens from small marine water volumes. More research is needed to evaluate the suitability of this method for detecting pathogens at low ambient concentration levels. PMID:26114244
Abal-Fabeiro, J L; Maside, X; Llovo, J; Bello, X; Torres, M; Treviño, M; Moldes, L; Muñoz, A; Carracedo, A; Bartolomé, C
2014-04-01
The epidemiological study of human cryptosporidiosis requires the characterization of species and subtypes involved in human disease in large sample collections. Molecular genotyping is costly and time-consuming, making the implementation of low-cost, highly efficient technologies increasingly necessary. Here, we designed a protocol based on MALDI-TOF mass spectrometry for the high-throughput genotyping of a panel of 55 single nucleotide variants (SNVs) selected as markers for the identification of common gp60 subtypes of four Cryptosporidium species that infect humans. The method was applied to a panel of 608 human and 63 bovine isolates and the results were compared with control samples typed by Sanger sequencing. The method allowed the identification of species in 610 specimens (90.9%) and of gp60 subtype in 605 (90.2%). It displayed excellent performance, with sensitivity and specificity values of 87.3% and 98.0%, respectively. Up to nine genotypes from four different Cryptosporidium species (C. hominis, C. parvum, C. meleagridis and C. felis) were detected in humans; the most common were C. hominis subtype Ib and C. parvum IIa (61.3% and 28.3%, respectively). 96.5% of the bovine samples were typed as IIa. The method performs as well as the widely used Sanger sequencing and is more cost-effective and less time-consuming.
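A minimal sketch of how the reported sensitivity and specificity are computed when the MALDI-TOF calls are compared against a Sanger-sequencing reference; the counts below are invented placeholders, not the study's data:

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(tp=480, fn=70, tn=98, fp=2)
print(f"sensitivity={100*sens:.1f}%  specificity={100*spec:.1f}%")
```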
Gassner, Christoph; Rainer, Esther; Pircher, Elfriede; Markut, Lydia; Körmöczi, Günther F.; Jungbauer, Christof; Wessin, Dietmar; Klinghofer, Roswitha; Schennach, Harald; Schwind, Peter; Schönitzer, Diether
2009-01-01
Summary Background Validations of routinely used serological typing methods require intense performance evaluations, typically including large numbers of samples, before routine application. However, such evaluations could be improved by considering information about the frequency of standard blood groups and their variants. Methods Using RHD and ABO population genetic data, a Caucasian-specific donor panel was compiled for a performance comparison of the three RhD and ABO serological typing methods MDmulticard (Medion Diagnostics), ID-System (DiaMed) and ScanGel (Bio-Rad). The final test panel included standard and variant RHD and ABO genotypes, e.g. RhD categories, partial and weak RhDs, RhD DELs, and ABO samples selected mainly to interpret weak serological reactivity for blood group A specificity. All samples were from individuals recorded in our local DNA blood group typing database. Results For ‘standard’ blood groups, performance results were clearly interpretable for all three serological methods compared. However, when focusing on specific variant phenotypes, pronounced differences in reaction strengths and specificities were observed between them. Conclusions A genetically and ethnically predefined donor test panel consisting of only 93 individual samples delivered highly significant results for serological performance comparisons. Such small panels offer representative power greater than would be expected from statistical chance and large sample numbers alone. PMID:21113264
Baxter, Amanda J.; Hughes, Maria Celia; Kvaskoff, Marina; Siskind, Victor; Shekar, Sri; Aitken, Joanne F.; Green, Adele C.; Duffy, David L.; Hayward, Nicholas K.; Martin, Nicholas G.; Whiteman, David C.
2013-01-01
Cutaneous malignant melanoma (CMM) is a major health issue in Queensland, Australia, which has the world's highest incidence. Recent molecular and epidemiologic studies suggest that CMM arises through multiple etiological pathways involving gene-environment interactions. Understanding the potential mechanisms leading to CMM requires larger studies than those previously conducted. This article describes the design and baseline characteristics of Q-MEGA, the Queensland study of Melanoma: Environmental and Genetic Associations, which followed up four population-based samples of CMM patients in Queensland, including children, adolescents, men aged over 50, and a large sample of adult cases and their families, including twins. Q-MEGA aims to investigate the roles of genetic and environmental factors, and their interaction, in the etiology of melanoma. A total of 3,471 participants took part in the follow-up study and were administered a computer-assisted telephone interview in 2002–2005. Updated data on environmental and phenotypic risk factors, along with 2,777 blood samples, were collected from interviewed participants as well as a subset of relatives. This study provides a large and well-described population-based sample of CMM cases with follow-up data. Characteristics of the cases and the repeatability of sun exposure and phenotype measures between the baseline and follow-up surveys, conducted six to 17 years apart, are also described. PMID:18361720
Ohneberg, K; Wolkewitz, M; Beyersmann, J; Palomar-Martinez, M; Olaechea-Astigarraga, P; Alvarez-Lerma, F; Schumacher, M
2015-01-01
Sampling from a large cohort in order to derive a subsample that would be sufficient for statistical analysis is a frequently used method for handling large data sets in epidemiological studies with limited resources for exposure measurement. For clinical studies, however, when interest is in the influence of a potential risk factor, cohort studies are often the first choice, with all individuals entering the analysis. Our aim is to close the gap between epidemiological and clinical studies with respect to design and power considerations. Schoenfeld's formula for the number of events required for a Cox proportional hazards model is fundamental. Our objective is to compare the power of analyzing the full cohort with the power of a nested case-control and a case-cohort design. We compare power formulas for sampling designs and cohort studies. In our data example we simultaneously apply a nested case-control design with a varying number of controls matched to each case, a case-cohort design with varying subcohort size, a random subsample and a full cohort analysis. For each design we calculate the standard error of the estimated regression coefficients and the mean number of distinct persons for whom covariate information is required. The formulas for the power of a nested case-control design and of a case-cohort design are directly connected to the power of a cohort study through the well-known Schoenfeld formula. The loss in precision of parameter estimates is relatively small compared to the saving in resources. Nested case-control and case-cohort studies, but not random subsamples, yield an attractive alternative for analyzing clinical studies in the situation of a low event rate. Power calculations can be conducted straightforwardly to quantify the loss of power compared to the savings in the number of patients when using a sampling design instead of analyzing the full cohort.
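For reference, Schoenfeld's formula mentioned above gives the number of events D needed to detect a hazard ratio HR for a binary covariate with prevalence p at two-sided level α and power 1−β; the sketch below is the generic textbook form, not the authors' code:

```python
# Required number of events for a Cox model with a binary covariate (Schoenfeld, 1983).
from math import log
from scipy.stats import norm

def schoenfeld_events(hr, p, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (p * (1 - p) * log(hr) ** 2)

# e.g. a hazard ratio of 1.5 with half the cohort exposed
print(round(schoenfeld_events(hr=1.5, p=0.5)))   # ~191 events
```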