Modified procedure to determine acid-insoluble lignin in wood and pulp
DOE Office of Scientific and Technical Information (OSTI.GOV)
Effland, M.J.
1977-10-01
If wood is treated with strong acid, carbohydrates are hydrolyzed and solubilized. The insoluble residue is by definition lignin and can be measured gravimetrically. The standard method of analysis requires samples of 1 or 2 g of wood or pulp. In research at this laboratory these amounts of sample are often not available for analytical determinations. Thus we developed a modification of the standard procedure suitable for much smaller sample amounts. The modification is based on the procedure of Saeman. Wood samples require extraction prior to lignin analysis to remove acid-insoluble extractives that would otherwise be measured as lignin. Usually this involves only a standard extraction with ethanol-benzene. However, woods high in tannin must also be subjected to extraction with alcohol. Pulps seldom require extraction.
Seltzer, Michael D
2003-09-01
Laser ablation of pressed soil pellets was examined as a means of direct sample introduction to enable inductively coupled plasma mass spectrometry (ICP-MS) screening of soils for residual depleted uranium (DU) contamination. Differentiation between depleted uranium, an anthropogenic contaminant, and naturally occurring uranium was accomplished on the basis of measured 235U/238U isotope ratios. The amount of sample preparation required for laser ablation is considerably less than that typically required for aqueous sample introduction. The amount of hazardous laboratory waste generated is diminished accordingly. During the present investigation, 235U/238U isotope ratios measured for field samples were in good agreement with those derived from gamma spectrometry measurements. However, substantial compensation was required to mitigate the effects of impaired pulse counting attributed to sample inhomogeneity and sporadic introduction of uranium analyte into the plasma.
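To make the screening criterion concrete, a minimal sketch of the isotope-ratio test is given below. The reference ratios are textbook values (natural uranium has a 235U/238U atom ratio of about 0.00725; depleted uranium is typically near 0.002-0.003); the threshold, function name, and counts are illustrative assumptions, and a real analysis would first apply detector dead-time and mass-bias corrections.

```python
# Minimal sketch of DU screening from measured isotope pulse counts.
# Reference ratios are textbook values; the threshold is an assumption.
NATURAL_235_238 = 0.00725    # natural uranium atom ratio
TYPICAL_DU_235_238 = 0.002   # typical depleted uranium atom ratio

def classify_uranium(counts_235: float, counts_238: float,
                     threshold: float = 0.005) -> str:
    """Flag a sample as depleted or natural from raw ICP-MS counts."""
    ratio = counts_235 / counts_238
    return "depleted" if ratio < threshold else "natural"

print(classify_uranium(counts_235=210, counts_238=100_000))  # -> depleted
```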
Rapi, Stefano; Berardi, Margherita; Cellai, Filippo; Ciattini, Samuele; Chelazzi, Laura; Ognibene, Agostino; Rubeca, Tiziana
2017-07-24
Information on preanalytical variability is mandatory to bring laboratories up to ISO 15189 requirements. Fecal sampling is greatly affected by lack of harmonization in laboratory medicine. The aims of this study were to obtain information on the devices used for fecal sampling and to explore the effect of different amounts of feces on the results from the fecal immunochemical test for hemoglobin (FIT-Hb). Four commercial sample collection devices for quantitative FIT-Hb measurements were investigated. The volume of interest (VOI) of the probes was calculated from their diameter and geometry. Quantitative measurements of the mass of feces were carried out by gravimetry. The effects of an increased amount of feces on the analytical environment were investigated by measuring Hb values with a single analytical method. VOI was 8.22, 7.1 and 9.44 mm³ for probes that collected a target of 10 mg of feces, and 3.08 mm³ for one probe that targeted 2 mg of feces. The ratio between recovered and target amounts ranged from 56% to 121% across devices. Different changes in the measured Hb values were observed when increasing amounts of feces were added to the commercial buffers. The amounts of collected material are related to the design of the probes. Three out of 4 manufacturers declare the same target amount while using different sampling volumes and obtaining different amounts of collected material. The introduction of a standard probe to reduce preanalytical variability could be a useful step toward fecal test harmonization and fulfillment of the ISO 15189 requirements.
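The relation between probe geometry and collected mass can be illustrated with a back-of-envelope calculation. The sketch below converts each reported VOI into an expected mass, assuming a fecal density of roughly 1 mg/mm³; the density and the probe labels are assumptions, not values from the study.

```python
# Convert each probe's reported VOI (mm^3) into an expected collected mass,
# assuming a fecal density of ~1 mg/mm^3 (an assumption, not a study value).
DENSITY_MG_PER_MM3 = 1.0

probes = {"A": (8.22, 10), "B": (7.10, 10), "C": (9.44, 10), "D": (3.08, 2)}

for name, (voi_mm3, target_mg) in probes.items():
    expected_mg = voi_mm3 * DENSITY_MG_PER_MM3
    print(f"probe {name}: VOI {voi_mm3} mm^3 -> ~{expected_mg:.1f} mg "
          f"({100 * expected_mg / target_mg:.0f}% of the {target_mg} mg target)")
```

Even under this crude assumption, the expected recoveries scatter widely around the declared targets, in line with the 56% to 121% range reported.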
Sampling wild species to conserve genetic diversity
USDA-ARS?s Scientific Manuscript database
Sampling seed from natural populations of crop wild relatives requires choice of the locations to sample from and the amount of seed to sample. While this may seem like a simple choice, in fact careful planning of a collector’s sampling strategy is needed to ensure that a crop wild collection will ...
Wang, Yu Annie; Wu, Di; Auclair, Jared R; Salisbury, Joseph P; Sarin, Richa; Tang, Yang; Mozdzierz, Nicholas J; Shah, Kartik; Zhang, Anna Fan; Wu, Shiaw-Lin; Agar, Jeffery N; Love, J Christopher; Love, Kerry R; Hancock, William S
2017-12-05
With the advent of biosimilars to the U.S. market, it is important to have better analytical tools to ensure product quality from batch to batch. In addition, with the recent popularity of continuous processes for the production of biopharmaceuticals, the traditional bottom-up method alone is no longer sufficient for product characterization and quality analysis. The bottom-up method requires large amounts of material for analysis and is labor-intensive and time-consuming. Additionally, in this analysis, digestion of the protein with enzymes such as trypsin can introduce artifacts and modifications that increase the complexity of the analysis. On the other hand, a top-down method requires a minimal amount of sample and allows for analysis of the intact protein mass and of the sequence generated from fragmentation within the instrument. However, fragmentation usually occurs at the N-terminal and C-terminal ends of the protein, with less internal fragmentation. Herein, we combine the use of complementary techniques, a top-down and a bottom-up method, for the characterization of human growth hormone degradation products. Notably, our approach required small amounts of sample, a requirement imposed by the sample constraints of small-scale manufacturing. Using this approach, we were able to characterize various protein variants, including post-translational modifications such as oxidation and deamidation, residual leader sequence, and proteolytic cleavage. Thus, we were able to highlight the complementarity of top-down and bottom-up approaches, which achieved the characterization of a wide range of product variants in samples of human growth hormone secreted from Pichia pastoris.
Screening for tinea unguium by Dermatophyte Test Strip.
Tsunemi, Y; Takehara, K; Miura, Y; Nakagami, G; Sanada, H; Kawashima, M
2014-02-01
The direct microscopy, fungal culture and histopathology that are necessary for the definitive diagnosis of tinea unguium are disadvantageous in that detection sensitivity is affected by the level of skill of the person who performs the testing, and the procedures take a long time. The Dermatophyte Test Strip, which was developed recently, can simply and easily detect filamentous fungi in samples in a short time, and there are expectations for its use as a method for tinea unguium screening. With this in mind, we examined the detection capacity of the Dermatophyte Test Strip for tinea unguium. The presence or absence of fungal elements was judged by direct microscopy and Dermatophyte Test Strip in 165 nail samples obtained from residents in nursing homes for the elderly. Moreover, the minimum sample amount required for positive determination was estimated using 32 samples that showed positive results by Dermatophyte Test Strip. The Dermatophyte Test Strip showed 98% sensitivity, 78% specificity, 84.8% positive predictive value, 97% negative predictive value and a positive and negative concordance rate of 89.1%. The minimum sample amount required for positive determination was 0.002-0.722 mg. The Dermatophyte Test Strip showed very high sensitivity and negative predictive value, and was considered a potentially useful method for tinea unguium screening. Positive determination was considered to be possible with a sample amount of about 1 mg.
DOT National Transportation Integrated Search
2003-06-30
This study investigated the effect of moisture and amount of baghouse fines on AC mixes. Two types of baghouse fines, each with a different gradation, were used in varying concentrations to prepare laboratory samples. The binder used was PG64-22 ...
Bayesian techniques for surface fuel loading estimation
Kathy Gray; Robert Keane; Ryan Karpisz; Alyssa Pedersen; Rick Brown; Taylor Russell
2016-01-01
A study by Keane and Gray (2013) compared three sampling techniques for estimating surface fine woody fuels. Known amounts of fine woody fuel were distributed on a parking lot, and researchers estimated the loadings using different sampling techniques. An important result was that precise estimates of biomass required intensive sampling for both the planar intercept...
Guideline on Isotope Dilution Mass Spectrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaffney, Amy
Isotope dilution mass spectrometry is used to determine the concentration of an element of interest in a bulk sample. It is a destructive analysis technique that is applicable to a wide range of analytes and bulk sample types. With this method, a known amount of a rare isotope, or 'spike', of the element of interest is added to a known amount of sample. The element of interest is chemically purified from the bulk sample, the isotope ratio of the spiked sample is measured by mass spectrometry, and the concentration of the element of interest is calculated from this result. This method is widely used, although the mass spectrometer required for this analysis can be fairly expensive.
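The calculation at the heart of the method is the standard single-spike isotope dilution equation. The sketch below is a generic textbook form, not the guideline's specific procedure: with R the measured A/B isotope ratio of the blend and a, b the isotope abundances of spike and sample, the analyte amount follows from a simple mass balance. The example numbers are illustrative.

```python
# Generic single-spike IDMS mass balance (textbook form, illustrative values).
# R_blend = (n_x*a_x + n_s*a_s) / (n_x*b_x + n_s*b_s), solved for n_x.
def idms_moles(n_spike, a_spike, b_spike, a_sample, b_sample, r_blend):
    """Moles of the element of interest in the sample."""
    return n_spike * (a_spike - r_blend * b_spike) / (r_blend * b_sample - a_sample)

# Example: natural-U sample (0.72% 235U) with a spike enriched to 95% 235U.
n = idms_moles(n_spike=1e-9, a_spike=0.95, b_spike=0.05,
               a_sample=0.0072, b_sample=0.9928, r_blend=0.1966)
print(f"{n:.2e} mol of U in the sample")  # -> ~5.0e-09 mol
```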
Stress Measurement by Geometrical Optics
NASA Technical Reports Server (NTRS)
Robinson, R. S.; Rossnagel, S. M.
1986-01-01
Fast, simple technique measures stresses in thin films. Sample disk bowed by stress into approximately spherical shape. Reflected image of disk magnified by amount related to curvature and, therefore, stress. Method requires sample substrate, such as cheap microscope cover slide, two mirrors, laser light beam, and screen.
Kreienbaum, M A; Page, D P
1986-01-01
The stability of pilocarpine hydrochloride and pilocarpine nitrate ophthalmic solutions stored in hospital pharmacies across the United States was studied. Through a voluntary drug stability program, FDA selected 252 samples (representing 11 manufacturers) from pharmacies representing an adequate cross section of the country. The samples were analyzed for strength, identification, pH, and isopilocarpine and pilocarpic acid impurities. All samples of pilocarpine nitrate met USP requirements. Eight samples of pilocarpine hydrochloride exceeded the USP upper limit for strength. All of these samples were in 1- and 2-mL bottles. The amount of isopilocarpine found ranged from 1 to 6.4%, and the amount of pilocarpic acid from 1.5 to 10.1%. Although pilocarpine salts in ophthalmic solution decompose into isopilocarpine and pilocarpic acid under various conditions of storage, the amount of pilocarpine maintained remains within the compendial limits. However, there is a problem of evaporation from some of the 1- and 2-mL containers in which this product is supplied.
ELECTROFISHING DISTANCE NEEDED TO ESTIMATE FISH SPECIES RICHNESS IN RAFTABLE WESTERN USA RIVERS
A critical issue in river monitoring is the minimum amount of sampling distance required to adequately represent the fish assemblage of a reach. Determining adequate sampling distance is important because it affects estimates of fish assemblage integrity and diversity at local a...
NASA Technical Reports Server (NTRS)
Pivirotto, Donna Shirley; Penn, Thomas J.; Dias, William C.
1989-01-01
Results of FY88 studies of a sample-collecting Mars rover are presented. A variety of rover concepts are discussed which include different technical approaches to rover functions. The performance of rovers with different levels of automation is described and compared to the science requirement for 20 to 40 km to be traversed on the Martian surface and for 100 rock and soil samples to be collected. The analysis shows that a considerable amount of automation in roving and sampling is required to meet this requirement. Additional performance evaluation shows that advanced RTGs producing 500 W and 350 Wh of battery storage are needed to supply the rover.
Omics for Precious Rare Biosamples: Characterization of Ancient Human Hair by a Proteomic Approach.
Fresnais, Margaux; Richardin, Pascale; Sepúlveda, Marcela; Leize-Wagner, Emmanuelle; Charrié-Duhaut, Armelle
2017-07-01
Omics technologies have far-reaching applications beyond clinical medicine. A case in point is the analysis of ancient hair samples. Indeed, hair is an important biological indicator that has become a material of choice in archeometry to study ancient civilizations and their environment. Current characterization of ancient hair is based on elemental and structural analyses, but only a few studies have focused on the molecular aspects of ancient hair proteins (keratins) and their conservation state. In such cases, applied extraction protocols require large amounts of raw hair, from 30 to 100 mg. In the present study, we report an optimized new proteomic approach to accurately identify archeological hair proteins, and assess their preservation state, while using a minimum of raw material. Testing and adaptation of three protocols and of nano liquid chromatography-tandem mass spectrometry (nanoLC-MS/MS) parameters were performed on modern hair. On the basis of mass spectrometry data quality, and of the required initial sample amount, the most promising workflow was selected and applied to an ancient archeological sample, dated to about 3880 years before present. Finally, and importantly, we were able to identify 11 ancient hair proteins and to visualize the preservation state of the mummy's hair from only 500 μg of raw material. The results presented here pave the way for new insights into the understanding of hair protein alteration processes such as those due to aging and ecological exposures. This work could enable omics scientists to apply a proteomic approach to precious and rare samples, not only in the context of archeometrical studies but also for future applications that would require the use of very small amounts of sample.
Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.
2016-01-01
The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of areas under ROC/EROC curve were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment.
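For readers unfamiliar with the observer model, a minimal CHO computation is sketched below: images are projected onto a small set of channels, a Hotelling template is formed from the channelized means and pooled covariance, and each test image is reduced to a scalar rating. The channel matrix and data handling are placeholders; the paper's shuffle-based variance estimation is not shown.

```python
import numpy as np

# Minimal channelized Hotelling observer (CHO) sketch.
# `chan` is an (n_pixels x n_channels) matrix of channel profiles
# (e.g. Gabor or Laguerre-Gauss); here it is left as a placeholder input.
def cho_template(imgs_absent, imgs_present, chan):
    """Train the CHO; imgs_* are (n_images x n_pixels) arrays."""
    v0 = imgs_absent @ chan                   # channelized features, no signal
    v1 = imgs_present @ chan                  # channelized features, signal
    cov = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
    dmean = v1.mean(axis=0) - v0.mean(axis=0)
    return np.linalg.solve(cov, dmean)        # Hotelling template

def cho_ratings(imgs, chan, template):
    """Scalar decision variable for each test image."""
    return (imgs @ chan) @ template
```

The area under the ROC curve is then estimated from the two sets of ratings, for example with the Wilcoxon-Mann-Whitney statistic.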
Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro
2018-05-09
Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.
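As context for the comparison, the sketch below shows the kind of multinomial naive Bayes baseline the study measures against; it is an illustrative implementation with Laplace smoothing, not the authors' cognitive-bias model.

```python
from collections import Counter
import math

# Baseline multinomial naive Bayes spam classifier (illustrative).
class NaiveBayes:
    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: math.log(labels.count(c) / len(labels))
                      for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for doc, y in zip(docs, labels):
            self.counts[y].update(doc.split())
        self.vocab = {w for c in self.classes for w in self.counts[c]}

    def predict(self, doc):
        def score(c):  # log prior + smoothed log likelihood of each word
            total = sum(self.counts[c].values()) + len(self.vocab)
            return self.prior[c] + sum(
                math.log((self.counts[c][w] + 1) / total) for w in doc.split())
        return max(self.classes, key=score)

nb = NaiveBayes()
nb.fit(["win cash now", "meeting at noon", "cash prize win"],
       ["spam", "ham", "spam"])
print(nb.predict("win a cash prize"))  # -> spam
```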
Intelligent Sampling of Hazardous Particle Populations in Resource-Constrained Environments
NASA Astrophysics Data System (ADS)
McCollough, J. P.; Quinn, J. M.; Starks, M. J.; Johnston, W. R.
2017-10-01
Sampling of anomaly-causing space environment drivers is necessary for both real-time operations and satellite design efforts, and optimizing measurement sampling helps minimize resource demands. Relating these measurements to spacecraft anomalies requires the ability to resolve spatial and temporal variability in the energetic charged particle hazard of interest. Here we describe a method for sampling particle fluxes informed by magnetospheric phenomenology so that, along a given trajectory, the variations from both temporal dynamics and spatial structure are adequately captured while minimizing oversampling. We describe the coordinates, sampling method, and specific regions and parameters employed. We compare resulting sampling cadences with data from spacecraft spanning the regions of interest during a geomagnetically active period, showing that the algorithm retains the gross features necessary to characterize environmental impacts on space systems in diverse orbital regimes while greatly reducing the amount of sampling required. This enables sufficient environmental specification within a resource-constrained context, such as limited telemetry bandwidth, processing requirements, and timeliness.
Microfluidic point-of-care blood panel based on a novel technique: Reversible electroosmotic flow
Mohammadi, Mahdi; Madadi, Hojjat; Casals-Terré, Jasmina
2015-01-01
A wide range of diseases and conditions are monitored or diagnosed from blood plasma, but the ability to analyze a whole blood sample with the requirements for a point-of-care device, such as robustness, user-friendliness, and simple handling, remains unmet. Microfluidics technology offers the possibility not only to work with fresh thumb-pricked whole blood but also to maximize the amount of plasma obtained from the initial sample, and therefore the possibility to implement multiple tests in a single cartridge. The microfluidic design presented in this paper is a combination of cross-flow filtration with a reversible electroosmotic flow that prevents clogging at the filter entrance and maximizes the amount of separated plasma. The main advantage of this design is its efficiency, since from a small amount of sample (a single droplet ∼10 μl) almost 10% of this (approx 1 μl) is extracted and collected with high purity (more than 99%) in a reasonable time (5-8 min). To validate the quality and quantity of the separated plasma and to show its potential as a clinical tool, the microfluidic chip has been combined with lateral flow immunochromatography technology to perform a qualitative detection of the thyroid-stimulating hormone and a blood panel for measuring cardiac Troponin and Creatine Kinase MB. The results from the microfluidic system are comparable to previous commercial lateral flow assays that required more sample for implementing fewer tests.
Application of PLE for the determination of essential oil components from Thymus vulgaris L.
Dawidowicz, Andrzej L; Rado, Ewelina; Wianowska, Dorota; Mardarowicz, Marek; Gawdzik, Jan
2008-08-15
Essential-oil plants, due to their long presence in human history, their status in culinary arts, and their use in medicine and perfume manufacture, are among the most frequently examined stock materials in scientific and industrial laboratories. Because of the large number of freshly cut, dried or frozen plant samples requiring the determination of essential oil amount and composition, a fast, safe, simple, efficient and highly automated sample preparation method is needed. Five sample preparation methods (steam distillation, extraction in the Soxhlet apparatus, supercritical fluid extraction, solid phase microextraction and pressurized liquid extraction) used for the isolation of aroma-active components from Thymus vulgaris L. are compared in the paper. The methods are mainly discussed with regard to the recovery of components which typically exist in essential oil isolated by steam distillation. According to the obtained data, PLE is the most efficient sample preparation method for determining the essential oil from the thyme herb. Although co-extraction of non-volatile ingredients is the main drawback of this method, it is characterized by the highest yield of essential oil components and the shortest extraction time required. Moreover, the relative peak amounts of essential oil components revealed by PLE are comparable with those obtained by steam distillation, which is recognized as the standard sample preparation method for the analysis of essential oils in aromatic plants.
Autonomous Sample Acquisition for Planetary and Small Body Explorations
NASA Technical Reports Server (NTRS)
Ghavimi, Ali R.; Serricchio, Frederick; Dolgin, Ben; Hadaegh, Fred Y.
2000-01-01
Robotic drilling and autonomous sample acquisition are considered key technology requirements for future planetary and small-body exploration missions. Core sampling or subsurface drilling operations are envisioned to be performed from rovers or landers. These supporting platforms are inherently flexible and light, and can withstand only a limited amount of reaction force and torque. This, together with the unknown properties of the sampled materials, makes the sampling operation a tedious and quite challenging task. This paper highlights the recent advancements in the design and development of the sample acquisition control system for the in situ scientific exploration of planetary and small-body missions.
Hong, Mineui; Bang, Heejin; Van Vrancken, Michael; Kim, Seungtae; Lee, Jeeyun; Park, Se Hoon; Park, Joon Oh; Park, Young Suk; Lim, Ho Yeong; Kang, Won Ki; Sun, Jong-Mu; Lee, Se Hoon; Ahn, Myung-Ju; Park, Keunchil; Kim, Duk Hwan; Lee, Seunggwan; Park, Woongyang; Kim, Kyoung-Mee
2017-01-01
To generate accurate next-generation sequencing (NGS) data, the amount and quality of DNA extracted is critical. We analyzed 1564 tissue samples from patients with metastatic or recurrent solid tumor submitted for NGS according to their sample size, acquisition method, organ, and fixation to propose appropriate tissue requirements. Of the 1564 tissue samples, 481 (30.8%) consisted of fresh-frozen (FF) tissue, and 1083 (69.2%) consisted of formalin-fixed paraffin-embedded (FFPE) tissue. We obtained successful NGS results in 95.9% of cases. Of the 481 FF biopsies, 262 tissue samples were from lung, and the mean fragment size was 2.4 mm. Compared to lung, GI tract tumor fragments showed a significantly lower DNA extraction failure rate (2.1% versus 6.1%, p = 0.04). For FFPE biopsy samples, the size of biopsy tissue was similar regardless of tumor type, with a mean of 0.8 × 0.3 cm, and the mean DNA yield per unstained slide was 114 ng. We obtained the highest amount of DNA from the colorectum (2353 ng) and the lowest amount from the hepatobiliary tract (760.3 ng), likely due to a relatively smaller biopsy size, extensive hemorrhage and necrosis, and lower tumor volume. On one unstained slide from FFPE operation specimens, the mean size of the specimen was 2.0 × 1.0 cm, and the mean DNA yield per unstained slide was 1800 ng. In conclusion, we present our experience with tissue requirements for an appropriate NGS workflow: > 1 mm² for FF biopsy, > 5 unstained slides for FFPE biopsy, and > 1 unstained slide for FFPE operation specimens, for successful test results in 95.9% of cases.
NASA Astrophysics Data System (ADS)
Kántor, T.; Maestre, S.; de Loos-Vollebregt, M. T. C.
2005-10-01
In the present work electrothermal vaporization (ETV) was used in both inductively coupled plasma mass spectrometry (ICP-MS) and optical emission spectrometry (OES) for the introduction of solution samples. The effects of a (Pd + Mg)-nitrate modifier and of variable amounts of CaCl2 matrix/modifier were studied on ETV-ICP-MS signals of Cr, Cu, Fe, Mn and Pb and on ETV-ICP-OES signals of Ag, Cd, Co, Cu, Fe, Ga, Mn and Zn. With the use of matrix-free standard solutions the analytical curves were bent toward the signal axes (as expected from earlier studies), which was observed in the 20-800 pg mass range by ICP-MS and in the 1-50 ng mass range by ICP-OES detection. The degree of curvature was, however, different with the use of single-element and multi-element standards. When applying the noted chemical modifiers (aerosol carriers) in microgram amounts, linear analytical curves were found over nearly two orders of magnitude in mass. Changes of the CaCl2 matrix concentration (loaded amount of 2-10 μg Ca) resulted in less than 5% changes in MS signals of 5 elements (each below 1 ng) and in OES signals of 22 analytes (each below 15 ng). Exceptions were Pb (ICP-MS) and Cd (ICP-OES), for which the sensitivity increase by the Pd + Mg modifier was much larger compared to the other elements studied. The general conclusion is that quantitative analysis with ETV sample introduction requires matrix matching, or matrix replacement by an appropriate chemical modifier, in the specific concentration ranges of the analytes. This requirement is similar to that of pneumatic nebulization of solutions, the most commonly used introduction technique, when samples with high matrix concentrations are concerned.
Fraisier, V; Clouvel, G; Jasaitis, A; Dimitrov, A; Piolot, T; Salamero, J
2015-09-01
Multiconfocal microscopy gives a good compromise between fast imaging and reasonable resolution. However, the low intensity of live fluorescent emitters is a major limitation to this technique. Aberrations induced by the optical setup, especially the mismatch of the refractive index and the biological sample itself, distort the point spread function and further reduce the amount of detected photons. Altogether, this leads to impaired image quality, preventing accurate analysis of molecular processes in biological samples and imaging deep in the sample. The amount of detected fluorescence can be improved with adaptive optics. Here, we used a compact adaptive optics module (adaptive optics box for sectioning optical microscopy), which was specifically designed for spinning disk confocal microscopy. The module overcomes undesired anomalies by correcting for most of the aberrations in confocal imaging. Existing aberration detection methods require prior illumination, which bleaches the sample. To avoid multiple exposures of the sample, we established an experimental model describing the depth dependence of major aberrations. This model allows us to correct for those aberrations when performing a z-stack, gradually increasing the amplitude of the correction with depth. It does not require illumination of the sample for aberration detection, thus minimizing photobleaching and phototoxicity. With this model, we improved both signal-to-background ratio and image contrast. Here, we present comparative studies on a variety of biological samples.
Preanalytical requirements of urinalysis
Delanghe, Joris; Speeckaert, Marijn
2014-01-01
Urine may be a waste product, but it contains an enormous amount of information. Well-standardized procedures for collection, transport, sample preparation and analysis should become the basis of an effective diagnostic strategy for urinalysis. As reproducibility of urinalysis has been greatly improved due to recent technological progress, preanalytical requirements of urinalysis have gained importance and have become stricter. Since the patients themselves often sample urine specimens, urinalysis is very susceptible to preanalytical issues. Various sampling methods and inappropriate specimen transport can cause important preanalytical errors. The use of preservatives may be helpful for particular analytes. Unfortunately, a universal preservative that allows a complete urinalysis does not (yet) exist. The preanalytical aspects are also of major importance for newer applications (e.g. metabolomics). The present review deals with the current preanalytical problems and requirements for the most common urinary analytes.
NASA Astrophysics Data System (ADS)
Hodgson, Lorna; Thompson, Andrew
2012-03-01
This paper presents the results of a non-HMDS (non-silane) adhesion promoter that was used to reduce the zeta potential of a very thin (proprietary) polymer on silicon. By reducing the zeta potential, as measured by the minimum sample required to fully coat a wafer, the amount of polymer required to coat silicon substrates was significantly reduced in the manufacture of X-ray windows used for high transmission of low-energy X-rays. Moreover, this approach used an aqueous-based adhesion promoter, described as a cationic surface-active agent, that has been shown to improve adhesion of photoresists (positive, negative, epoxy [SU8], e-beam and dry film). As well as reducing the amount of polymer required to coat substrates, this aqueous adhesion promoter is nonhazardous and contains non-volatile solvents.
NASA Astrophysics Data System (ADS)
Bitner, Rex M.; Koller, Susan C.
2004-06-01
Three different methods of automated high-throughput purification of genomic DNA from plant materials processed in 96-well plates are described. One method uses MagneSil paramagnetic particles to purify DNA present in single leaf punch samples or small seed samples, using 320 μl capacity 96-well plates, which minimizes reagent and plate costs. A second method uses 2.2 ml and 1.2 ml capacity plates and allows the purification of larger amounts of DNA from 5-6 punches of material or larger amounts of seeds. The third method uses the MagneSil ONE purification system to purify a fixed amount of DNA, thus simplifying downstream applications by normalizing the amounts of DNA so they do not require quantitation. Protocols for the purification of a fixed yield of DNA, e.g. 1 μg, from plant leaf or seed samples using MagneSil paramagnetic particles and a Beckman-Coulter BioMek FX robot are described. DNA from all three methods is suitable for applications such as PCR, RAPD, STR, READIT SNP analysis, and multiplexed PCR systems. The MagneSil ONE system is also suitable for use with SNP detection systems such as Third Wave Technology's Invader methods.
FINDING THE BALANCE - QUALITY ASSURANCE REQUIREMENTS VS. RESEARCH NEEDS
Investigators often misapply quality assurance (QA) procedures and may consider QA as a hindrance to developing test plans for sampling and analysis. If used properly, however, QA is the driving force for collecting the right kind and proper amount of data. Researchers must use Q...
Comparison of Impurities in Charcoal Sorbents Found by Neutron Activation Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doll, Charles G.; Finn, Erin C.; Cantaloub, Michael G.
2013-01-01
Neutron activation of gas samples in a reactor often requires a medium to retain sufficient amounts of the gas for analysis. Charcoal is commonly used to adsorb gas and hold it for activation; however, the amount of activated sodium in the charcoal after irradiation swamps most signals of interest. Neutron activation analysis (NAA) was performed on several commonly available charcoal samples in an effort to determine the activation background. The results for several elements, including the dominant sodium element, are reported. It was found that ECN charcoal had the lowest elemental background, containing sodium at 2.65 ± 0.05 ppm, as well as trace levels of copper and tungsten.
Identification of carbohydrate anomers using ion mobility-mass spectrometry.
Hofmann, J; Hahm, H S; Seeberger, P H; Pagel, K
2015-10-08
Carbohydrates are ubiquitous biological polymers that are important in a broad range of biological processes. However, owing to their branched structures and the presence of stereogenic centres at each glycosidic linkage between monomers, carbohydrates are harder to characterize than are peptides and oligonucleotides. Methods such as nuclear magnetic resonance spectroscopy can be used to characterize glycosidic linkages, but this technique requires milligram amounts of material and cannot detect small amounts of coexisting isomers. Mass spectrometry, on the other hand, can provide information on carbohydrate composition and connectivity for even small amounts of sample, but it cannot be used to distinguish between stereoisomers. Here, we demonstrate that ion mobility-mass spectrometry--a method that separates molecules according to their mass, charge, size, and shape--can unambiguously identify carbohydrate linkage-isomers and stereoisomers. We analysed six synthetic carbohydrate isomers that differ in composition, connectivity, or configuration. Our data show that coexisting carbohydrate isomers can be identified, and relative concentrations of the minor isomer as low as 0.1 per cent can be detected. In addition, the analysis is rapid, and requires no derivatization and only small amounts of sample. These results indicate that ion mobility-mass spectrometry is an effective tool for the analysis of complex carbohydrates. This method could have an impact on the field of carbohydrate synthesis similar to that of the advent of high-performance liquid chromatography on the field of peptide assembly in the late 1970s.
Imaging samples larger than the field of view: the SLS experience
NASA Astrophysics Data System (ADS)
Vogiatzis Oikonomidis, Ioannis; Lovric, Goran; Cremona, Tiziana P.; Arcadu, Filippo; Patera, Alessandra; Schittny, Johannes C.; Stampanoni, Marco
2017-06-01
Volumetric datasets with micrometer spatial and sub-second temporal resolutions are nowadays routinely acquired using synchrotron X-ray tomographic microscopy (SRXTM). Although SRXTM technology allows the examination of multiple samples with short scan times, many specimens are larger than the field-of-view (FOV) provided by the detector. The extension of the FOV in the direction perpendicular to the rotation axis remains non-trivial. We present a method that can efficiently increase the FOV by merging volumetric datasets obtained from region-of-interest tomographies at different 3D positions of the sample, with a minimal amount of artefacts and the ability to handle large amounts of data. The method has been successfully applied to the three-dimensional imaging of a small number of mouse lung acini of intact animals, where pixel sizes down to the micrometer range and short exposure times are required.
How much is enough? An analysis of CD measurement amount for mask characterization
NASA Astrophysics Data System (ADS)
Ullrich, Albrecht; Richter, Jan
2009-10-01
The demands on CD (critical dimension) metrology, in terms of both reproducibility and measurement uncertainty, steadily increase from node to node. Different mask characterization requirements have to be addressed, such as very small features, unevenly distributed features, contacts, and semi-dense structures, to name only a few. Usually this enhanced need is met by an increasing number of CD measurements, where the new CD requirements are added to the well-established CD characterization recipe. This leads straightforwardly to prolonged cycle times and highly complex evaluation routines. At the same time, mask processes are continuously improved and become more stable. The enhanced stability offers potential to actually reduce the number of measurements. Thus, in this work we address the fundamental question of how many CD measurements are needed for mask characterization at a given confidence level. We used analysis of variances (ANOVA) to distinguish various contributors such as the mask making process, measurement tool stability and measurement methodology. These contributions have been investigated for classical photomask CD specifications, e.g. mean to target, CD uniformity, target offset tolerance and x-y bias. We found that, depending on the specification, the relative importance of the contributors changes. Interestingly, short- and long-term metrology contributions are not the only dominant ones. The number of measurements and their spatial distribution on the mask layout (sampling methodology) can also be the most important part of the variance. The knowledge of the contributions can be used to optimize the sampling plan. As a major finding, we conclude that there is potential to remove a significant number of measurements without losing confidence at all. Here, full sampling in x and y, as well as full sampling across different features, can be shortened by almost 50%.
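The core of the "how many measurements" question is the standard relation between sample size and confidence interval half-width. The sketch below applies it with illustrative numbers, not values from the paper; as the ANOVA discussion implies, only the random part of the variance shrinks with n, so systematic spatial contributions must be attacked through the sampling plan instead.

```python
import math

# n = (z * sigma / E)^2 measurements are needed for a two-sided confidence
# interval of half-width E on the mean CD, given random variability sigma.
def n_required(sigma_nm: float, half_width_nm: float, z: float = 1.96) -> int:
    return math.ceil((z * sigma_nm / half_width_nm) ** 2)

# Illustrative: sigma of 0.6 nm, target +/-0.25 nm at 95% confidence.
print(n_required(sigma_nm=0.6, half_width_nm=0.25))  # -> 23 measurements
```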
Direct analysis of organic priority pollutants by IMS
NASA Technical Reports Server (NTRS)
Giam, C. S.; Reed, G. E.; Holliday, T. L.; Chang, L.; Rhodes, B. J.
1995-01-01
Many routine methods for monitoring trace amounts of atmospheric organic pollutants consist of several steps. Typical steps are: (1) collection of the air sample; (2) trapping of organics from the sample; (3) extraction of the trapped organics; and (4) identification of the organics in the extract by GC (gas chromatography), HPLC (high performance liquid chromatography), or MS (mass spectrometry). These methods are often cumbersome and time consuming. A simple and fast method for monitoring atmospheric organics using an IMS (ion mobility spectrometer) is proposed. This method has a short sampling time and does not require extraction of the organics, since the sample is placed directly in the IMS. The purpose of this study was to determine the responses of the IMS to organic 'priority pollutants'. Priority pollutants including representative polycyclic aromatic hydrocarbons (PAHs), phthalates, phenols, chlorinated pesticides, and polychlorinated biphenyls (PCBs) were analyzed in both the positive and negative detection mode at ambient atmospheric pressure. Detection mode and amount detected are presented.
Low-cost floating emergence net and bottle trap: Comparison of two designs
Cadmus, Pete; Pomeranz, Justin; Kraus, Johanna M.
2016-01-01
Sampling emergent aquatic insects is of interest to many freshwater ecologists. Many quantitative emergence traps require the use of aspiration for collection. However, aspiration is infeasible in studies with the large amounts of replication often required in large biomonitoring projects. We designed an economical, collapsible, pyramid-shaped floating emergence trap with an external collection bottle that avoids the need for aspiration. This design was compared experimentally to a design of similar dimensions that relied on aspiration, to ensure comparable results. The pyramid-shaped design captured twice as many total emerging insects. When a preservative was used in the bottle collectors, >95% of the emergent abundance was collected in the bottle. When no preservative was used, >81% of the total insects were collected from the bottle. In addition to capturing fewer emergent insects, the traps that required aspiration took significantly longer to sample. Large studies and studies sampling remote locations could benefit from the economical construction, speed of sampling, and capture efficiency.
USDA-ARS?s Scientific Manuscript database
Market demands for cotton varieties with improved fiber properties also call for the development of fast, reliable analytical methods for monitoring fiber development and measuring their properties. Currently, cotton breeders rely on instrumentation that can require significant amounts of sample, w...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-18
... import explosive materials or ammonium nitrate must, when required by the Director, furnish samples of such explosive materials or ammonium nitrate; information on chemical composition of those products... ammonium nitrate. (5) An estimate of the total number of respondents and the amount of time estimated for...
Sampling Mars: Analytical requirements and work to do in advance
NASA Technical Reports Server (NTRS)
Koeberl, Christian
1988-01-01
Sending a mission to Mars to collect samples and return them to Earth for analysis is without doubt one of the most exciting and important tasks for planetary science in the near future. Many scientifically important questions are associated with knowledge of the composition and structure of Martian samples. Among the most exciting is the clarification of the SNC problem: proving or disproving a possible Martian origin of these meteorites. Since SNC meteorites have been used to infer the chemistry of the planet Mars and its evolution (including its accretion history), it would be important to know if the whole story is true. But before addressing possible scientific results, we have to deal with the analytical requirements and with possible pre-return work. It is unrealistic to expect that a Mars sample return mission will bring back anything close to the amount returned by the Apollo missions. It will be more like the amount returned by the Luna missions, or at least of that order of magnitude. This requires very careful sample selection and very precise analytical techniques. These techniques should use minimal sample sizes while optimizing the scientific output. The possibility of working with extremely small samples should not obscure another problem: possible sampling errors. As we know from terrestrial geochemical studies, sampling procedures must be elaborate and carefully designed to avoid sampling errors. The significance of analyzing a milligram- or submilligram-sized sample and relating it to the genesis of whole planetary crusts has to be viewed with care. This leaves a dilemma: on the one hand, to minimize the sample size as far as possible in order to return as many different samples as possible, and on the other hand, to take samples large enough to be representative. Whole-rock samples are very useful but should not exceed the 20 to 50 g range, except in cases of extreme inhomogeneity, because for larger samples the information tends to become redundant. Soil samples should be in the 2 to 10 g range, permitting the splitting of the returned samples for studies in different laboratories with a variety of techniques.
Honrado, Carlos; Dong, Tao
2014-01-01
Incidence of urinary tract infections (UTIs) is the second highest among all infections; thus, there is a high demand for bacteriuria detection. Escherichia coli is the main cause of UTIs, with microscopy methods and urine culture being the detection standard for these bacteria. However, the urine sampling and analysis required for these methods can be both time-consuming and complex. This work proposes a capacitive touch screen sensor (CTSS) concept as a feasible alternative for a portable UTI detection device. Finite element method (FEM) simulations were conducted with a CTSS model. An exponential response of the model to increasing amounts of E. coli and liquid samples was observed. A measurable capacitance change due to E. coli presence and a tangible difference in the response given to urine and water samples were also detected. Preliminary experimental studies were also conducted on a commercial CTSS using liquid solutions with increasing amounts of dissolved ions. The CTSS was capable of distinguishing different volumes of liquids, also giving an exponential response. Furthermore, the CTSS gave higher responses to solutions with a larger amount of ions. Urine samples gave the highest response among the tested liquids. Thus, the CTSS showed the capability to differentiate solutions by their ionic content.
Multiplex cDNA quantification method that facilitates the standardization of gene expression data
Gotoh, Osamu; Murakami, Yasufumi; Suyama, Akira
2011-01-01
Microarray-based gene expression measurement is one of the major methods for transcriptome analysis. However, current microarray data are substantially affected by microarray platforms and RNA references because the microarray method provides merely the relative amounts of gene expression levels. Therefore, valid comparisons of microarray data require standardized platforms, internal and/or external controls and complicated normalizations. These requirements impose limitations on the extensive comparison of gene expression data. Here, we report an effective approach to removing these unfavorable limitations by measuring the absolute amounts of gene expression levels on common DNA microarrays. We have developed a multiplex cDNA quantification method called GEP-DEAN (gene expression profiling by DCN-encoding-based analysis). The method was validated using chemically synthesized DNA strands of known quantities and cDNA samples prepared from mouse liver, demonstrating that the absolute amounts of cDNA strands were successfully measured with a sensitivity of 18 zmol in a highly multiplexed manner in 7 h.
Adsorption behavior of natural anthocyanin dye on mesoporous silica
NASA Astrophysics Data System (ADS)
Kohno, Yoshiumi; Haga, Eriko; Yoda, Keiko; Shibata, Masashi; Fukuhara, Choji; Tomita, Yasumasa; Maeda, Yasuhisa; Kobayashi, Kenkichiro
2014-01-01
Because of its non-toxicity, naturally occurring anthocyanin is potentially suitable as a colorant for foods and cosmetics. For wider use of anthocyanin, immobilization on an inorganic host for easy handling, as well as improvement of its stability, is required. This study focuses on the adsorption of a significant amount of natural anthocyanin dye onto mesoporous silica, and on the stability enhancement of the anthocyanin by complexation. The anthocyanin was successfully adsorbed on HMS-type mesoporous silica containing a small amount of aluminum. The amount of adsorbed anthocyanin was increased by modifying the pore wall with n-propyl groups to make the silica surface hydrophobic. The light fastness of the adsorbed anthocyanin was improved by forming the composite with the HMS samples containing aluminum, although the degree of improvement is modest. It is proposed that incorporation of the anthocyanin molecule deep inside the mesopore is required for further enhancement of the stability.
Austin, Melissa C; Smith, Christina; Pritchard, Colin C; Tait, Jonathan F
2016-02-01
Complex molecular assays are increasingly used to direct therapy and provide diagnostic and prognostic information but can require relatively large amounts of DNA. To provide data to pathologists to help them assess tissue adequacy and provide prospective guidance on the amount of tissue that should be procured, we used slide-based measurements to establish a relationship between processed tissue volume and DNA yield by A260 from 366 formalin-fixed, paraffin-embedded tissue samples submitted for the 3 most common molecular assays performed in our laboratory (EGFR, KRAS, and BRAF). We determined the average DNA yield per unit of tissue volume, and we used the distribution of DNA yields to calculate the minimum volume of tissue that should yield sufficient DNA 99% of the time. All samples with a volume greater than 8 mm³ yielded at least 1 μg of DNA, and more than 80% of samples producing less than 1 μg were extracted from less than 4 mm³ of tissue. Nine cubic millimeters of tissue should produce more than 1 μg of DNA 99% of the time. We conclude that 2 tissue cores, each 1 cm long and obtained with an 18-gauge needle, will almost always provide enough DNA for complex multigene assays, and our methodology may be readily extrapolated to individual institutional practice.
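The arithmetic behind the two-core recommendation can be checked with a short calculation. The needle bore below (~0.8 mm) is an assumed inner diameter for an 18-gauge needle; the 9 mm³ threshold follows the abstract.

```python
import math

MIN_VOLUME_MM3 = 9.0  # volume expected to yield >= 1 ug of DNA 99% of the time

def core_volume_mm3(length_mm: float, diameter_mm: float = 0.8) -> float:
    """Cylindrical core volume; the 0.8 mm bore is an assumption."""
    return math.pi * (diameter_mm / 2) ** 2 * length_mm

two_cores = 2 * core_volume_mm3(length_mm=10.0)
print(f"two 1 cm cores ~ {two_cores:.1f} mm^3 -> "
      f"{'sufficient' if two_cores >= MIN_VOLUME_MM3 else 'insufficient'}")
```

Two such cores come to roughly 10 mm³, just above the 9 mm³ threshold, which is consistent with the authors' guidance.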
Madej, Katarzyna; Janiga, Katarzyna; Piekoszewski, Wojciech
2018-01-01
Isolation conditions for five pesticides (metazachlor, tebuconazole, λ-cyhalothrin, chlorpyrifos, and deltamethrin) from rape oil samples were examined using the dispersive solid-phase graphene extraction technique. To determine the optimal extraction conditions, a number of experimental factors (amount of graphene, amount of salt, type and volume of the desorbing solvent, desorption time with and without sonication energy, and temperature during desorption) were studied. The compounds of interest were separated and detected by HPLC-UV employing a Kinetex XB-C18 column and a mobile phase consisting of acetonitrile and water flowing in a gradient mode. The optimized extraction conditions were: amount of graphene 15 mg, desorbing solvent (acetonitrile) 5 mL, desorption time 10 min at 40°C, and amount of NaCl 1 g. The detection limit was 62.5 ng·g⁻¹ for metazachlor, tebuconazole, λ-cyhalothrin, and chlorpyrifos, and 500 ng·g⁻¹ for deltamethrin. The obtained results lead to the conclusion that graphene may be successfully used for the isolation of the five pesticides from rape oil. However, their determination at the low concentration levels at which they occur in real oil samples requires appropriately sensitive analytical methods, as well as a more suitable graphene form (e.g., magnetically modified graphene).
Assays for the activities of polyamine biosynthetic enzymes using intact tissues
Rakesh Minocha; Stephanie Long; Hisae Maki; Subhash C. Minocha
1999-01-01
Traditionally, most enzyme assays utilize homogenized cell extracts with or without dialysis. Homogenization and centrifugation of large numbers of samples for screening of mutants and transgenic cell lines is quite cumbersome and generally requires sufficiently large amounts (hundreds of milligrams) of tissue. However, in situations where the tissue is available in...
Using sampling theory as the basis for a conceptual data model
Fred C. Martin; Tonya Baggett; Tom Wolfe
2000-01-01
Greater demands on forest resources require that larger amounts of information be readily available to decisionmakers. To provide more information faster, databases must be developed that are more comprehensive and easier to use. Data modeling is a process for building more complete and flexible databases by emphasizing fundamental relationships over existing or...
Guidelines for Planning the School Breakfast Program. Revised.
ERIC Educational Resources Information Center
Georgia State Dept. of Education, Atlanta. Office of School Administrative Services.
Some of the factors considered in these guidelines include basic nutritional requirements, food component minimums, food variety, and amounts of food served in elementary and secondary school breakfast programs. Suggestions are made for serving foods that will appeal to young people. Samples of hot and cold menus are provided. Forms for evaluating…
Bjerneld, Erik J; Johansson, Johan D; Laurin, Ylva; Hagner-McWhirter, Åsa; Rönn, Ola; Karlsson, Robert
2015-09-01
A pre-labeling protocol based on Cy5 N-hydroxysuccinimide (NHS) ester labeling of proteins has been developed for one-dimensional sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) analysis. We show that a fixed amount of sulfonated Cy5 can be used in the labeling reaction to label proteins over a broad concentration range, more than three orders of magnitude. The optimal amount of Cy5 was found to be 50 to 250 pmol in 20 μl using a Tris-HCl labeling buffer at pH 8.7. Labeling protein samples with a fixed amount of dye in this range balances the requirements of sub-nanogram detection sensitivity and low dye-to-protein (D/P) ratios for SDS-PAGE. Simulations of the labeling reaction reproduced experimental observations of both labeling kinetics and D/P ratios. Two-dimensional electrophoresis was used to examine the labeling of proteins in a cell lysate using both sulfonated and non-sulfonated Cy5. For both types of Cy5, we observed efficient labeling across a broad range of molecular weights and isoelectric points.
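The labeling reaction can be reasoned about with a simple competition model: the NHS ester either couples to protein amines or hydrolyses. The sketch below integrates that competition with assumed rate constants (they are not fitted values from this work) and reproduces the qualitative trend that a fixed dye amount gives concentration-dependent D/P ratios across a wide protein range.

```python
# NHS-ester labeling vs. hydrolysis, integrated with a simple Euler loop.
# Rate constants are assumptions for illustration, not fitted values.
def simulate_dp_ratio(dye_uM: float, protein_uM: float,
                      k_label: float = 1e-3,   # 1/(uM*s), assumed
                      k_hyd: float = 1e-3,     # 1/s, assumed
                      dt: float = 0.1, t_end: float = 3600.0) -> float:
    dye, conjugated, t = dye_uM, 0.0, 0.0
    while t < t_end and dye > 1e-9:
        labeled = k_label * dye * protein_uM * dt  # free amines ~ constant
        dye -= labeled + k_hyd * dye * dt
        conjugated += labeled
        t += dt
    return conjugated / protein_uM               # mean dyes per protein (D/P)

for protein in (0.1, 1.0, 10.0, 100.0):          # spanning three decades
    print(protein, round(simulate_dp_ratio(dye_uM=10.0, protein_uM=protein), 2))
```

As in the experiments, dilute samples end up with higher D/P ratios, which is why the fixed dye amount must be chosen to balance detection sensitivity against overlabeling.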
Fast emulation of track reconstruction in the CMS simulation
NASA Astrophysics Data System (ADS)
Komm, Matthias; CMS Collaboration
2017-10-01
Simulated samples of various physics processes are a key ingredient of analyses that unlock the physics behind LHC collision data. Samples with ever larger statistics are required to keep up with the increasing amounts of recorded data. During sample generation, significant computing time is spent on the reconstruction of charged-particle tracks from energy deposits, which additionally scales with the pileup conditions. In CMS, the FastSimulation package is being developed to provide a fast alternative to the standard simulation and reconstruction workflow. It employs various techniques to emulate track reconstruction effects in particle collision events. Several analysis groups in CMS are utilizing the package, in particular those requiring many samples to scan the parameter space of physics models (e.g. SUSY) or to estimate systematic uncertainties. The strategies for and recent developments in this emulation are presented, including a novel, flexible implementation of tracking emulation that retains a sufficient, tuneable accuracy.
Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel
2018-01-01
Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.
Neural networks for the generation of sea bed models using airborne lidar bathymetry data
NASA Astrophysics Data System (ADS)
Kogut, Tomasz; Niemeyer, Joachim; Bujakiewicz, Aleksandra
2016-06-01
Various sectors of the economy such as transport and renewable energy have shown great interest in sea bed models. The required measurements are usually carried out by ship-based echo sounding, but this method is quite expensive. A relatively new alternative is data obtained by airborne lidar bathymetry. This study investigates the accuracy of these data, which were obtained in the context of the project `Investigation on the use of airborne laser bathymetry in hydrographic surveying'. A comparison to multi-beam echo sounding data shows only small differences in the depth values of the data sets. The IHO requirements for total horizontal and vertical uncertainty are met by the laser data. The second goal of this paper is to compare three spatial interpolation methods, namely Inverse Distance Weighting (IDW), Delaunay Triangulation (TIN), and supervised Artificial Neural Networks (ANN), for the generation of sea bed models. The focus of our investigation is on the number of required sampling points, which is analyzed by manually reducing the data sets. We found that the three techniques perform similarly, almost independently of the amount of sampling data in our test area. However, ANNs are more stable when using a very small subset of points.
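As a point of reference for the interpolation comparison, inverse distance weighting reduces to a few lines; the sketch below is a generic implementation (NumPy only; the power parameter p = 2 and the use of all soundings per query point are illustrative choices, not those of the study):

    import numpy as np

    def idw_interpolate(xy_known, z_known, xy_query, p=2.0, eps=1e-12):
        """Inverse Distance Weighting: weight each sounding by 1/d^p."""
        z_est = np.empty(len(xy_query))
        for i, q in enumerate(xy_query):
            d = np.linalg.norm(xy_known - q, axis=1)
            if d.min() < eps:                  # query coincides with a sounding
                z_est[i] = z_known[d.argmin()]
                continue
            w = 1.0 / d**p
            z_est[i] = np.dot(w, z_known) / w.sum()
        return z_est

TIN interpolation and the supervised ANN would replace the weighting step with barycentric interpolation over a Delaunay triangulation and a trained regression model, respectively.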
Kirgiz, Irina A; Calloway, Cassandra
2017-04-01
Tape lifting and FTA paper scraping methods were directly compared to traditional double swabbing for collecting touch DNA from car steering wheels (n = 70 cars). Touch DNA was collected from the left or right side of each steering wheel (randomized) using two sterile cotton swabs, while the other side was sampled using water-soluble tape or FTA paper cards. DNA was extracted and quantified in duplicate using qPCR. Quantifiable amounts of DNA were detected for 100% of the samples (n = 140) collected, independent of the method. However, the DNA collection yield was dependent on the collection method. A statistically significant difference in DNA yield was observed between the FTA scraping and double swabbing methods (p = 0.0051), with FTA paper collecting a two-fold higher amount. Statistical analysis showed no significant difference in DNA yields between the double swabbing and tape lifting techniques (p = 0.21). Based on the DNA concentration required for 1 ng input, 47% of the samples collected using FTA paper would be expected to yield a short tandem repeat (STR) profile, compared to 30% and 23% using double swabbing or tape, respectively. Further, 55% and 77% of the samples collected using double swabbing or tape, respectively, did not yield a high enough DNA concentration for the 0.5 ng of DNA input recommended for conventional STR kits and would be expected to result in a partial or no profile, compared to 35% of the samples collected using FTA paper. STR analysis was conducted for a subset of the more concentrated samples to confirm that the DNA collected from the steering wheel was from the driver. Thirty-two samples were selected with DNA amounts of at least 1 ng total DNA (100 pg/μl when concentrated, if required). A mixed STR profile was observed for 26 samples (88%), and the last driver was the major DNA contributor for 29 samples (94%). For one sample, the last driver was the minor DNA contributor. A full STR profile of the last driver was observed for 21 samples (69%) and a partial profile was observed for nine samples (25%); STR analysis failed for two samples collected using tape (6%). In conclusion, we show that the FTA paper scraping method has the potential to collect higher DNA yields from touch DNA evidence deposited on the non-porous surfaces often encountered in criminal cases compared to conventional methods. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Modified electrokinetic sample injection method in chromatography and electrophoresis analysis
Davidson, J. Courtney; Balch, Joseph W.
2001-01-01
A sample injection method for horizontally configured multiple chromatography or electrophoresis units, each containing a number of separation/analysis channels, that enables efficient introduction of analyte samples. This loading method, when used in conjunction with horizontal microchannels, allows much reduced sample volumes and provides a means of sample stacking to greatly increase the concentration of the sample. This reduction in the amount of sample can lead to great cost savings in sample preparation, particularly in massively parallel applications such as DNA sequencing. The essence of this method is in the preparation of the input of the separation channel, the physical sample introduction, and the subsequent removal of excess material. By this method, sample volumes of 100 nanoliters to 2 microliters have been used successfully, compared to the typical 5 microliters of sample required by the prior separation/analysis method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schad, Martina; Lipton, Mary S.; Giavalisco, Patrick
2005-07-14
Laser microdissection (LM) allows the collection of homogeneous tissue- and cell-specific plant samples. The employment of this technique with subsequent protein analysis has thus far not been reported for plant tissues, probably due to the difficulties associated with maintaining a reasonable cellular morphology while, in parallel, allowing efficient protein extraction from tissue samples. The relatively large sample amount needed for successful proteome analysis is an additional issue that complicates protein profiling on a tissue- or even cell-specific level. In contrast to transcript profiling, which can be performed from very small sample amounts due to efficient amplification strategies, there is as yet no amplification procedure available for proteins. In the current study, we compared different tissue preparation techniques prior to LM/laser pressure catapulting (LMPC) with respect to their suitability for protein retrieval. Cryosectioning was identified as the best compromise between tissue morphology and effective protein extraction. After collection of vascular bundles from Arabidopsis thaliana stem tissue by LMPC, proteins were extracted and subjected to protein analysis, either by classical two-dimensional gel electrophoresis (2-DE) or by high-efficiency liquid chromatography (LC) in conjunction with tandem mass spectrometry (MS/MS). Our results demonstrate that both methods can be used with LMPC-collected plant material. However, because of the significantly lower sample amount required for LC-MS/MS than for 2-DE, the combination of LMPC and LC-MS/MS has a higher potential to enable comprehensive proteome analysis of specific plant tissues.
Michael, Costas; Bayona, Josep Maria; Lambropoulou, Dimitra; Agüera, Ana; Fatta-Kassinos, Despo
2017-06-01
Occurrence and effects of contaminants of emerging concern pose a special challenge to environmental scientists. The investigation of these effects requires reliable, valid, and comparable analytical data. To this effect, two critical aspects concerning the limitations of the produced analytical data are raised herein. The first relates to the inherent difficulty of analyzing environmental samples when the form(s) in which the contaminant is present in the sample is, in many cases, unknown. Thus, the produced analytical data can only refer to the amount of the free contaminant, ignoring the amounts present in other forms, e.g., chelated or conjugated forms. The other important aspect concerns the way in which the spiking procedure is generally performed to determine the recovery of the analytical method. Spiking environmental samples, in particular solid samples, with standard solution followed by immediate extraction, as is the common practice, can lead to an overestimation of the recovery. This is because no time is given to the system to establish possible equilibria between the solid matter (inorganic and/or organic) and the contaminant. Therefore, the spiking procedure needs to be reconsidered by including a study of the extractable amount of the contaminant versus the time elapsed between spiking and extraction of the sample. Such a study can become an element of the validation package of the method.
Scheibe, Andrea; Krantz, Lars; Gleixner, Gerd
2012-01-30
We assessed the accuracy and utility of a modified high-performance liquid chromatography/isotope ratio mass spectrometry (HPLC/IRMS) system for measuring the amount and stable carbon isotope signature of dissolved organic matter (DOM) <1 µm. Using a range of standard compounds as well as soil solutions sampled in the field, we compared the results of the HPLC/IRMS analysis with those from other methods for determining carbon and (13)C content. The conversion efficiency of the in-line wet oxidation of the HPLC/IRMS averaged 99.3% for a range of standard compounds. The agreement between HPLC/IRMS and other methods in the amount and isotopic signature of both standard compounds and soil water samples was excellent. For DOM concentrations below 10 mg C L(-1) (250 ng C total), pre-concentration or large-volume injections are recommended in order to prevent background interferences. We were able to detect large differences in the (13)C signatures of soil solution DOM sampled at 10 cm depth in plots with either C3 or C4 vegetation and in two different parent materials. These measurements also revealed changes in the (13)C signature that indicate rapid loss of plant-derived C with depth. Overall, the modified HPLC/IRMS system has the advantages of rapid sample preparation, small required sample volume and high sample throughput, while showing performance comparable with other methods for measuring the amount and isotopic signature of DOM. Copyright © 2011 John Wiley & Sons, Ltd.
1998-09-25
The Food and Drug Administration (FDA) is proposing to amend its regulations regarding the collection of twice the quantity of food, drug, or cosmetic estimated to be sufficient for analysis. This action increases the dollar amount that FDA will consider to determine whether to routinely collect a reserve sample of a food, drug, or cosmetic product in addition to the quantity sufficient for analysis. Experience has demonstrated that the current dollar amount does not adequately cover the cost of most quantities sufficient for analysis plus reserve samples. This proposed rule is a companion to the direct final rule published elsewhere in this issue of the Federal Register. This action is part of FDA's continuing effort to achieve the objectives of the President's "Reinventing Government" initiative, and it is intended to reduce the burden of unnecessary regulations on food, drugs, and cosmetics without diminishing the protection of the public health.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-04
... Means of Land Application, Landfill, and Surface Disposal in the EPA Region 8 AGENCY: Environmental..., treat, and/or use/dispose of sewage sludge by means of land application, landfill, and surface disposal... landfill. The purpose is to require agronomic soil sampling for calculating the proper amount of sewage...
Singh, Pramila; DeMarini, David M; Dick, Colin A J; Tabor, Dennis G; Ryan, Jeff V; Linak, William P; Kobayashi, Takahiro; Gilmour, M Ian
2004-06-01
Two samples of diesel exhaust particles (DEPs) predominate in health effects research: an automobile-derived DEP (A-DEP) sample and the National Institute of Standards and Technology standard reference material (SRM 2975) generated from a forklift engine. A-DEPs have been tested extensively for their effects on pulmonary inflammation and exacerbation of allergic asthma-like responses. In contrast, SRM 2975 has been tested thoroughly for its genotoxicity. In the present study, we combined physical and chemical analyses of both DEP samples with pulmonary toxicity testing in CD-1 mice to compare the two materials and to make associations between their physicochemical properties and their biologic effects. A-DEPs had more than 10 times the amount of extractable organic material and less than one-sixth the amount of elemental carbon compared with SRM 2975. Aspiration of 100 µg of either DEP sample in saline produced mild acute lung injury; however, A-DEPs induced macrophage influx and activation, whereas SRM 2975 enhanced polymorphonuclear cell inflammation. A-DEPs stimulated an increase in interleukin-6 (IL-6), tumor necrosis factor alpha, macrophage inhibitory protein-2, and the TH2 cytokine IL-5, whereas SRM 2975 only induced significant levels of IL-6. Fractionated organic extracts of the same quantity of DEPs (100 µg) did not have a discernible effect on lung responses and will require further study. The disparate results obtained highlight the need for chemical, physical, and source characterization of particle samples under investigation. Multidisciplinary toxicity testing of diesel emissions derived from a variety of generation and collection conditions is required to meaningfully assess the health hazards associated with exposures to DEPs. Key words: automobile, diesel exhaust particles, forklift, mice, pulmonary toxicity, SRM 2975. PMID:15175167
NanoDrop Microvolume Quantitation of Nucleic Acids
Desjardins, Philippe; Conklin, Deborah
2010-01-01
Biomolecular assays are continually being developed that use progressively smaller amounts of material, often precluding the use of conventional cuvette-based instruments for nucleic acid quantitation in favor of those that can perform microvolume quantitation. The NanoDrop microvolume sample retention system (Thermo Scientific NanoDrop Products) functions by combining fiber optic technology and natural surface tension properties to capture and retain minute amounts of sample independent of traditional containment apparatus such as cuvettes or capillaries. Furthermore, the system employs shorter path lengths, which result in a broad range of nucleic acid concentration measurements, essentially eliminating the need to perform dilutions. Reducing the volume of sample required for spectroscopic analysis also facilitates the inclusion of additional quality control steps throughout many molecular workflows, increasing efficiency and ultimately leading to greater confidence in downstream results. The need for high-sensitivity fluorescent analysis of limited sample mass has also emerged with recent experimental advances. Using the same microvolume sample retention technology, fluorescent measurements may be performed with 2 μL of material, allowing fluorescent assay volume requirements to be significantly reduced. Such microreactions of 10 μL or less are now possible using a dedicated microvolume fluorospectrometer. Two microvolume nucleic acid quantitation protocols will be demonstrated that use integrated sample retention systems as practical alternatives to traditional cuvette-based protocols. First, a direct A260 absorbance method using a microvolume spectrophotometer is described. This is followed by a demonstration of a fluorescence-based method that enables reduced-volume fluorescence reactions with a microvolume fluorospectrometer. These novel techniques enable the assessment of nucleic acid concentrations ranging from 1 pg/μL to 15,000 ng/μL with minimal consumption of sample. PMID:21189466
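The short-path-length absorbance measurement is plain Beer-Lambert arithmetic; the sketch below shows the conversion using the conventional factor of 50 ng·cm/µL for double-stranded DNA normalized to a 10 mm path (factors of roughly 40 and 33 apply to RNA and single-stranded DNA):

    def dsdna_conc_ng_per_ul(a260, path_mm, factor=50.0):
        """Convert an A260 reading at a short path length to ng/uL.

        factor: 50 for dsDNA, ~40 for RNA, ~33 for ssDNA (ng*cm/uL);
        the reading is normalized to the conventional 10 mm path.
        """
        a260_10mm = a260 * (10.0 / path_mm)
        return a260_10mm * factor

    # e.g. an A260 of 0.5 over a 1 mm path corresponds to 250 ng/uL dsDNA
    print(dsdna_conc_ng_per_ul(0.5, path_mm=1.0))

The shorter the path, the higher the concentration that stays on scale, which is why the system can span such a broad range without dilution.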
Shi, C; Gao, S; Gun, S
1997-06-01
The sample is digested with 6% NaOH solution, and a 50 µl aliquot is used for protein content analysis by the Coomassie Brilliant Blue G250 method; the remainder is diluted with an equal volume of 0.4% lanthanum-EDTA solution, and its calcium, magnesium, and potassium contents are determined by AAS with a quick-pulsed nebulization technique. When a self-made micro-sampling device is used, only 20 µl of sample volume is needed, roughly 1/10 to 1/20 of the sample volume required for conventional determination. Sensitivity, precision, and rate of recovery agree well with those of the regular wet ashing method.
Stability and bias of classification rates in biological applications of discriminant analysis
Williams, B.K.; Titus, K.; Hines, J.E.
1990-01-01
We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few of them mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases.
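The bias mechanism described is easy to reproduce in miniature; the sketch below (our construction, with scikit-learn's LDA, two Gaussian groups, and illustrative parameter values) estimates how much the resubstitution classification rate exceeds the rate on independent test data:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def optimism(n_per_group=10, n_vars=5, sep=1.0, reps=200, seed=0):
        """Mean resubstitution minus holdout accuracy over simulated data."""
        rng = np.random.default_rng(seed)
        mu1 = np.zeros(n_vars)
        mu2 = mu1 + sep / np.sqrt(n_vars)   # controls group overlap
        gaps = []
        for _ in range(reps):
            Xtr = np.vstack([rng.normal(mu1, 1, (n_per_group, n_vars)),
                             rng.normal(mu2, 1, (n_per_group, n_vars))])
            Xte = np.vstack([rng.normal(mu1, 1, (1000, n_vars)),
                             rng.normal(mu2, 1, (1000, n_vars))])
            ytr = np.repeat([0, 1], n_per_group)
            yte = np.repeat([0, 1], 1000)
            lda = LinearDiscriminantAnalysis().fit(Xtr, ytr)
            gaps.append(lda.score(Xtr, ytr) - lda.score(Xte, yte))
        return float(np.mean(gaps))

    print(optimism())   # positive values indicate optimistic bias

Shrinking sep (more overlap) or n_per_group (fewer samples) inflates the gap, matching the pattern reported above.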
Toward accelerating landslide mapping with interactive machine learning techniques
NASA Astrophysics Data System (ADS)
Stumpf, André; Lachiche, Nicolas; Malet, Jean-Philippe; Kerle, Norman; Puissant, Anne
2013-04-01
Despite important advances in the development of more automated methods for landslide mapping from optical remote sensing images, the elaboration of inventory maps after major triggering events still remains a tedious task. Image classification with expert-defined rules typically still requires significant manual labour for the elaboration and adaptation of rule sets for each particular case. Machine learning algorithms, by contrast, have the ability to learn and identify complex image patterns from labelled examples, but may require relatively large amounts of training data. In order to reduce the amount of required training data, active learning has evolved as a key concept to guide the sampling for applications such as document classification, genetics and remote sensing. The general underlying idea of most active learning approaches is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and/or the data structure to iteratively select the most valuable samples that should be labelled by the user and added to the training set. With relatively few queries and labelled samples, an active learning strategy should ideally yield at least the same accuracy as an equivalent classifier trained with many randomly selected samples. Our study was dedicated to the development of an active learning approach for landslide mapping from VHR remote sensing images, with special consideration of the spatial distribution of the samples. The developed approach is a region-based query heuristic that guides the user's attention towards a few compact spatial batches rather than distributed points, resulting in time savings of 50% or more compared to standard active learning techniques. The approach was tested with multi-temporal and multi-sensor satellite images capturing recent large-scale triggering events in Brazil and China, and demonstrated balanced user's and producer's accuracies between 74% and 80%. The assessment also included an experimental evaluation of the uncertainties of manual mappings from multiple experts and demonstrated strong relationships between the uncertainty of the experts and the machine learning model.
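The loop underlying such approaches is generic uncertainty sampling; the pointwise sketch below uses logistic regression as a stand-in model (our illustration; the region-based batch heuristic described above adds spatial grouping of queries on top of a loop like this):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def uncertainty_sampling(X, y_oracle, n_init=10, n_queries=50, seed=0):
        """Iteratively query the label of the least confident sample.

        Assumes the random seed set contains both classes; y_oracle plays
        the role of the human expert providing labels on request.
        """
        rng = np.random.default_rng(seed)
        labeled = list(rng.choice(len(X), n_init, replace=False))
        pool = [i for i in range(len(X)) if i not in labeled]
        model = LogisticRegression(max_iter=1000)
        for _ in range(n_queries):
            model.fit(X[labeled], y_oracle[labeled])
            proba = model.predict_proba(X[pool])[:, 1]
            margin = np.abs(proba - 0.5)          # small margin = uncertain
            labeled.append(pool.pop(int(margin.argmin())))
        return model, labeled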
Mirhashemi, AmirHossein; Jahangiri, Sahar; Kharrazifard, MohammadJavad
2018-02-05
Corrosion resistance is an important requirement for orthodontic appliances. Nickel and chromium may be released from orthodontic wires and can cause allergic reactions and cytotoxicity when patients use various mouthwashes to whiten their teeth. Our study aimed to assess the release of nickel and chromium ions from nickel titanium (NiTi) and stainless steel (SS) orthodontic wires following the use of four common mouthwashes available on the market. This in vitro experimental study was conducted on 120 orthodontic appliances, each representing one maxillary quadrant and including five brackets, one band, and half of the required length of SS or NiTi wire. The samples were immersed in Oral B, Oral B 3D White Luxe, Listerine, or Listerine Advance White for 1, 6, 24, and 168 h. Samples immersed in distilled water served as the control group. Atomic absorption spectroscopy was used to quantify the amount of released ions. Nickel ions were released from both wires at all time-points; the highest amount was in Listerine and the lowest in Oral B, with the remaining two solutions falling between these extremes. The release of chromium from the SS wire followed the same pattern as that of nickel; however, the release trend for the NiTi wires was not uniform. Listerine caused the highest release of ions, while Listerine Advance White, Oral B 3D White Luxe, and distilled water were equivalent in terms of ion release. Oral B showed the lowest amount of ion release.
7 CFR 58.938 - Physical requirements and microbiological limits for sweetened condensed milk.
Code of Federal Regulations, 2013 CFR
2013-01-01
... retain a definite form. (e) Microbiological limits. (1) Coliforms, less than 10 per gram; (2) yeasts, less than 5 per gram; (3) molds, less than 5 per gram; (4) total plate count, less than 1,000 per gram... amount of sediment retained on a lintine disc after a sample composed of 225 grams of product dissolved...
40 CFR 761.353 - Second level of sample selection.
Code of Federal Regulations, 2012 CFR
2012-07-01
... reduction is to limit the amount of time required to manually cut up larger particles of the waste to pass through a 9.5 millimeter (mm) screen. (a) Selecting a portion of the subsample for particle size reduction... table to select one of these quarters. (b) Reduction of the particle size by the use of a 9.5 mm screen...
40 CFR 761.353 - Second level of sample selection.
Code of Federal Regulations, 2014 CFR
2014-07-01
... reduction is to limit the amount of time required to manually cut up larger particles of the waste to pass through a 9.5 millimeter (mm) screen. (a) Selecting a portion of the subsample for particle size reduction... table to select one of these quarters. (b) Reduction of the particle size by the use of a 9.5 mm screen...
40 CFR 761.353 - Second level of sample selection.
Code of Federal Regulations, 2013 CFR
2013-07-01
... reduction is to limit the amount of time required to manually cut up larger particles of the waste to pass through a 9.5 millimeter (mm) screen. (a) Selecting a portion of the subsample for particle size reduction... table to select one of these quarters. (b) Reduction of the particle size by the use of a 9.5 mm screen...
Accelerated spike resampling for accurate multiple testing controls.
Harrison, Matthew T
2013-02-01
Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples for testing differences across conditions using permutation tests and for testing pairwise synchrony and precise lagged-correlation between many simultaneously recorded spike trains using interval jitter.
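For context, the baseline computation being accelerated is an ordinary permutation test; a minimal sketch for a difference in mean firing rate across two conditions is given below (the importance-sampling acceleration itself is the paper's contribution and is not reproduced here):

    import numpy as np

    def permutation_pvalue(rates_a, rates_b, n_perm=10000, seed=0):
        """Two-sided permutation test on the difference of mean spike rates."""
        rng = np.random.default_rng(seed)
        pooled = np.concatenate([rates_a, rates_b])
        observed = abs(np.mean(rates_a) - np.mean(rates_b))
        n_a, hits = len(rates_a), 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            diff = abs(pooled[:n_a].mean() - pooled[n_a:].mean())
            hits += diff >= observed
        return (hits + 1) / (n_perm + 1)

Correcting for thousands of simultaneous tests pushes the interesting p-values far into the tail, which is where naive resampling needs prohibitively many permutations and importance sampling pays off.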
Mousseau, F; Vitorazi, L; Herrmann, L; Mornet, S; Berret, J-F
2016-08-01
The electrostatic charge density of particles is of paramount importance for the control of dispersion stability. Conventional methods use potentiometric, conductometric or turbidity titration but require large amounts of sample. Here we report a simple and cost-effective method called polyelectrolyte-assisted charge titration spectrometry or PACTS. The technique takes advantage of the propensity of oppositely charged polymers and particles to assemble upon mixing, leading to aggregation or phase separation. The mixed dispersions exhibit a maximum in light scattering as a function of the volumetric ratio X, and the peak position XMax is linked to the particle charge density according to σ ∼ D0·XMax, where D0 is the particle diameter. PACTS is successfully applied to organic latex, aluminum and silicon oxide particles of positive or negative charge using poly(diallyldimethylammonium chloride) and poly(sodium 4-styrenesulfonate). The protocol is also optimized with respect to important parameters such as pH and concentration, and to the polyelectrolyte molecular weight. The advantages of the PACTS technique are that it requires minute amounts of sample and that it is suitable for a broad variety of charged nano-objects. Copyright © 2016 Elsevier Inc. All rights reserved.
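Read literally, the scaling σ ∼ D0·XMax turns a titration curve into a one-line estimate; in the sketch below the prefactor k, which absorbs the polyelectrolyte charge and unit conversions, is a calibration placeholder of our own, not a constant from the paper:

    import numpy as np

    def charge_density(X, scattering, diameter, k=1.0):
        """Estimate particle charge density from a PACTS titration curve.

        X: volumetric mixing ratios; scattering: intensity at each X;
        k: calibration prefactor (placeholder), per sigma ~ D0 * X_max.
        """
        x_max = X[np.argmax(scattering)]    # peak of the scattering curve
        return k * diameter * x_max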
Yin, Hong-Rui; Zhang, Lei; Xie, Li-Qi; Huang, Li-Yong; Xu, Ye; Cai, San-Jun; Yang, Peng-Yuan; Lu, Hao-Jie
2013-09-06
Novel biomarker verification assays are urgently required to improve the efficiency of biomarker development. Benefitting from lower development costs, multiple reaction monitoring (MRM) has been used for biomarker verification as an alternative to immunoassay. However, in general MRM analysis, only one sample can be quantified in a single experiment, which restricts its application. Here, a Hyperplex-MRM quantification approach, which combined mTRAQ for absolute quantification and iTRAQ for relative quantification, was developed to increase the throughput of biomarker verification. In this strategy, equal amounts of internal standard peptides were labeled with mTRAQ reagents Δ0 and Δ8, respectively, as double references, while 4-plex iTRAQ reagents were used to label four different samples as an alternative to mTRAQ Δ4. From the MRM trace and MS/MS spectrum, the total amounts and relative ratios of target proteins/peptides in the four samples could be acquired simultaneously. Accordingly, absolute amounts of target proteins/peptides in four different samples could be obtained in a single run. In addition, double references were used to increase the reliability of the quantification results. Using this approach, three biomarker candidates, adenosylhomocysteinase (AHCY), cathepsin D (CTSD), and lysozyme C (LYZ), were successfully quantified in colorectal cancer (CRC) tissue specimens of different stages with high accuracy, sensitivity, and reproducibility. To summarize, we demonstrated a promising quantification method for high-throughput verification of biomarker candidates.
Krivosheeva, Olga; Dedinaite, Andra; Claesson, Per M
2013-10-15
Mussel adhesive proteins are of great interest in many applications due to their ability to bind strongly to many types of surfaces under water. Effective use of such proteins, for instance the Mytilus edulis foot protein Mefp-1, for surface modification requires achievement of a large adsorbed amount and formation of a layer that is resistant towards desorption under changing conditions. In this work we compare the adsorbed amount and layer properties obtained by using a sample containing small Mefp-1 aggregates with those obtained by using a non-aggregated sample. We find that the use of the sample containing small aggregates leads to a higher adsorbed amount, a larger layer thickness and a similar water content compared to what can be achieved with a non-aggregated sample. The layer formed by the aggregated Mefp-1 was, after removal of the protein from bulk solution, exposed to aqueous solutions with high ionic strength (up to 1 M NaCl) and to solutions with low pH in order to reduce the electrostatic surface affinity. It was found that the preadsorbed Mefp-1 layer under all conditions explored was significantly more resistant towards desorption than a layer built by a synthetic cationic polyelectrolyte with similar charge density. These results suggest that the non-electrostatic surface affinity of Mefp-1 is larger than that of the cationic polyelectrolyte. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
Automated blood-sample handling in the clinical laboratory.
Godolphin, W; Bodtker, K; Uyeno, D; Goh, L O
1990-09-01
The only significant advances in blood-taking in 25 years have been the disposable needle and evacuated blood-drawing tube. With the exception of a few isolated barcode experiments, most sample-tracking is performed through handwritten or computer-printed labels. Attempts to reduce the hazards of centrifugation have resulted in air-tight lids or chambers, the use of which is time-consuming and cumbersome. Most commonly used clinical analyzers require serum or plasma, distributed into specialized containers, unique to that analyzer. Aliquots for different tests are prepared by handpouring or pipetting. Moderate to large clinical laboratories perform so many different tests that even multi-analyzers performing multiple analyses on a single sample may account for only a portion of all tests ordered for a patient. Thus several aliquots of each specimen are usually required. We have developed a proprietary serial centrifuge and blood-collection tube suitable for incorporation into an automated or robotic sample-handling system. The system we propose is (a) safe--avoids or prevents biological danger to the many "handlers" of blood; (b) small--minimizes the amount of sample taken and space required to adapt to the needs of satellite and mobile testing, and direct interfacing with analyzers; (c) serial--permits each sample to be treated according to its own "merits," optimizes throughput, and facilitates flexible automation; and (d) smart--ensures quality results through monitoring and intelligent control of patient identification, sample characteristics, and separation process.
Evaluation of Two Matrices for Long-Term, Ambient Storage of Bacterial DNA.
Miernyk, Karen M; DeByle, Carolynn K; Rudolph, Karen M
2017-12-01
Culture-independent molecular analyses allow researchers to identify diverse microorganisms. This approach requires microbiological DNA repositories. The standard for DNA storage is liquid nitrogen or ultralow freezers. These use large amounts of space, are costly to operate, and could fail. Room temperature DNA storage is a viable alternative. In this study, we investigated storage of bacterial DNA using two ambient storage matrices, Biomatrica DNAstable® Plus and GenTegra® DNA. We created crude and clean DNA extracts from five Streptococcus pneumoniae isolates. Extracts were stored at -30°C (our usual DNA storage temperature), 25°C (within the range of temperatures recommended for the products), and 50°C (to simulate longer storage time). Samples were stored at -30°C with no product, and dried at 25°C and 50°C with no product, in Biomatrica DNAstable Plus, or in GenTegra DNA. We analyzed the samples after 0, 1, 2, 4, 8, 16, 32, and 64 weeks using the Nanodrop 1000 to determine the amount of DNA in each aliquot and by real-time PCR for the S. pneumoniae genes lytA and psaA. Using a 50°C storage temperature, we simulated 362 weeks of 25°C storage. The average amount of DNA in aliquots stored with a stabilizing matrix was 103%-116% of the original amount added to the tubes. This is similar to samples stored at -30°C (average 102%-121%). With one exception, samples stored with a stabilizing matrix had no change in lytA or psaA cycle threshold (Ct) value over time (Ct range ≤2.9), similar to samples stored at -30°C (Ct range ≤3.0). Samples stored at 25°C with no stabilizing matrix had Ct ranges of 2.2-5.1. DNAstable Plus and GenTegra DNA can protect dried bacterial DNA samples stored at room temperature with similar effectiveness as at -30°C. It is not effective to store bacterial DNA at room temperature without a stabilizing matrix.
Determination of molybdenum in soils and rocks: A geochemical semimicro field method
Ward, F.N.
1951-01-01
Reconnaissance work in geochemical prospecting requires a simple, rapid, and moderately accurate method for the determination of small amounts of molybdenum in soils and rocks. The useful range of the suggested procedure is from 1 to 32 p.p.m. of molybdenum, but the upper limit can be extended. Duplicate determinations on eight soil samples containing less than 10 p.p.m. of molybdenum agree within 1 p.p.m., and a comparison of field results with those obtained by a conventional laboratory procedure shows that the method is sufficiently accurate for use in geochemical prospecting. The time required for analysis and the quantities of reagents needed have been decreased to provide essentially a "test tube" method for the determination of molybdenum in soils and rocks. With a minimum amount of skill, one analyst can make 30 molybdenum determinations in an 8-hour day.
NASA Technical Reports Server (NTRS)
Nakamura-Messenger, K.; Connolly, H. C., Jr.; Lauretta, D. S.
2014-01-01
OSIRIS-REx is NASA's New Frontiers 3 sample return mission that will return at least 60 g of pristine surface material from near-Earth asteroid 101955 Bennu in September 2023. The scientific value of the sample increases enormously with the amount of knowledge captured about the geological context from which the sample is collected. The OSIRIS-REx spacecraft is highly maneuverable and capable of investigating the surface of Bennu at scales down to the sub-cm. The OSIRIS-REx instruments will characterize the overall surface geology, including spectral properties, microtexture, and geochemistry of the regolith at the sampling site in exquisite detail for up to 505 days after encountering Bennu in August 2018. The mission requires at the very minimum one acceptable location on the asteroid where a touch-and-go (TAG) sample collection maneuver can be successfully performed. Sample site selection requires that the following maps be produced: Safety, Deliverability, Sampleability, and finally Science Value. If areas on the surface are designated as safe, navigation can fly to them, and they have ingestible regolith, then the scientific value of one site over another will guide site selection.
Pseudotargeted MS Method for the Sensitive Analysis of Protein Phosphorylation in Protein Complexes.
Lyu, Jiawen; Wang, Yan; Mao, Jiawei; Yao, Yating; Wang, Shujuan; Zheng, Yong; Ye, Mingliang
2018-05-15
In this study, we present an enrichment-free approach for the sensitive analysis of protein phosphorylation in minute amounts of sample, such as purified protein complexes. This method takes advantage of the high sensitivity of parallel reaction monitoring (PRM). Specifically, low-confidence phosphopeptides identified from the data-dependent acquisition (DDA) data set were used to build a pseudotargeted list for PRM analysis, allowing the identification of additional phosphopeptides with high confidence. The development of this targeted approach is straightforward, as the same sample and the same LC system are used for the discovery and targeted analysis phases. No sample fractionation or enrichment was required for the discovery phase, which allows this method to analyze minute amounts of sample. We applied this pseudotargeted MS method to quantitatively examine phosphopeptides in affinity-purified endogenous Shc1 protein complexes at four temporal stages of EGF signaling and identified 82 phospho-sites. To our knowledge, this is the highest number of phospho-sites identified from the protein complexes. This pseudotargeted MS method is highly sensitive in the identification of low-abundance phosphopeptides and could be a powerful tool to study the phosphorylation-regulated assembly of protein complexes.
A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece
2017-05-12
Large simulation data requires a great deal of time and computational resources to compute, store, analyze, and visualize, and to run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using the example of regular sampling. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results. Furthermore, it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we did, and we conducted user studies in Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.
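A regular-sampling stage of the kind described can be prototyped in a few lines; the sketch below resamples scattered 2-D data onto a regular grid with SciPy (the resolution and linear interpolation are illustrative choices, not those of the pipeline):

    import numpy as np
    from scipy.interpolate import griddata

    def resample_regular(points, values, res=64):
        """Resample unstructured (scattered) data onto a regular 2-D grid."""
        xi = np.linspace(points[:, 0].min(), points[:, 0].max(), res)
        yi = np.linspace(points[:, 1].min(), points[:, 1].max(), res)
        XX, YY = np.meshgrid(xi, yi)
        return griddata(points, values, (XX, YY), method="linear")

The grid resolution res is the knob that trades storage and rendering cost against the fidelity probed in the user studies.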
Comparison of preprocessing methods and storage times for touch DNA samples
Dong, Hui; Wang, Jing; Zhang, Tao; Ge, Jian-ye; Dong, Ying-qiang; Sun, Qi-fan; Liu, Chao; Li, Cai-xia
2017-01-01
Aim: To select appropriate preprocessing methods for different substrates by comparing the effects of four different preprocessing methods on touch DNA samples, and to determine the effect of various storage times on the results of touch DNA sample analysis. Method: Hand touch DNA samples were used to investigate the detection and inspection results of DNA on different substrates. Four preprocessing methods, including the direct cutting method, stubbing procedure, double swab technique, and vacuum cleaner method, were used in this study. DNA was extracted from mock samples with the four preprocessing methods. The best preprocessing protocol determined from the study was further used to compare performance after various storage times. DNA extracted from all samples was quantified and amplified using standard procedures. Results: The amounts of DNA and the numbers of alleles detected on the porous substrates were greater than those on the non-porous substrates. The performance of the four preprocessing methods varied with different substrates. The direct cutting method displayed advantages for porous substrates, and the vacuum cleaner method was advantageous for non-porous substrates. No significant degradation trend was observed as the storage times increased. Conclusion: Different substrates require different preprocessing methods in order to obtain the highest DNA amount and allele number from touch DNA samples. This study provides a theoretical basis for explorations of touch DNA samples and may be used as a reference when dealing with touch DNA samples in casework. PMID:28252870
NASA Astrophysics Data System (ADS)
Cattaneo, Alessandro; Park, Gyuhae; Farrar, Charles; Mascareñas, David
2012-04-01
The acoustic emission (AE) phenomena generated by a rapid release in the internal stress of a material represent a promising technique for structural health monitoring (SHM) applications. AE events typically result in a discrete number of short-time, transient signals. The challenge associated with capturing these events using classical techniques is that very high sampling rates must be used over extended periods of time. The result is that a very large amount of data is collected to capture a phenomenon that rarely occurs. Furthermore, the high energy consumption associated with the required high sampling rates makes the implementation of high-endurance, low-power, embedded AE sensor nodes difficult to achieve. The relatively rare occurrence of AE events over long time scales implies that these measurements are inherently sparse in the spike domain. The sparse nature of AE measurements makes them an attractive candidate for the application of compressed sampling techniques. Collecting compressed measurements of sparse AE signals will relax the requirements on the sampling rate and memory demands. The focus of this work is to investigate the suitability of compressed sensing techniques for AE-based SHM. The work explores estimating AE signal statistics in the compressed domain for low-power classification applications. In the event that compressed classification finds an event of interest, ℓ1-norm minimization will be used to reconstruct the measurement for further analysis. The impact of structured noise on compressive measurements is specifically addressed. The suitability of a particular algorithm, called Justice Pursuit, to increase robustness to a small amount of arbitrary measurement corruption is investigated.
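The reconstruction step referred to is standard basis pursuit, min ||x||_1 subject to Ax = y; the sketch below uses the classical linear-programming reformulation with x split into nonnegative parts (our generic illustration, not the Justice Pursuit variant, which additionally models corrupted measurements):

    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(A, y):
        """Solve min ||x||_1 s.t. Ax = y via LP with x = u - v, u, v >= 0."""
        m, n = A.shape
        c = np.ones(2 * n)                  # sum(u) + sum(v) equals ||x||_1
        A_eq = np.hstack([A, -A])           # enforces A(u - v) = y
        res = linprog(c, A_eq=A_eq, b_eq=y,
                      bounds=[(0, None)] * (2 * n), method="highs")
        u, v = res.x[:n], res.x[n:]
        return u - v

    # e.g. recover a 3-spike signal of length 200 from 40 random projections
    rng = np.random.default_rng(1)
    n, m = 200, 40
    x_true = np.zeros(n); x_true[[20, 75, 160]] = [1.0, -0.5, 2.0]
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_hat = basis_pursuit(A, A @ x_true)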
Hanson, Erin K; Ballantyne, Jack
2016-01-01
In some cases of sexual assault, the victim may not report the assault for several days after the incident due to various factors. The ability to obtain an autosomal STR profile of the semen donor from a living victim rapidly diminishes as the post-coital interval is extended, due to the presence of only a small amount of male DNA amidst an overwhelming amount of female DNA. Previously, we have utilized various technological tools to overcome the limitations of male DNA profiling in extended-interval post-coital samples, including the use of Y-chromosome STR profiling, cervical sampling, and post-PCR purification, permitting the recovery of Y-STR profiles of the male DNA from samples collected 5-6 days after intercourse. Despite this success, the reproductive biology literature reports the presence of spermatozoa in the human cervix up to 7-10 days post-coitus. Therefore, novel and improved methods for the recovery of male profiles in extended-interval post-coital samples were required. Here, we describe enhanced strategies, including Y-chromosome-targeted pre-amplification and next-generation Y-STR amplification kits, that have resulted in the ability to obtain probative male profiles from samples collected 6-9 days after intercourse.
Radioimmunoassays for the serum thymic factor (FTS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, B.W.; Fok, K.F.; Incefy, G.S.
1987-01-06
This patent describes a radioimmunoassay of serum thymic factor (FTS) in a test sample, comprising: (a) contacting a first aliquot of the sample with anti-FTS antibody, with a known amount of FTS hormone standard and a known amount of radiolabeled FTS analogue; and (b) contacting a second aliquot of the sample with anti-FTS antibody and a known amount of radiolabeled FTS analogue; and (c) measuring the radioactivity of the antigen-antibody complex in each aliquot; and (d) calculating the amount of FTS in the test sample.
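As arithmetic, the two-aliquot design amounts to a standard-addition estimate. The toy calculation below assumes an idealized response directly proportional to analyte; a real radioimmunoassay follows a nonlinear competitive-binding curve in which bound tracer decreases with analyte, so this is an illustration of the two-aliquot logic only:

    def standard_addition_estimate(signal_sample, signal_spiked, spike_amount):
        """Toy two-point standard addition assuming signal = slope * amount.

        signal_sample: response of the aliquot with sample only
        signal_spiked: response of the aliquot with sample plus standard
        spike_amount:  known amount of FTS standard added
        """
        slope = (signal_spiked - signal_sample) / spike_amount
        return signal_sample / slope

    # e.g. signals of 120 and 200 with a 2 ng spike imply 3 ng in the aliquot
    print(standard_addition_estimate(120.0, 200.0, 2.0))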
Ramos, Enrique; Levinson, Benjamin T; Chasnoff, Sara; Hughes, Andrew; Young, Andrew L; Thornton, Katherine; Li, Allie; Vallania, Francesco L M; Province, Michael; Druley, Todd E
2012-12-06
Rare genetic variation in the human population is a major source of pathophysiological variability and has been implicated in a host of complex phenotypes and diseases. Finding disease-related genes harboring disparate functional rare variants requires sequencing of many individuals across many genomic regions and comparing against unaffected cohorts. However, despite persistent declines in sequencing costs, population-based rare variant detection across large genomic target regions remains cost prohibitive for most investigators. In addition, DNA samples are often precious and hybridization methods typically require large amounts of input DNA. Pooled sample DNA sequencing is a cost and time-efficient strategy for surveying populations of individuals for rare variants. We set out to 1) create a scalable, multiplexing method for custom capture with or without individual DNA indexing that was amenable to low amounts of input DNA and 2) expand the functionality of the SPLINTER algorithm for calling substitutions, insertions and deletions across either candidate genes or the entire exome by integrating the variant calling algorithm with the dynamic programming aligner, Novoalign. We report methodology for pooled hybridization capture with pre-enrichment, indexed multiplexing of up to 48 individuals or non-indexed pooled sequencing of up to 92 individuals with as little as 70 ng of DNA per person. Modified solid phase reversible immobilization bead purification strategies enable no sample transfers from sonication in 96-well plates through adapter ligation, resulting in 50% less library preparation reagent consumption. Custom Y-shaped adapters containing novel 7 base pair index sequences with a Hamming distance of ≥2 were directly ligated onto fragmented source DNA eliminating the need for PCR to incorporate indexes, and was followed by a custom blocking strategy using a single oligonucleotide regardless of index sequence. These results were obtained aligning raw reads against the entire genome using Novoalign followed by variant calling of non-indexed pools using SPLINTER or SAMtools for indexed samples. With these pipelines, we find sensitivity and specificity of 99.4% and 99.7% for pooled exome sequencing. Sensitivity, and to a lesser degree specificity, proved to be a function of coverage. For rare variants (≤2% minor allele frequency), we achieved sensitivity and specificity of ≥94.9% and ≥99.99% for custom capture of 2.5 Mb in multiplexed libraries of 22-48 individuals with only ≥5-fold coverage/chromosome, but these parameters improved to ≥98.7 and 100% with 20-fold coverage/chromosome. This highly scalable methodology enables accurate rare variant detection, with or without individual DNA sample indexing, while reducing the amount of required source DNA and total costs through less hybridization reagent consumption, multi-sample sonication in a standard PCR plate, multiplexed pre-enrichment pooling with a single hybridization and lesser sequencing coverage required to obtain high sensitivity.
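The index-design constraint mentioned, 7-bp indexes with pairwise Hamming distance of at least 2, can be met with a simple greedy filter; a sketch (lexicographic candidate order; real designs additionally screen for GC content and homopolymers, which this ignores):

    import itertools

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def greedy_barcodes(length=7, min_dist=2, n_wanted=48):
        """Greedily collect barcodes with pairwise Hamming distance >= min_dist."""
        chosen = []
        for cand in itertools.product("ACGT", repeat=length):
            code = "".join(cand)
            if all(hamming(code, c) >= min_dist for c in chosen):
                chosen.append(code)
                if len(chosen) == n_wanted:
                    break
        return chosen

    print(greedy_barcodes()[:5])   # first few of 48 mutually distant 7-mers

A minimum distance of 2 guarantees that a single-base sequencing error cannot convert one valid index into another.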
Design Modification of Electrophoretic Equipment
NASA Technical Reports Server (NTRS)
Reddick, J. M.; Hirsch, I.
1973-01-01
The improved design of a zone electrophoretic sampler that can be used in mass screening for hemoglobin S, the cause of sickle cell anemia, is reported. The device considered is a high-voltage multicell cellulose acetate unit that requires 5- to 6-minute electrophoresis periods; cells may be activated individually or simultaneously. A multisample hemoglobin applicator standardizes the amount of sample applied and transfers the hemolysate to the electrical wires.
Active Learning in the Classroom: A Muscle Identification Game in a Kinesiology Course
ERIC Educational Resources Information Center
McCarroll, Michele L.; Pohle-Krauza, Rachael J.; Martin, Jennifer L.
2009-01-01
It is often difficult for educators to teach a kinesiology and applied anatomy (KAA) course due to the vast amount of information that students are required to learn. In this study, a convenient sample of students ("class A") from one section of a KAA course played the speed muscle introduction and matching game, which is loosely based off the…
Code of Federal Regulations, 2010 CFR
2010-07-01
...) The term Marijuana is as defined in 21 U.S.C. 801(15) but does not include, for the purposes of this... therein listed, except marijuana, shall be twice the minimum amount required for the most severe mandatory... of marijuana a quantity in excess of the representative sample are seized pursuant to a criminal...
Code of Federal Regulations, 2014 CFR
2014-07-01
...) The term Marijuana is as defined in 21 U.S.C. 801(15) but does not include, for the purposes of this... therein listed, except marijuana, shall be twice the minimum amount required for the most severe mandatory... of marijuana a quantity in excess of the representative sample are seized pursuant to a criminal...
Code of Federal Regulations, 2011 CFR
2011-07-01
...) The term Marijuana is as defined in 21 U.S.C. 801(15) but does not include, for the purposes of this... therein listed, except marijuana, shall be twice the minimum amount required for the most severe mandatory... of marijuana a quantity in excess of the representative sample are seized pursuant to a criminal...
Code of Federal Regulations, 2013 CFR
2013-07-01
...) The term Marijuana is as defined in 21 U.S.C. 801(15) but does not include, for the purposes of this... therein listed, except marijuana, shall be twice the minimum amount required for the most severe mandatory... of marijuana a quantity in excess of the representative sample are seized pursuant to a criminal...
Code of Federal Regulations, 2012 CFR
2012-07-01
...) The term Marijuana is as defined in 21 U.S.C. 801(15) but does not include, for the purposes of this... therein listed, except marijuana, shall be twice the minimum amount required for the most severe mandatory... of marijuana a quantity in excess of the representative sample are seized pursuant to a criminal...
Amendment to examination and investigation sample requirements--FDA. Direct final rule.
1998-09-25
The Food and Drug Administration (FDA) is amending its regulations regarding the collection of twice the quantity of food, drug, or cosmetic estimated to be sufficient for analysis. This action increases the dollar amount that FDA will consider to determine whether to routinely collect a reserve sample of a food, drug, or cosmetic product in addition to the quantity sufficient for analysis. Experience has demonstrated that the current dollar amount does not adequately cover the cost of most quantities sufficient for analysis plus reserve samples. This direct final rule is part of FDA's continuing effort to achieve the objectives of the President's "Reinventing Government" initiative, and is intended to reduce the burden of unnecessary regulations on food, drugs, and cosmetics without diminishing the protection of the public health. Elsewhere in this issue of the Federal Register, FDA is publishing a companion proposed rule under FDA's usual procedures for notice and comment to provide a procedural framework to finalize the rule in the event the agency receives any significant adverse comment and withdraws this direct final rule.
A system architecture for online data interpretation and reduction in fluorescence microscopy
NASA Astrophysics Data System (ADS)
Röder, Thorsten; Geisbauer, Matthias; Chen, Yang; Knoll, Alois; Uhl, Rainer
2010-01-01
In this paper we present a high-throughput sample screening system that enables real-time data analysis and reduction for live cell analysis using fluorescence microscopy. We propose a novel system architecture capable of analyzing a large number of samples during the experiment, thus greatly reducing the post-analysis phase that is common practice today. By utilizing data reduction algorithms, relevant information about the target cells is extracted from the online collected data stream and then used to adjust the experiment parameters in real time, allowing the system to react dynamically to changing sample properties and to control the microscope setup accordingly. The proposed system consists of an integrated DSP-FPGA hybrid solution that ensures the required real-time constraints are met, executes the underlying computer vision algorithms efficiently, and closes the perception-action loop. We demonstrate our approach by addressing the selective imaging of cells with a particular combination of markers. With this novel closed-loop system the amount of superfluous collected data is minimized, while at the same time the information entropy increases.
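The closed perception-action loop can be caricatured in a few lines; in the sketch below, acquire_frame, count_marker_positive, and move_stage are hypothetical stand-ins for the DSP-FPGA image pipeline and the microscope control, and the retention rule is deliberately simplistic:

    def screening_loop(acquire_frame, count_marker_positive, move_stage,
                       positions, min_cells=3):
        """Retain a frame only if enough cells show the marker combination.

        All three callables are hypothetical hardware-interface stand-ins.
        """
        kept = []
        for pos in positions:
            move_stage(pos)                  # action: reposition the sample
            frame = acquire_frame()          # perception: grab an image
            if count_marker_positive(frame) >= min_cells:
                kept.append((pos, frame))    # keep only informative frames
        return kept

Because the decision is made online, uninformative frames are never written out, which is the data-reduction effect described above.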
Dawidowicz, Andrzej L; Wianowska, Dorota
2005-04-29
Pressurised liquid extraction (PLE) is recognised as one of the most effective sample preparation methods. Despite the enhanced extraction power of PLE, the full recovery of an analyte from plant material may require multiple extractions of the same sample. The presented investigations show the possibility of estimating the true concentration of an analyte in plant material employing one-cycle PLE in which plant samples of different weights are used. The performed experiments show a linear dependence between the reciprocal value of the analyte amount (E*) extracted in single-step PLE from a plant matrix and the ratio of plant material mass to extractant volume (m_p/V_s). Hence, time-consuming multi-step PLE can be replaced by a few single-step PLEs performed at different m_p/V_s ratios. The concentrations of rutin in Sambucus nigra L. and of caffeine in tea and coffee estimated by means of the tested procedure are almost the same as their concentrations estimated by multiple PLE.
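The reported linearity suggests a two-parameter fit. In the sketch below, 1/E* is regressed on m_p/V_s and the intercept is read off; interpreting the extrapolation to a vanishing sample-to-solvent ratio as complete extraction is our reading of the relationship, and assumes E* is expressed per unit mass of plant material:

    import numpy as np

    def full_extraction_estimate(mass_to_volume, extracted_amount):
        """Fit 1/E* = a + b*(m_p/V_s) and extrapolate to m_p/V_s -> 0.

        Returns 1/a, the estimated fully extractable amount per unit mass.
        """
        slope, intercept = np.polyfit(np.asarray(mass_to_volume),
                                      1.0 / np.asarray(extracted_amount), 1)
        return 1.0 / intercept

Each data point costs only one single-step extraction, which is why a handful of m_p/V_s ratios can stand in for a full multi-cycle PLE series.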
Li, Hui; Wang, Siyuan; Huang, Zhongliang; Yuan, Xingzhong; Wang, Ziliang; He, Rao; Xi, Yanni; Zhang, Xuan; Tan, Mengjiao; Huang, Jing; Mo, Dan; Li, Changzhu
2018-07-01
The effect of hydrothermal carbonization (HTC) on hydrochar pelletization and on aldehyde/ketone emissions from pellets during storage was investigated. Pellets made from the hydrochar were stored in sealed apparatuses for sampling. The energy consumption during pelletization and the pellets' properties before and after storage, including dimensions, density, moisture content, hardness, aldehyde/ketone emission amounts and rates, and unsaturated fatty acid content, were analyzed. Compared with untreated-sawdust pellets, the hydrochar pellets required more energy for pelletization but achieved improved qualities, resulting in a higher degree of stability during storage. More species and greater amounts of unsaturated fatty acids were found in the hydrochar pellets than in the untreated-sawdust pellets. The unsaturated fatty acid content of the hydrochar pellets decreased with increasing HTC temperature. Higher aldehyde/ketone emission amounts and rates, with a longer emission period, were found for the hydrochar pellets, associated with variations in the structure and unsaturated fatty acid composition of the pellets. Copyright © 2018 Elsevier Ltd. All rights reserved.
A core handling device for the Mars Sample Return Mission
NASA Technical Reports Server (NTRS)
Gwynne, Owen
1989-01-01
A core handling device for use on Mars is being designed. To provide a context for the design study, it was assumed that a Mars Rover/Sample Return (MRSR) Mission would have the following characteristics: a year or more in length; visits by the rover to 50 or more sites; 100 or more meter-long cores being drilled by the rover; and the capability of returning about 5 kg of Mars regolith to Earth. These characteristics lead to the belief that, in order to bring back a variegated set of samples that can address the range of scientific objectives for a MRSR mission to Mars, there needs to be considerable analysis done on board the rover. Furthermore, the discrepancy between the amount of sample gathered and the amount to be returned suggests that there needs to be some method of choosing the optimal set of samples. This type of analysis will require pristine material, unaltered by the drilling process. Since the core drill thermally and mechanically alters the outer diameter (about 10%) of the core sample, this outer area cannot be used. The primary function of the core handling device is to extract subsamples from the core and to position these subsamples, and the core itself if needed, with respect to the various analytical instruments that can be used to perform these analyses.
Lan, Siang-Wen; Weng, Min-Hang; Yang, Ru-Yuan; Chang, Shoou-Jinn; Chung, Yaoh-Sien; Yu, Tsung-Chih; Wu, Chun-Sen
2016-01-01
In this paper, oil-in-gelatin tissue-mimicking materials (TMMs) doped with carbon-based materials, including carbon nanotubes, graphene ink or lignin, were prepared. The volume percentages of the gelatin-based and oil-based mixtures were both around 50%, and the doping amounts were 2 wt %, 4 wt %, and 6 wt %. The effect of doping material and amount on the microwave dielectric properties, including dielectric constant and conductivity, was investigated over an ultra-wide frequency range from 2 GHz to 20 GHz. The coaxial open-ended reflection technique was used to evaluate the microwave dielectric properties. Six measured values at different locations of each sample were averaged, and the standard deviations of all the measured dielectric properties, including dielectric constant and conductivity, were less than one, indicating good uniformity of the prepared samples. Without doping, the dielectric constant was approximately 23 ± 2. Results showed that doping with carbon-based materials increased both the dielectric constant and conductivity by about 5% to 20%, with the increment dependent on the doping amount. By proper selection of the doping amount of the carbon-based materials, the prepared material could match the required dielectric properties of specific tissues. The proposed materials are suitable for phantoms used in microwave medical imaging systems. PMID:28773678
Degeling, Chris; Burton, Lindsay; McCormack, Gavin R
2012-07-01
Risk factors associated with canine obesity include the amount of walking a dog receives. The aim of this study was to investigate the relationships between canine exercise requirements, socio-demographic factors, and dog-walking behaviors in winter in Calgary. Dog owners, from a cross-sectional study which included a random sample of adults, were asked their household income, domicile type, gender, age, education level, number and breed(s) of dog(s) owned, and frequency and time spent dog-walking in a usual week. Canine exercise requirements were found to be significantly (P < 0.05) positively associated with the minutes pet dogs were walked, as was the owner being female. Moreover, dog-walking frequency, but not minutes of dog walking, was significantly associated with residing in attached housing (i.e., apartments). Different types of dogs have different exercise requirements to maintain optimal health. Understanding the role of socio-demographic factors and dog-related characteristics such as exercise requirements on dog-walking behaviors is essential for helping veterinarians and owners develop effective strategies to prevent and manage canine obesity. Furthermore, encouraging regular dog-walking has the potential to improve the health of pet dogs, and that of their owners.
Kirkwood, Jay S; Maier, Claudia; Stevens, Jan F
2013-05-01
At its most ambitious, untargeted metabolomics aims to characterize and quantify all of the metabolites in a given system. Metabolites are often present at a broad range of concentrations and possess diverse physical properties complicating this task. Performing multiple sample extractions, concentrating sample extracts, and using several separation and detection methods are common strategies to overcome these challenges but require a great amount of resources. This protocol describes the untargeted, metabolic profiling of polar and nonpolar metabolites with a single extraction and using a single analytical platform. © 2013 by John Wiley & Sons, Inc.
Molecular epidemiology biomarkers-Sample collection and processing considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, Nina T.; Pfleger, Laura; Berger, Eileen
2005-08-07
Biomarker studies require processing and storage of numerous biological samples with the goals of obtaining a large amount of information and minimizing future research costs. An efficient study design includes provisions for processing of the original samples, such as cryopreservation, DNA isolation, and preparation of specimens for exposure assessment. Use of standard, two-dimensional and nanobarcodes and customized electronic databases assures efficient management of large sample collections and tracking of data analysis results. Standard operating procedures and quality control plans help to protect sample quality and to assure validity of the biomarker data. Specific state, federal and international regulations are in place regarding research with human samples, governing areas including custody, safety of handling, and transport of human samples. Appropriate informed consent must be obtained from the study subjects prior to sample collection and confidentiality of results maintained. Finally, examples of three biorepositories of different scale (European Cancer Study, National Cancer Institute and School of Public Health Biorepository, University of California, Berkeley) are used to illustrate challenges faced by investigators and the ways to overcome them. New software and biorepository technologies are being developed by many companies that will help to bring biological banking to a new level required by molecular epidemiology of the 21st century.
The simulation of the LANFOS-H food radiation contamination detector using Geant4 package
NASA Astrophysics Data System (ADS)
Piotrowski, Lech Wiktor; Casolino, Marco; Ebisuzaki, Toshikazu; Higashide, Kazuhiro
2015-02-01
The recent incident at the Fukushima power plant caused growing concern about radiation contamination and resulted in lowering of the Japanese limit for the permitted amount of 137Cs in food to 100 Bq/kg. To increase safety and ease this concern we are developing LANFOS (Large Food Non-destructive Area Sampler), a compact, easy-to-use detector for the assessment of radiation in food. LANFOS-H, described in this paper, has 4π coverage to assess the amount of 137Cs present, separating it from possible 40K contamination of the food. Food samples therefore do not have to be pre-processed prior to a test and can be consumed after measurements. It is designed for use by non-professionals in homes and small institutions such as schools, showing the safety of the samples, but can also be utilized by specialists, providing a radiation spectrum. Proper assessment of radiation in food in the apparatus requires estimation of the γ conversion factor of the detectors, i.e., how many γ photons will produce a signal. In this paper we show results of a Monte Carlo estimation of this factor for various approximated shapes of fish, vegetables and amounts of rice, performed with the Geant4 package. We find that the conversion factor combined from all the detectors is similar for all food types, around 37%, varying by at most 5% with sample length, much less than for individual detectors. Different inclinations and positions of samples in the detector introduce an uncertainty of 1.4%. This small uncertainty validates the concept of a 4π non-destructive apparatus.
Massilia sp. BS-1, a novel violacein-producing bacterium isolated from soil.
Agematu, Hitosi; Suzuki, Kazuya; Tsuya, Hiroaki
2011-01-01
A novel bacterium, Massilia sp. BS-1, producing violacein and deoxyviolacein was isolated from a soil sample collected from Akita Prefecture, Japan. The 16S ribosomal DNA of strain BS-1 displayed 93% homology with its nearest violacein-producing neighbor, Janthinobacterium lividum. Strain BS-1 grew well in a synthetic medium, but required both L-tryptophan and a small amount of L-histidine to produce violacein.
Aloisio, Michelangelo; Bortot, Barbara; Gandin, Ilaria; Severini, Giovanni Maria; Athanasakis, Emmanouil
2017-02-01
Chimerism status evaluation of post-allogeneic hematopoietic stem cell transplantation samples is essential to predict post-transplant relapse. The most commonly used technique capable of detecting small increments of chimerism is quantitative real-time PCR. Although this method is already used in several laboratories, previously described protocols often lack sensitivity, and the amount of DNA required for each chimerism analysis is too high. In the present study, we compared a novel semi-nested allele-specific real-time PCR (sNAS-qPCR) protocol with our in-house standard allele-specific real-time PCR (gAS-qPCR) protocol. We selected two genetic markers and analyzed the technical parameters (slope, y-intercept, R², and standard deviation) useful for determining the performance of the two protocols. The sNAS-qPCR protocol showed better sensitivity and precision. Moreover, the sNAS-qPCR protocol requires as input only 10 ng of DNA, which is at least 10-fold less than the gAS-qPCR protocols described in the literature. Finally, the proposed sNAS-qPCR protocol could prove very useful for performing chimerism analysis with a small amount of DNA, as in the case of blood cell subsets.
Chromatin Immunoprecipitation (ChIP) Protocol for Low-abundance Embryonic Samples.
Rehimi, Rizwan; Bartusel, Michaela; Solinas, Francesca; Altmüller, Janine; Rada-Iglesias, Alvaro
2017-08-29
Chromatin immunoprecipitation (ChIP) is a widely-used technique for mapping the localization of post-translationally modified histones, histone variants, transcription factors, or chromatin-modifying enzymes at a given locus or on a genome-wide scale. The combination of ChIP assays with next-generation sequencing (i.e., ChIP-Seq) is a powerful approach to globally uncover gene regulatory networks and to improve the functional annotation of genomes, especially of non-coding regulatory sequences. ChIP protocols normally require large amounts of cellular material, thus precluding the applicability of this method to investigating rare cell types or small tissue biopsies. In order to make the ChIP assay compatible with the amount of biological material that can typically be obtained in vivo during early vertebrate embryogenesis, we describe here a simplified ChIP protocol in which the number of steps required to complete the assay were reduced to minimize sample loss. This ChIP protocol has been successfully used to investigate different histone modifications in various embryonic chicken and adult mouse tissues using low to medium cell numbers (5 × 10⁴ – 5 × 10⁵ cells). Importantly, this protocol is compatible with ChIP-seq technology using standard library preparation methods, thus providing global epigenomic maps in highly relevant embryonic tissues.
He, Guoai; Tan, Liming; Liu, Feng; Huang, Lan; Huang, Zaiwang; Jiang, Liang
2017-01-01
Controlling grain size in polycrystalline nickel-base superalloys is vital for obtaining the required mechanical properties. Typically, a uniform and fine grain size is required throughout the forging process to realize superplastic deformation. The amount of strain plays a dominant role in driving the dynamic recrystallization (DRX) process and regulating the grain size of the alloy during hot forging. In this article, a high-throughput double-cone specimen was introduced to yield a wide range of strain in a single sample. Continuous variation of effective strain ranging from 0.23 to 1.65 across the whole sample was achieved after reaching a height reduction of 70%. Grain size was measured to decrease from the edge to the center of the specimen with increasing effective strain. Small misorientations tended to develop near the grain boundaries, manifested micromechanically as dislocation pile-up. After the dislocation density reached a critical value, the DRX process was initiated in the higher-deformation regions, leading to refinement of the grain size. During this process, transformations from low-angle grain boundaries (LAGBs) to high-angle grain boundaries (HAGBs) and from subgrains to DRX grains were found to occur. After completion of the DRX process, the newly formed grains exhibited similar orientations within their grain boundaries. PMID:28772514
Triaxial testing of Lopez Fault gouge at 150 MPa mean effective stress
Scott, D.R.; Lockner, D.A.; Byerlee, J.D.; Sammis, C.G.
1994-01-01
Triaxial compression experiments were performed on samples of natural granular fault gouge from the Lopez Fault in Southern California. This material consists primarily of quartz and has a self-similar grain size distribution thought to result from natural cataclasis. The experiments were performed at a constant mean effective stress of 150 MPa, to expose the volumetric strains associated with shear failure. The failure strength is parameterized by the coefficient of internal friction μ, based on the Mohr-Coulomb failure criterion. Samples of remoulded Lopez gouge have internal friction μ = 0.6 ± 0.02. In experiments where the ends of the sample are constrained to remain axially aligned, suppressing strain localisation, the sample compacts before failure and dilates persistently after failure. In experiments where one end of the sample is free to move laterally, the strain localises to a single oblique fault at around the point of failure; some dilation occurs but does not persist. A comparison of these experiments suggests that dilation is confined to the region of shear localisation in a sample. Overconsolidated samples have slightly larger failure strengths than normally consolidated samples, and smaller axial strains are required to cause failure. A large amount of dilation occurs after failure in heavily overconsolidated samples, suggesting that dilation is occurring throughout the sample. Undisturbed samples of Lopez gouge, cored from the outcrop, have internal friction in the range μ = 0.4-0.6; the upper end of this range corresponds to the value established for remoulded Lopez gouge. Some kind of natural heterogeneity within the undisturbed samples is probably responsible for their low, variable strength. In samples of simulated gouge, with a more uniform grain size, active cataclasis during axial loading leads to large amounts of compaction. Larger axial strains are required to cause failure in simulated gouge, but the failure strength is similar to that of natural Lopez gouge. Use of the Mohr-Coulomb failure criterion to interpret the results from this study, and other recent studies on intact rock and granular gouge, leads to values of μ that depend on the loading configuration and the intact or granular state of the sample. Conceptual models are advanced to account for these discrepancies. The consequences for strain-weakening of natural faults are also discussed. © 1994 Birkhäuser Verlag.
Determination of trace metals in spirits by total reflection X-ray fluorescence spectrometry
NASA Astrophysics Data System (ADS)
Siviero, G.; Cinosi, A.; Monticelli, D.; Seralessandri, L.
2018-06-01
Eight spirit samples were analyzed for trace metal content with the Horizon Total Reflection X-Ray Fluorescence (TXRF) spectrometer. The expected single-metal amount is at the ng/g level in a mixed aqueous/organic matrix, thus requiring a sample preparation method capable of achieving suitable limits of detection. On-site enrichment and Atmospheric Pressure-Vapor Phase Decomposition allowed the detection of Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Sr and Pb with detection limits ranging from 0.1 ng/g to 4.6 ng/g. These results highlight how the synergy between instrument and sample preparation strategy may foster the use of TXRF as a fast and reliable technique for the determination of trace elements in spirit samples, whether for quality control or risk assessment purposes.
Urakami, K; Saito, Y; Fujiwara, Y; Watanabe, C; Umemoto, K; Godo, M; Hashimoto, K
2000-12-01
Thermal desorption (TD) techniques followed by capillary GC/MS were applied to the analysis of residual solvents in bulk pharmaceuticals. Solvents desorbed from samples by heating were cryofocused at the head of a capillary column prior to GC/MS analysis. This method requires a very small amount of sample and no sample pretreatment. The desorption temperature for each sample was set about 20 °C above its melting point. The relative standard deviations of the method, tested by performing six consecutive analyses of 8 different samples, were 1.1 to 3.1%, and the analytical results for residual solvents were in agreement with those obtained by direct injection of N,N-dimethylformamide solutions of the samples into the GC. This novel TD/GC/MS method was demonstrated to be very useful for the identification and quantification of residual solvents in bulk pharmaceuticals.
Microvolume protein concentration determination using the NanoDrop 2000c spectrophotometer.
Desjardins, Philippe; Hansen, Joel B; Allen, Michael
2009-11-04
Traditional spectrophotometry requires placing samples into cuvettes or capillaries. This is often impractical given the limited sample volumes typical of protein analysis. The Thermo Scientific NanoDrop 2000c Spectrophotometer solves this issue with an innovative sample retention system that holds microvolume samples between two measurement surfaces using the surface tension properties of liquids, enabling the quantification of samples in volumes as low as 0.5-2 microliters. The elimination of cuvettes or capillaries allows real-time changes in path length, which reduces the measurement time while greatly increasing the dynamic range of protein concentrations that can be measured. The need for dilutions is also eliminated, and preparation for sample quantification is straightforward, as the measurement surfaces can simply be wiped with a laboratory wipe. This video article presents modifications to traditional protein concentration determination methods for quantification of microvolume amounts of protein using A280 absorbance readings or the BCA colorimetric assay.
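The photometry behind this is ordinary Beer-Lambert arithmetic; the sketch below uses the generic assumption that A280 = 1.0 corresponds to 1 mg/mL (the true extinction coefficient is protein-specific) to show why a shorter real-time path length keeps concentrated microvolume samples on scale:

    def protein_conc_mg_per_ml(a280, path_cm, ext_coeff=1.0):
        # Beer-Lambert: c = A / (eps * l), with eps in (mg/mL)^-1 cm^-1.
        # ext_coeff = 1.0 is the generic "1 Abs = 1 mg/mL" assumption.
        return a280 / (ext_coeff * path_cm)

    # With a 1 mm path, A280 = 0.75 corresponds to 7.5 mg/mL; in a 1 cm
    # cuvette the same sample would read A280 = 7.5, beyond reliable range.
    print(protein_conc_mg_per_ml(a280=0.75, path_cm=0.1))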
Scene recognition based on integrating active learning with dictionary learning
NASA Astrophysics Data System (ADS)
Wang, Chengxi; Yin, Xueyan; Yang, Lin; Gong, Chengrong; Zheng, Caixia; Yi, Yugen
2018-04-01
Scene recognition is a significant topic in the field of computer vision. Most existing scene recognition models require a large amount of labeled training samples to achieve good performance. However, labeling images manually is a time-consuming task and often unrealistic in practice. In order to obtain satisfactory recognition results when labeled samples are insufficient, this paper proposes a scene recognition algorithm named Integrating Active Learning and Dictionary Learning (IALDL). IALDL adopts projective dictionary pair learning (DPL) as the classifier and introduces an active learning mechanism into DPL to improve its performance. When constructing the sampling criterion in active learning, IALDL considers both uncertainty and representativeness to effectively select useful unlabeled samples from a given sample set for expanding the training dataset. Experimental results on three standard databases demonstrate the feasibility and validity of the proposed IALDL.
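As a generic sketch of such a sampling criterion (not the exact IALDL formulation, whose details the abstract does not give), unlabeled samples can be scored by predictive uncertainty plus representativeness and the top scorers sent for labeling; the weighting alpha is an illustrative assumption:

    import numpy as np

    def select_informative(probs, features, k=10, alpha=0.5):
        # probs: (n, classes) predicted class probabilities for unlabeled samples
        # features: (n, d) feature vectors of the same samples
        eps = 1e-12
        uncertainty = -(probs * np.log(probs + eps)).sum(axis=1)   # entropy
        f = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
        representativeness = (f @ f.T).mean(axis=1)  # mean cosine similarity to pool
        score = alpha * uncertainty + (1.0 - alpha) * representativeness
        return np.argsort(score)[::-1][:k]           # indices worth labeling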
NASA Astrophysics Data System (ADS)
O'Brien, R. E.; Ridley, K. J.; Canagaratna, M. R.; Croteau, P.; Budisulistiorini, S. H.; Cui, T.; Green, H. S.; Surratt, J. D.; Jayne, J. T.; Kroll, J. H.
2016-12-01
A thorough understanding of the sources, evolution, and budgets of atmospheric organic aerosol requires widespread measurements of the amount and chemical composition of atmospheric organic carbon in the condensed phase (within particles and water droplets). Collecting such datasets requires substantial spatial and temporal (long-term) coverage, which can be challenging when relying on online measurements by state-of-the-art research-grade instrumentation (such as that used in atmospheric chemistry field studies). Instead, samples are routinely collected using relatively low-cost techniques, such as aerosol filters, for offline analysis of their chemical composition. However, measurements made by online and offline instruments can be fundamentally different, leading to disparities between data from field studies and those from more routine monitoring. To better connect these two approaches, and take advantage of the benefits of each, we have developed a method to introduce collected samples into online aerosol instruments using nebulization. Because nebulizers typically require tens to hundreds of milliliters of solution, limiting this technique to large samples, we developed a new ultrasonic micro-nebulizer that requires only small volumes (tens of microliters) of sample for chemical analysis. The nebulized (resuspended) sample is then sent into a high-resolution Aerosol Mass Spectrometer (AMS), a widely used instrument that provides key information on the chemical composition of aerosol particulate matter (elemental ratios, carbon oxidation state, etc.), measurements that are not typically made for collected atmospheric samples. Here, we compare AMS data collected using standard online techniques with our offline analysis, demonstrating the utility of this new technique for aerosol filter samples. We then apply this approach to organic aerosol filter samples collected in remote regions, as well as rainwater samples from across the US. These data provide information on sample composition and changes in key chemical characteristics across locations and seasons.
Vig, Asger Laurberg; Haldrup, Kristoffer; Enevoldsen, Nikolaj; Thilsted, Anil Haraksingh; Eriksen, Johan; Kristensen, Anders; Feidenhans'l, Robert; Nielsen, Martin Meedom
2009-11-01
We propose and describe a microfluidic system for high intensity x-ray measurements. The required open access to a microfluidic channel is provided by an out-of-plane capillary burst valve (CBV). The functionality of the out-of-plane CBV is characterized with respect to the diameter of the windowless access hole, ranging from 10 to 130 μm. Maximum driving pressures from 22 to 280 mbar, corresponding to refresh rates of the exposed sample from 300 Hz to 54 kHz, are demonstrated. The microfluidic system is tested at beamline ID09b at the ESRF synchrotron radiation facility in Grenoble, and x-ray scattering measurements are shown to be feasible and to require only very limited amounts of sample, <1 ml/h of measurements without recapturing of sample. With small adjustments of the present chip design, scattering angles up to 30 degrees can be achieved without shadowing effects, and integration of on-chip mixing and spectroscopy appears straightforward.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vig, Asger Laurberg; Enevoldsen, Nikolaj; Thilsted, Anil Haraksingh
2009-11-15
We propose and describe a microfluidic system for high intensity x-ray measurements. The required open access to a microfluidic channel is provided by an out-of-plane capillary burst valve (CBV). The functionality of the out-of-plane CBV is characterized with respect to the diameter of the windowless access hole, ranging from 10 to 130 μm. Maximum driving pressures from 22 to 280 mbar, corresponding to refresh rates of the exposed sample from 300 Hz to 54 kHz, are demonstrated. The microfluidic system is tested at beamline ID09b at the ESRF synchrotron radiation facility in Grenoble, and x-ray scattering measurements are shown to be feasible and to require only very limited amounts of sample, <1 ml/h of measurements without recapturing of sample. With small adjustments of the present chip design, scattering angles up to 30 degrees can be achieved without shadowing effects, and integration of on-chip mixing and spectroscopy appears straightforward.
Zafra-Gómez, Alberto; Garballo, Antonio; Morales, Juan C; García-Ayuso, Luis E
2006-06-28
A fast, simple, and reliable method for the isolation and determination of the vitamins thiamin, riboflavin, niacin, pantothenic acid, pyridoxine, folic acid, cyanocobalamin, and ascorbic acid in food samples is proposed. The most relevant advantages of the proposed method are the simultaneous determination of the eight more common vitamins in enriched food products and a reduction of the time required for quantitative extraction, because the method consists merely of the addition of a precipitation solution and centrifugation of the sample. Furthermore, this method saves a substantial amount of reagents as compared with official methods, and minimal sample manipulation is achieved due to the few steps required. The chromatographic separation is carried out on a reverse phase C18 column, and the vitamins are detected at different wavelengths by either fluorescence or UV-visible detection. The proposed method was applied to the determination of water-soluble vitamins in supplemented milk, infant nutrition products, and milk powder certified reference material (CRM 421, BCR) with recoveries ranging from 90 to 100%.
Measurement of curium in marine samples
NASA Astrophysics Data System (ADS)
Schneider, D. L.; Livingston, H. D.
1984-06-01
Measurement of the small but detectable amounts of curium in the environment requires reliable, accurate, and sensitive analytical methods. The radiochemical separation developed at Woods Hole is briefly reviewed, with specific reference to radiochemical interferences in the alpha spectrometric measurement of curium nuclides and to the relative amounts of interferences expected in different oceanic regimes and sample types. Detection limits for 242Cm and 244Cm are ultimately limited by their presence in the 243Am used as the curium yield monitor. Environmental standard reference materials are evaluated with regard to curium. The marine literature is reviewed and curium measurements are discussed in relation to their source of introduction to the environment. Sources include ocean dumping of low-level radioactive wastes and discharges from nuclear fuel reprocessing activities. In particular, the question of a detectable presence of 244Cm in global fallout from nuclear weapons testing is addressed and shown to be essentially negligible. Analyses of Scottish coastal sediments show traces of 242Cm and 244Cm activity which are believed to originate from transport from sources in the Irish Sea.
NASA Astrophysics Data System (ADS)
Kuntamalla, Srinivas; Lekkala, Ram Gopal Reddy
2014-10-01
Heart rate variability (HRV) is an important dynamic variable of the cardiovascular system, which operates on multiple time scales. In this study, multiscale entropy (MSE) analysis is applied to HRV signals taken from Physiobank to discriminate congestive heart failure (CHF) patients from healthy young and elderly subjects. The discrimination power of the MSE method decreases as the amount of data is reduced, and the lowest amount of data at which there is a clear discrimination between CHF and normal subjects is found to be 4000 samples. Further, this method failed to discriminate CHF from healthy elderly subjects. In view of this, the Reduced Data Dualscale Entropy Analysis method is proposed to reduce the data size required (to as low as 500 samples) for clearly discriminating the CHF patients from young and elderly subjects with only two scales. Further, an easy-to-interpret index is derived using this new approach for the diagnosis of CHF. This index shows 100% accuracy and correlates well with the pathophysiology of heart failure.
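For reference, a minimal sketch of the MSE computation described above: coarse-grain the series at each scale, then take its sample entropy. The defaults m = 2 and r = 0.15·SD are common choices rather than values stated in the abstract, and the O(N²) match counting is acceptable at the few-thousand-sample record lengths discussed:

    import numpy as np

    def coarse_grain(x, scale):
        # Average consecutive non-overlapping windows of length `scale`.
        n = len(x) // scale
        return x[:n * scale].reshape(n, scale).mean(axis=1)

    def sample_entropy(x, m=2, r=0.15):
        # SampEn = -ln(A/B), template length m, tolerance r * SD.
        x = np.asarray(x, float)
        tol = r * x.std()
        def matches(length):
            t = np.array([x[i:i + length] for i in range(len(x) - length)])
            d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
            return ((d <= tol).sum() - len(t)) / 2.0   # exclude self-matches
        b, a = matches(m), matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    def multiscale_entropy(x, max_scale=20):
        return [sample_entropy(coarse_grain(x, s)) for s in range(1, max_scale + 1)]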
Rodriguez-Iruretagoiena, A; Rementeria, A; Zaldibar, B; de Vallejuelo, S Fdez-Ortiz; Gredilla, A; Arana, G; de Diego, A
2016-10-15
The effects exerted by metals on oysters are still a matter of debate and require more detailed studies. In this work we investigated whether the health status of oysters is affected by the amount of metals present in the sediments of their habitat. Sediments and oysters were collected in the tidal part of the estuary of the Oka River (Basque Country), representative of other mesotidal, well-mixed and short estuaries of the European Atlantic coast. The concentrations of 14 elements were determined in all the samples. Several biomarkers were also measured in the soft tissues of oysters. According to the concentrations found, the sediments were classified as non-toxic or slightly toxic. In good agreement, the histological alterations observed in oysters were not severe. Interestingly, in those sampling sites where the sediments showed relatively high metal concentrations, the metallic content in oysters was lower, and vice versa. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Razzaqi, A.; Liaghat, Gh.; Razmkhah, O.
2017-10-01
In this paper, the mechanical properties of aluminum (Al) matrix nano-composites fabricated by the powder metallurgy (PM) method have been investigated. Alumina (Al2O3) nano-particles were added in amounts of 0, 2.5, 5, 7.5 and 10 weight percent (wt%). For this purpose, Al powder (particle size: 20 µm) and nano-Al2O3 (particle size: 20 nm) in various weight percentages were mixed and milled in a blade mixer for 15 minutes at 1500 rpm. The obtained mixture was then compacted, by means of a two-piece die with uniaxial cold pressing at about 600 MPa and cold isostatic pressing (CIP), into the forms required for the different tests. After that, the samples were sintered at 600 °C for 90 minutes. Compression and three-point bending tests were performed on the samples, and the results led to the optimum amount of nano-particles for achieving the best mechanical properties.
A new approach to counting measurements: Addressing the problems with ISO-11929
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klumpp, John Allan; Poudel, Deepesh; Miller, Guthrie
We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: “what is the probability distribution of the true amount in the sample, given the data?” The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the “measurement strength” that depends only on measurement-stage count quantities. Here, we show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow up on unusually high samples using an “action threshold” on the measurement strength which is similar to the decision threshold recommended by the current standard. Finally, we further recommend that the measurement lab perform large background studies in order to characterize non-constancy of background, including possible time correlation of background.
A new approach to counting measurements: Addressing the problems with ISO-11929
Klumpp, John Allan; Poudel, Deepesh; Miller, Guthrie
2017-12-23
We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: “what is the probability distribution of the true amount in the sample, given the data?” The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the “measurement strength” that depends only on measurement-stage count quantities. Here, we show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow up on unusually high samples using an “action threshold” on the measurement strength which is similar to the decision threshold recommended by the current standard. Finally, we further recommend that the measurement lab perform large background studies in order to characterize non-constancy of background, including possible time correlation of background.
A new approach to counting measurements: Addressing the problems with ISO-11929
NASA Astrophysics Data System (ADS)
Klumpp, John; Miller, Guthrie; Poudel, Deepesh
2018-06-01
We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: "what is the probability distribution of the true amount in the sample, given the data?" The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the "measurement strength" that depends only on measurement-stage count quantities. We show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow up on unusually high samples using an "action threshold" on the measurement strength which is similar to the decision threshold recommended by the current standard. We further recommend that the measurement lab perform large background studies in order to characterize non-constancy of background, including possible time correlation of background.
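The stated odds relation can be exercised directly; the measurement strength S would come from the paper's analytical formula, so the value below is hypothetical:

    def posterior_odds(prior_odds, measurement_strength):
        # Posterior odds that the sample true value exceeds epsilon,
        # per the relation stated above (valid when sources are rare).
        return measurement_strength * prior_odds

    def odds_to_probability(odds):
        return odds / (1.0 + odds)

    # Rare sources (prior odds 1:1000) and a hypothetical strength S = 500
    post = posterior_odds(1.0 / 1000.0, 500.0)
    print(f"P(true value > eps | data) = {odds_to_probability(post):.3f}")  # 0.333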
Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.
Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian
2014-01-01
In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
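A minimal sketch of the greedy incremental idea for the single-shell discrete case: choose a subset of existing directions that keeps the minimal pairwise angular separation large, treating antipodal points as equivalent. This is the fast greedy variant mentioned above, not the MILP formulation:

    import numpy as np

    def greedy_spherical_code(candidates, k):
        # candidates: (n, 3) unit vectors; returns indices of k chosen points.
        chosen = [0]                                # arbitrary starting point
        sep = np.full(len(candidates), np.inf)      # angle to the chosen set
        while len(chosen) < k:
            last = candidates[chosen[-1]]
            ang = np.arccos(np.clip(np.abs(candidates @ last), 0.0, 1.0))
            sep = np.minimum(sep, ang)              # update separations
            sep[chosen] = -np.inf                   # never re-pick a point
            chosen.append(int(np.argmax(sep)))      # farthest-point choice
        return np.asarray(chosen)

    # Pick 30 directions out of 1000 random unit vectors
    rng = np.random.default_rng(0)
    cand = rng.normal(size=(1000, 3))
    cand /= np.linalg.norm(cand, axis=1, keepdims=True)
    subset = cand[greedy_spherical_code(cand, 30)]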
Real-time specific surface area measurements via laser-induced breakdown spectroscopy
Washburn, Kathryn E.; Birdwell, Justin E.; Howard, James E.
2017-01-01
From healthcare to cosmetics to environmental science, the specific surface area (SSA) of micro- and mesoporous materials or products can greatly affect their chemical and physical properties. SSA results are also widely used to examine source rocks in conventional and unconventional petroleum resource plays. Despite its importance, current methods to measure SSA are often cumbersome, time-consuming, or require cryogenic consumables (e.g., liquid nitrogen). These methods are not amenable to high-throughput environments, have stringent sample preparation requirements, and are not practical for use in the field. We present a new application of laser-induced breakdown spectroscopy for rapid measurement of SSA. This study evaluates geological samples, specifically organic-rich oil shales, but the approach is expected to be applicable to many other types of materials. The method uses optical emission spectroscopy to examine laser-generated plasma and quantify the amount of argon adsorbed to a sample during an inert gas purge. The technique can accommodate a wide range of sample sizes and geometries and has the potential for field use. These advantages for SSA measurement combined with the simultaneous acquisition of composition information make this a promising new approach for characterizing geologic samples and other materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Liang; Nachtergaele, Sigrid; Seddon, Annela M.
This paper utilizes cyclodextrin-based host-guest chemistry in a microfluidic device to modulate the crystallization of membrane proteins and the process of concentration of membrane protein samples. Methyl-β-cyclodextrin (MBCD) can efficiently capture a wide variety of detergents commonly used for the stabilization of membrane proteins by sequestering detergent monomers. Reaction Center (RC) from Blastochloris viridis was used here as a model system. In the process of concentrating membrane protein samples, MBCD was shown to break up free detergent micelles and prevent them from being concentrated. The addition of an optimal amount of MBCD to the RC sample captured loosely bound detergent from the protein-detergent complex and improved sample homogeneity, as characterized by dynamic light scattering. Using plug-based microfluidics, RC crystals were grown in the presence of MBCD, giving a different morphology and space group than crystals grown without MBCD. The crystal structure of RC crystallized in the presence of MBCD was consistent with the changes in packing and crystal contacts hypothesized for removal of loosely bound detergent. The incorporation of MBCD into a plug-based microfluidic crystallization method allows efficient use of limited membrane protein sample by reducing the amount of protein required and combining sparse matrix screening and optimization in one experiment. The use of MBCD for detergent capture can be expanded to develop cyclodextrin-derived molecules for fine-tuned detergent capture and thus modulate membrane protein crystallization in an even more controllable way.
Deuterium Retention and Physical Sputtering of Low Activation Ferritic Steel
NASA Astrophysics Data System (ADS)
Hino, T.; Yamaguchi, K.; Yamauchi, Y.; Hirohata, Y.; Tsuzuki, K.; Kusama, Y.
2005-04-01
Low activation materials have to be developed for fusion demonstration reactors. Ferritic steel, vanadium alloy and SiC/SiC composite are candidate materials for the first wall, vacuum vessel and blanket components, respectively. Although changes in mechanical and thermal properties owing to neutron irradiation have been investigated so far, there is little data on plasma-material interactions, such as fuel hydrogen retention and erosion. In the present study, deuterium retention and physical sputtering of the low activation ferritic steel F82H were investigated by using a deuterium ion irradiation apparatus. After a ferritic steel sample was irradiated by 1.7 keV D+ ions, the weight loss was measured to obtain the physical sputtering yield. The sputtering yield was 0.04, comparable to that of stainless steel. In order to obtain the retained amount of deuterium, thermal desorption spectroscopy (TDS) was applied to the irradiated sample. The retained deuterium desorbed at temperatures ranging from 450 K to 700 K, in the forms of DHO, D2, D2O and hydrocarbons. Hence, the retained deuterium can be reduced by baking at a relatively low temperature. The fluence dependence of the retained amount of deuterium was measured by changing the ion fluence. In the ferritic steel without mechanical polish, the retained amount was large even when the fluence was low. In such a case, a large amount of deuterium was trapped in the surface oxide layer containing O and C. When the fluence was large, the thickness of the surface oxide layer was reduced by ion sputtering, and the retained amount in the oxide layer decreased. In the case of a high fluence, the retained amount of deuterium became comparable to that of ferritic steel with mechanical polish or SS 316L, and one order of magnitude smaller than that of graphite. When ferritic steel is used, the surface oxide layer must be removed to reduce fuel hydrogen retention. A ferritic steel sample was also exposed to the environment of the JFT-2M tokamak at JAERI, after which the deuterium retention was examined. The result was roughly the same as in the deuterium ion irradiation experiments.
A Rapid and Accurate Extraction Procedure for Analysing Free Amino Acids in Meat Samples by GC-MS
Barroso, Miguel A.; Ruiz, Jorge; Antequera, Teresa
2015-01-01
This study evaluated the use of a mixer mill as the homogenization tool for the extraction of free amino acids in meat samples, with the main goal of analyzing a large number of samples in the shortest time while minimizing sample amount and solvent volume. Ground samples (0.2 g) were mixed with 1.5 mL of 0.1 M HCl and homogenized in the mixer mill. The final biphasic system was separated by centrifugation. The supernatant was deproteinized, derivatized and analyzed by gas chromatography. This procedure showed a high extracting ability, especially in samples with high free amino acid content (recovery = 88.73-104.94%). It also showed low limits of detection and quantification (3.8 · 10⁻⁴-6.6 · 10⁻⁴ μg μL⁻¹ and 1.3 · 10⁻³-2.2 · 10⁻² μg μL⁻¹, respectively) for most amino acids, adequate precision (2.15-20.15% run-to-run), and a linear response for all amino acids (R² = 0.741-0.998) in the range of 1-100 µg mL⁻¹. Moreover, it takes less time and requires smaller amounts of sample and solvent than conventional techniques. Thus, it is a cost- and time-efficient homogenization tool for the extraction of free amino acids from meat samples, and an adequate option for routine analysis. PMID:25873963
Using multi-spectral landsat imagery to examine forest health trends at Fort Benning, Georgia
Shawna L. Reid; Joan L. Walker; Abigail Schaaf
2016-01-01
Assessing vegetation health attributes like canopy density or live crown ratio and ecological processes such as growth or succession ultimately requires direct measures of plant communities. However, on-the-ground sampling is labor and time intensive, effectively limiting the amount of forest that can be evaluated. Radiometric data collected with a variety of sensors ...
Test Results for Caustic Demand Measurements on Tank 241-AX-101 and Tank 241-AX-103 Archive Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doll, Stephanie R.; Bolling, Stacie D.
Caustic demand testing is used to determine the amount of caustic required to neutralize species present in the Hanford tank waste and obtain a target molarity of free hydroxide for tank corrosion control. The presence and quantity of hydroxide-consuming analytes are just as important in determining the caustic demand as is the amount of free hydroxide present. No single data point can accurately predict whether a satisfactory hydroxide level is being met, as it is dependent on multiple factors (e.g., free hydroxide, buffers, amphoteric metal hydroxides, bicarbonate, etc.). This enclosure contains the caustic demand, scanning electron microscopy (SEM), polarized light microscopy (PLM), and X-ray diffraction (XRD) analysis for the tank 241-AX-101 (AX-101) and 241-AX-103 (AX-103) samples. The work was completed to fulfill a customer request outlined in the test plan, WRPS-1505529, “Test Plan and Procedure for Caustic Demand Testing on Tank 241-AX-101 and Tank 241-AX-103 Archive Samples.” The work results will provide a baseline to support planned retrieval of AX-101 and AX-103.
An intelligent load shedding scheme using neural networks and neuro-fuzzy.
Haidar, Ahmed M A; Mohamed, Azah; Al-Dabbagh, Majid; Hussain, Aini; Masoum, Mohammad
2009-12-01
Load shedding is an essential requirement for maintaining the security of modern power systems, particularly in competitive energy markets. This paper proposes an intelligent scheme for fast and accurate load shedding that uses neural networks to predict the possible loss of load at an early stage and neuro-fuzzy logic to determine the amount of load to be shed in order to avoid a cascading outage. A large-scale electrical power system has been considered to validate the performance of the proposed technique in determining the amount of load shed. The proposed techniques can provide tools for improving the reliability and continuity of power supply. This was confirmed by the results obtained in this research, sample results of which are given in this paper.
Fast Ordered Sampling of DNA Sequence Variants.
Greenberg, Anthony J
2018-05-04
Explosive growth in the amount of genomic data is matched by increasing power of consumer-grade computers. Even applications that require powerful servers can be quickly tested on desktop or laptop machines if we can generate representative samples from large data sets. I describe a fast and memory-efficient implementation of an on-line sampling method developed for tape drives 30 years ago. Focusing on genotype files, I test the performance of this technique on modern solid-state and spinning hard drives, and show that it performs well compared to a simple sampling scheme. I illustrate its utility by developing a method to quickly estimate genome-wide patterns of linkage disequilibrium (LD) decay with distance. I provide open-source software that samples loci from several variant format files, a separate program that performs LD decay estimates, and a C++ library that lets developers incorporate these methods into their own projects. Copyright © 2018 Greenberg.
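The abstract does not name the algorithm, but the description matches the classic reservoir-sampling family; the sketch below is Algorithm R modified to return draws in input order, assuming that is the "ordered" aspect:

    import random

    def reservoir_sample_ordered(stream, k, seed=None):
        # Uniformly sample k items from a stream of unknown length,
        # then return them in their original order.
        rng = random.Random(seed)
        reservoir = []                        # holds (index, item) pairs
        for i, item in enumerate(stream):
            if i < k:
                reservoir.append((i, item))
            else:
                j = rng.randrange(i + 1)      # replace with prob k/(i+1)
                if j < k:
                    reservoir[j] = (i, item)
        reservoir.sort()                      # restore input order
        return [item for _, item in reservoir]

    # Example: sample 5 loci from a stream of a million variant records
    print(reservoir_sample_ordered(range(10**6), 5, seed=42))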
High-speed fixed-target serial virus crystallography
Roedig, Philip; Ginn, Helen M.; Pakendorf, Tim; ...
2017-06-19
Here, we report a method for serial X-ray crystallography at X-ray free-electron lasers (XFELs), which allows for full use of the current 120-Hz repetition rate of the Linac Coherent Light Source (LCLS). Using a micropatterned silicon chip in combination with the high-speed Roadrunner goniometer for sample delivery, we were able to determine the crystal structures of the picornavirus bovine enterovirus 2 (BEV2) and the cytoplasmic polyhedrosis virus type 18 polyhedrin, with total data collection times of less than 14 and 10 min, respectively. Our method requires only micrograms of sample and should therefore broaden the applicability of serial femtosecond crystallography to challenging projects for which only limited sample amounts are available. By synchronizing the sample exchange to the XFEL repetition rate, our method allows for the most efficient use of the limited beam time available at XFELs and should enable a substantial increase in sample throughput at these facilities.
High-speed fixed-target serial virus crystallography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roedig, Philip; Ginn, Helen M.; Pakendorf, Tim
Here, we report a method for serial X-ray crystallography at X-ray free-electron lasers (XFELs), which allows for full use of the current 120-Hz repetition rate of the Linac Coherent Light Source (LCLS). Using a micropatterned silicon chip in combination with the high-speed Roadrunner goniometer for sample delivery, we were able to determine the crystal structures of the picornavirus bovine enterovirus 2 (BEV2) and the cytoplasmic polyhedrosis virus type 18 polyhedrin, with total data collection times of less than 14 and 10 min, respectively. Our method requires only micrograms of sample and should therefore broaden the applicability of serial femtosecond crystallography to challenging projects for which only limited sample amounts are available. By synchronizing the sample exchange to the XFEL repetition rate, our method allows for the most efficient use of the limited beam time available at XFELs and should enable a substantial increase in sample throughput at these facilities.
High-speed fixed-target serial virus crystallography
Roedig, Philip; Ginn, Helen M.; Pakendorf, Tim; Sutton, Geoff; Harlos, Karl; Walter, Thomas S.; Meyer, Jan; Fischer, Pontus; Duman, Ramona; Vartiainen, Ismo; Reime, Bernd; Warmer, Martin; Brewster, Aaron S.; Young, Iris D.; Michels-Clark, Tara; Sauter, Nicholas K.; Kotecha, Abhay; Kelly, James; Rowlands, David J.; Sikorsky, Marcin; Nelson, Silke; Damiani, Daniel S.; Alonso-Mori, Roberto; Ren, Jingshan; Fry, Elizabeth E.; David, Christian; Stuart, David I.; Wagner, Armin; Meents, Alke
2017-01-01
We report a method for serial X-ray crystallography at X-ray free electron lasers (XFELs), which allows for full use of the current 120 Hz repetition rate of the Linac Coherent Light Source (LCLS). Using a micro-patterned silicon chip in combination with the high-speed Roadrunner goniometer for sample delivery we were able to determine the crystal structures of a picornavirus, bovine enterovirus 2 (BEV2), and the cytoplasmic polyhedrosis virus type 18 polyhedrin. Total data collection times were less than 14 and 10 minutes, respectively. Our method requires only micrograms of sample and will therefore broaden the applicability of serial femtosecond crystallography to challenging projects for which only limited sample amounts are available. By synchronizing the sample exchange to the XFEL repetition rate, our method allows for the most efficient use of the limited beamtime available at XFELs and should enable a substantial increase in sample throughput at these facilities. PMID:28628129
Representation of complex probabilities and complex Gibbs sampling
NASA Astrophysics Data System (ADS)
Salcedo, Lorenzo Luis
2018-03-01
Complex weights appear in physics problems which are beyond a straightforward importance-sampling treatment, as required in Monte Carlo calculations. This is the well-known sign problem. The complex Langevin approach amounts to effectively constructing a positive distribution on the complexified manifold reproducing the expectation values of the observables through their analytical extension. Here we discuss the direct construction of such positive distributions, paying attention to their localization on the complexified manifold. Explicit localized representations are obtained for complex probabilities defined on Abelian and non-Abelian groups. The viability and performance of a complex version of the heat bath method, based on such representations, is analyzed.
Huang, Yu-San; Liu, Ju-Tsung; Lin, Li-Chang; Lin, Cheng-Huang
2003-03-01
The R-(-)- and S-(+)-isomers of 3,4-methylenedioxymethamphetamine (MDMA) and its metabolite 3,4-methylenedioxyamphetamine (MDA) were prepared, identified by gas chromatography/mass spectrometry (GC/MS) and then used as standards in a series of capillary electrophoresis (CE) experiments. Using these R-(-)- and S-(+)-isomers, the distribution of (RS)-MDA and (RS)-MDMA stereoisomers in clandestine tablets and suspect urine samples were identified. Several electrophoretic parameters, such as the concentration of beta-cyclodextrin used in the electrophoretic separation and the amount of organic solvents required for the separation, were optimized.
Choe, Sanggil; Lee, Jaesin; Choi, Hyeyoung; Park, Yujin; Lee, Heesang; Pyo, Jaesung; Jo, Jiyeong; Park, Yonghoon; Choi, Hwakyung; Kim, Suncheun
2012-11-30
Information about sources of supply, trafficking routes, distribution patterns and conspiracy links can be obtained from methamphetamine profiling. The precursor and synthetic method used in clandestine manufacture can be estimated from the analysis of minor impurities contained in methamphetamine. Also, the similarity between samples can be evaluated using the peaks that appear in chromatograms. In South Korea, methamphetamine was the most popular drug, but the total amount of methamphetamine seized throughout the country was very small. Therefore, finding links between samples would be more important there than the other uses of methamphetamine profiling. Many Asian countries, including Japan and South Korea, have been using the method developed by the National Research Institute of Police Science of Japan. The method uses a gas chromatography-flame ionization detector (GC-FID), a DB-5 column and four internal standards. It was developed to increase the amount of impurities and minimize the amount of methamphetamine. After GC-FID analysis, the raw data have to be processed. The data processing steps are very complex and require a lot of time and effort. In this study, Microsoft Visual Basic for Applications (VBA) modules were developed to handle these data processing steps. The modules collect the results into an Excel file and then correct the retention time shift and response deviation generated during sample preparation and instrumental analysis. The developed modules were tested for their performance using 10 samples from 5 different cases. The processed results were analyzed with the Pearson correlation coefficient for similarity assessment, and the correlation coefficient of two samples from the same case was more than 0.99. When the modules were applied to 131 seized methamphetamine samples, four samples from two different cases were found to have a common origin, and the chromatograms of the four samples appeared visually identical. The developed VBA modules could process raw GC-FID data very quickly and easily. They could also assess the similarity between samples by peak pattern recognition using whole peaks, without spectral identification of each peak appearing in the chromatogram. The results collectively suggest that the modules would be useful tools to augment similarity assessment between seized methamphetamine samples. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
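Once the peak tables are aligned onto a common retention-time grid (the hard part handled by the VBA modules), the similarity score itself is plain Pearson correlation; a sketch with hypothetical peak-area vectors:

    import numpy as np

    def profile_similarity(peaks_a, peaks_b):
        # Pearson correlation between two impurity profiles given as
        # peak-area vectors aligned onto a common retention-time grid.
        a = np.asarray(peaks_a, float)
        b = np.asarray(peaks_b, float)
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float((a * b).mean())

    # Profiles from the same case should score near 1 (the study saw > 0.99)
    r = profile_similarity([120, 40, 5, 88, 10], [118, 42, 6, 90, 9])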
Simple algorithms for digital pulse-shape discrimination with liquid scintillation detectors
NASA Astrophysics Data System (ADS)
Alharbi, T.
2015-01-01
The development of compact, battery-powered digital liquid scintillation neutron detection systems for field applications requires digital pulse processing (DPP) algorithms with minimum computational overhead. To meet this demand, two DPP algorithms for the discrimination of neutrons and γ-rays with liquid scintillation detectors were developed and examined by using an NE213 liquid scintillation detector in a mixed radiation field. The first algorithm is based on the relation between the amplitude of a current pulse at the output of a photomultiplier tube and the amount of charge contained in the pulse. A figure-of-merit (FOM) value of 0.98 with a 450 keVee (electron equivalent energy) threshold was achieved with this method when pulses were sampled at 250 MSample/s with 8-bit resolution. Compared to the similar charge-comparison method, this method requires only a single integration window, thereby reducing the amount of computation by approximately 40%. The second approach is a digital version of the trailing-edge constant-fraction discrimination method. A FOM value of 0.84 with an energy threshold of 450 keVee was achieved with this method. Compared with the similar rise-time discrimination method, this method requires a single time pick-off, thereby reducing the amount of computation by approximately 50%. The algorithms described in this work are useful for developing portable detection systems for applications such as homeland security, radiation dosimetry and environmental monitoring.
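A sketch of the first algorithm's discrimination parameter as described, peak amplitude divided by integrated charge; the cut threshold is a detector-dependent assumption:

    import numpy as np

    def amplitude_to_charge_ratio(pulse, baseline=0.0):
        # Slow scintillation components in neutron pulses add charge
        # without raising the peak, lowering this ratio relative to gammas.
        p = np.asarray(pulse, float) - baseline
        return p.max() / p.sum()

    def classify(pulse, threshold=0.2):   # threshold: calibration-dependent
        return "neutron" if amplitude_to_charge_ratio(pulse) < threshold else "gamma"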
NASA Astrophysics Data System (ADS)
Marcó P., L. M.; Jiménez, E.; Hernández C., E. A.; Rojas, A.; Greaves, E. D.
2001-11-01
The method of quantification using the Compton peak as an internal standard, developed in a previous work, was applied to the routine determination of Fe, Cu, Zn and Se in serum samples from normal individuals and cancer patients by total reflection X-ray fluorescence spectrometry. Samples were classified according to age and sex of the donor, in order to determine reference values for normal individuals. Results indicate that the Zn/Cu ratio and the Cu concentration could prove to be useful tools for cancer diagnosis. Significant differences in these parameters between the normal and cancer group were found for all age ranges. The multielemental character of the technique, coupled with the small amounts of sample required and the short analysis time make it a valuable tool in clinical analysis.
Information content of household-stratified epidemics.
Kinyanjui, T M; Pellis, L; House, T
2016-09-01
Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances on household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final size data has been the traditional source, increasingly time-series infection data from households are becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
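The entropy-based ranking of designs can be sketched as follows. This is an illustrative Python fragment, assuming one-dimensional posterior samples have already been drawn for each candidate design (the Bayesian fitting itself is problem-specific and omitted); a lower posterior entropy indicates a more informative design:

```python
import numpy as np

def posterior_entropy(samples: np.ndarray, bins: int = 50) -> float:
    """Histogram estimate of the (differential) Shannon entropy of a posterior."""
    dens, edges = np.histogram(samples, bins=bins, density=True)
    widths = np.diff(edges)
    nz = dens > 0
    return float(-np.sum(dens[nz] * widths[nz] * np.log(dens[nz])))

# Two hypothetical designs: a tight posterior carries more information.
rng = np.random.default_rng(1)
design_a = rng.normal(0.3, 0.05, 10000)   # informative design
design_b = rng.normal(0.3, 0.20, 10000)   # less informative design
print(posterior_entropy(design_a) < posterior_entropy(design_b))  # True
```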
Gypsum treated fly ash as a liner for waste disposal facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sivapullaiah, Puvvadi V., E-mail: siva@civil.iisc.ernet.in; Baig, M. Arif Ali, E-mail: reach2arif@gmail.com
2011-02-15
Fly ash has potential application in the construction of base liners for waste containment facilities. While most fly ashes gain strength with curing, the permeabilities they attain often do not meet the basic requirement for a liner material. In the present work, an attempt has been made to reduce the hydraulic conductivity of two selected class F fly ashes by adding lime at contents of up to 10%. The use of gypsum, which is known to accelerate the gain in unconfined compressive strength by increasing lime reactivity, was investigated as a means of further reducing the hydraulic conductivity. Hydraulic conductivities of the compacted specimens were determined in the laboratory using the falling head method. It was observed that the addition of gypsum reduces the hydraulic conductivity of the lime-treated fly ashes. The reduction in hydraulic conductivity for specimens containing gypsum is far greater (up to 1000-fold) at high lime contents than at low lime contents, whereas the gain in strength on adding gypsum is relatively larger at lower lime contents. This is because excess lime added to fly ash is not effectively converted into pozzolanic compounds, and even the presence of gypsum does not activate these reactions with excess lime. On the other hand, a higher amount of lime in the presence of sulphate produces more cementitious compounds, which block the pores in the fly ash. The consequent reduction in the hydraulic conductivity of fly ash would also be beneficial in reducing the leachability of trace elements present in the fly ash when it is used as a base liner.
VALIDATION FOR THE PERMANGANATE DIGESTION OF REILLEX HPQ ANION RESIN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kyser, E.
2009-09-23
The flowsheet for the digestion of Reillex™ HPQ was validated both under the traditional alkaline conditions and under strongly acidic conditions. Due to difficulty in performing a pH adjustment in the large tank where this flowsheet must be performed, the recommended digestion conditions were changed from pH 8-10 to 8 M HNO3. Thus, no pH adjustment of the solution is required prior to performing the permanganate addition and digestion, and the need to sample the digestion tank to confirm an appropriate pH range for digestion may be avoided. Neutralization of the acidic digestion solution will be performed after completion of the resin digestion cycle. The amount of permanganate required for this type of resin (Reillex™ HPQ) was increased from 1 kg/L resin to 4 kg/L resin to reduce the residual resin solids to a minimal amount (<5%). The length of digestion time at 70 °C remains unchanged at 15 hours. These parameters are not optimized but are expected to be adequate for the conditions. The flowsheet generates a significant amount of fine manganese dioxide (MnO2) solids (1.71 kg/L resin) and involves the generation of a significant liquid volume due to the low solubility of permanganate. However, since only two batches of resin (40 L each) are expected to be digested, the total waste generated is limited.
Preservation of Liquid Biological Samples
NASA Technical Reports Server (NTRS)
Putcha, Lakshmi (Inventor); Nimmagudda, Ramalingeshwara R. (Inventor)
2000-01-01
The present invention provides a method of preserving a liquid biological sample, comprising the step of: contacting said liquid biological sample with a preservative comprising sodium benzoate in an amount of at least about 0.15% of the sample (weight/volume) and citric acid in an amount of at least about 0.025% of the sample (weight/volume).
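A worked example of the claimed minimum amounts (percent weight/volume means grams per 100 mL of sample); the helper function below is hypothetical, while the percentages come directly from the claim:

```python
def min_preservative_grams(sample_ml: float) -> dict:
    """Minimum preservative masses for a given sample volume (per the claim)."""
    return {
        "sodium_benzoate_g": 0.15 / 100 * sample_ml,   # at least 0.15% w/v
        "citric_acid_g": 0.025 / 100 * sample_ml,      # at least 0.025% w/v
    }

print(min_preservative_grams(100.0))
# {'sodium_benzoate_g': 0.15, 'citric_acid_g': 0.025}
```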
Cheow, Lih Feng; Viswanathan, Ramya; Chin, Chee-Sing; Jennifer, Nancy; Jones, Robert C; Guccione, Ernesto; Quake, Stephen R; Burkholder, William F
2014-10-07
Homogeneous assay platforms for measuring protein-ligand interactions are highly valued due to their potential for high-throughput screening. However, implementing these multiplexed assays in conventional microplate formats is costly due to the large amounts of reagents required and the need for automation. We implemented a homogeneous fluorescence anisotropy-based binding assay in an automated microfluidic chip to simultaneously interrogate >2300 pairwise interactions. We demonstrated the utility of this platform in determining the binding affinities between chromatin-regulatory proteins and different post-translationally modified histone peptides. The microfluidic chip assay produces results comparable to conventional microtiter plate assays, yet requires 2 orders of magnitude less sample and an order of magnitude fewer pipetting steps. This approach enables the use of small samples for medium-scale screening and could ease the bottleneck of large-scale protein purification.
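As an illustration of how binding affinities are typically extracted from such anisotropy data, the sketch below fits a single-site binding isotherm. It is a generic reconstruction with invented numbers, not the authors' analysis pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def isotherm(L, r_free, r_bound, Kd):
    """Single-site binding: anisotropy vs titrant concentration L."""
    return r_free + (r_bound - r_free) * L / (Kd + L)

L = np.array([0.1, 0.3, 1, 3, 10, 30, 100])                 # uM, hypothetical
r = np.array([0.06, 0.08, 0.12, 0.18, 0.24, 0.27, 0.28])    # measured anisotropy
popt, _ = curve_fit(isotherm, L, r, p0=[0.05, 0.30, 5.0])
print(f"Kd ~ {popt[2]:.1f} uM")
```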
Comparison of attrition test methods: ASTM standard fluidized bed vs jet cup
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, R.; Goodwin, J.G. Jr.; Jothimurugesan, K.
2000-05-01
Attrition resistance is one of the key design parameters for catalysts used in fluidized-bed and slurry phase types of reactors. The ASTM fluidized-bed test has been one of the most commonly used attrition resistance evaluation methods; however, it requires the use of 50 g samples--a large amount for catalyst development studies. Recently a test using the jet cup, requiring only 5 g samples, has been proposed. In the present study, two series of spray-dried iron catalysts were evaluated using both the ASTM fluidized-bed test and a test based on the jet cup to determine their comparability. It is shown that the two tests give comparable results. This paper, by reporting a comparison of the jet-cup test with the ASTM standard, provides a basis for utilizing the more efficient jet cup with confidence in catalyst attrition studies.
Be-7 as a tracer for short-term soil surface changes - opportunities and limitations
NASA Astrophysics Data System (ADS)
Baumgart, Philipp
2013-04-01
Within the last 20 years, the cosmogenic nuclide beryllium-7 has been successfully established as a suitable tracer element for detecting soil surface changes with high accuracy. Soil erosion rates from single precipitation events, in particular, are the focus of various studies owing to the short radioactive half-life of the Be-7 isotope. Strong sorption to the topmost soil particles and immobility at typical pH values enable fine-scaled erosion modelling down to 2 mm increments. However, some important limitations require particular attention, from sampling through to the final data evaluation: for example, the practical realisation of fine-increment soil collection, the limited number of samples measurable per campaign due to the short radioactive half-life, and the specific requirements of the detector measurements. Both the high potential and the challenging limitations are presented, as well as future perspectives of this tracer method.
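The half-life constraint mentioned above is easy to quantify. Below is a small Python sketch (the half-life value is approximate) of the decay correction that must be applied between sampling and gamma measurement, which is what limits how many samples can be counted per campaign:

```python
import math

T_HALF_DAYS = 53.3                       # Be-7 half-life, approximate
LAMBDA = math.log(2) / T_HALF_DAYS       # decay constant, 1/days

def activity_at_sampling(measured_bq: float, days_elapsed: float) -> float:
    """Decay-correct a measured Be-7 activity back to the sampling date."""
    return measured_bq * math.exp(LAMBDA * days_elapsed)

print(activity_at_sampling(1.0, 53.3))   # ~2.0: one half-life has passed
```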
34 CFR Appendix C to Part 379 - Calculating Required Matching Amount
Code of Federal Regulations, 2013 CFR
2013-07-01
... 34 Education 2 2013-07-01 2013-07-01 false Calculating Required Matching Amount C Appendix C to Part 379 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF.... C Appendix C to Part 379—Calculating Required Matching Amount 1. The method for calculating the...
34 CFR Appendix C to Part 379 - Calculating Required Matching Amount
Code of Federal Regulations, 2014 CFR
2014-07-01
... 34 Education 2 2014-07-01 2013-07-01 true Calculating Required Matching Amount C Appendix C to Part 379 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF.... C Appendix C to Part 379—Calculating Required Matching Amount 1. The method for calculating the...
Stability Analysis of ISS Medications
NASA Technical Reports Server (NTRS)
Wotring, V. E.
2014-01-01
It is known that medications degrade over time, and that extreme storage conditions will hasten their degradation. The temperature and humidity conditions of the ISS have been shown to be within the ideal ranges for medication storage, but the effects of other environmental factors, like elevated exposure to radiation, have not yet been evaluated. Current operational procedures ensure that ISS medications are re-stocked before expiration, but this may not be possible on long duration exploration missions. For this reason, medications that have experienced long duration storage on the ISS were returned to JSC for analysis to determine any unusual effects of aging in the low-Earth orbit environment. METHODS Medications were obtained by the JSC Pharmacy from commercial distributors and were re-packaged by JSC pharmacists to conserve upmass and volume. All medication doses were part of the ISS crew medical kit and were transported to the ISS via NASA's Shuttle Transportation System (Space Shuttle). After 568 days of storage, the medications were removed from the supply chain and returned to Earth on a Dragon (SpaceX) capsule. Upon return to Earth, medications were transferred to temperature- and humidity-controlled environmental chambers until analysis. Nine medications were chosen on the basis of their availability for study. The medications included several of the most heavily used by US crewmembers: 2 sleep aids, 2 antihistamines/decongestants, 3 pain relievers, an antidiarrheal and an alertness medication. Each medication was available at a single time point; analysis of the same medication at multiple time points was not possible. Because the samples examined in this study were obtained opportunistically from medical supplies, there were no control samples available (i.e., samples aged for a similar period of time on the ground), a significant limitation of this study. Medications were analyzed using the HPLC/MS methods described in the United States Pharmacopeia (USP) to measure the amount of intact active ingredient and to identify and quantify degradation products. Some analyses were conducted by an independent analytical laboratory, but certain (Schedule) medications could not be shipped to their facility and were analyzed at JSC. RESULTS Nine medications were analyzed with respect to active pharmaceutical ingredient (API) and degradant amounts. Results were compared to the USP requirements for API and degradant/impurity content for every FDA-approved medication. One medication met USP requirements 5 months after its expiration date. Four of the nine medications tested (44%) met USP requirements up to 8 months post-expiration. Another 3 medications (33%) met USP guidelines 2-3 months before expiration. One medication, a compound classed by the FDA as a dietary supplement and sometimes used as a sleep aid, failed to meet USP requirements at 11 months post-expiration. CONCLUSION Analysis of each medication at a single time point provides limited information on the stability of a medication stored in particular conditions; it is not possible to predict how long a medication may be safe and effective from these data. Notwithstanding, five of the nine medications tested (56%) met USP requirements for API and degradants/impurities at least 5 months past their expiration dates.
The single compound that failed to meet USP requirements is not regulated as strictly as prescription medications are during manufacture; it is unknown if this medication would have met the requirements prior to flight. Notably, it was the furthest beyond its expiration date. Only more comprehensive analysis of flight-aged samples compared to appropriate ground controls will permit determination of spaceflight effects on medication stability.
NASA Astrophysics Data System (ADS)
van Geldern, Robert; Myrttinen, Anssi; Becker, Veith; Barth, Johannes A. C.
2010-05-01
The isotopic composition (δ13C) of dissolved inorganic carbon (DIC), in combination with DIC concentration measurements, can be used to quantify geochemical trapping of CO2 in water. This is of great importance in monitoring the fate of CO2 in the subsurface in CO2 injection projects. When CO2 mixes with water, a shift in the δ13C values, as well as an increase in DIC concentrations, is observed in the CO2-H2O system. However, when using standard on-site titration methods, it is often challenging to determine accurate in-situ DIC concentrations. This may be due to CO2 degassing and CO2 exchange between the sample and the atmosphere during titration, causing a change in the pH value, or due to other unfavourable conditions such as turbid water samples or limited availability of fluid samples. A way to resolve this problem is to determine the DIC concentration and the carbon isotopic composition simultaneously, using a standard continuous flow isotope ratio mass spectrometry (CF-IRMS) setup with a Gasbench II coupled to a Delta plus XP mass spectrometer. During sampling, in order to avoid atmospheric contact, water samples taken from the borehole fluid sampler should be transferred directly into a suitable container, such as a gasbag. Also, to avoid isotope fractionation due to biological activity in the sample, it is recommended to stabilize the gasbags with HgCl2 prior to sampling for the subsequent stable isotope analysis. The DIC concentration of the samples can be determined from the area of the sample peaks in a chromatogram from a CF-IRMS analysis, since it is directly proportional to the CO2 generated by the reaction of the water with H3PO4. A set of standards with known DIC concentrations should be prepared by mixing NaHCO3 with DIC-free water. Since the DIC concentrations of samples taken from CO2 injection sites are expected to be exceptionally high due to the large amounts of added CO2, the DIC concentration range of the standards should be set high enough to cover the sample concentrations. In order to assure methodological reproducibility, this 'calibration set' should be included in every sequence analysed with the Gasbench CF-IRMS system. The standards, therefore, should be treated in the same way as the samples. For accurate determination, it is essential to know the exact amount of water in the vial and the density of the sample; this requires weighing each vial before and after injection of the water sample. For stable isotope analysis, the required signal height can be adjusted via the sample amount, so the method is suitable for analysing samples with highly differing DIC concentrations. The reproducibility and accuracy of the quantitative analysis of dissolved inorganic carbon need to be verified by independent control standards, treated as samples. This study was conducted as a part of the R&D programme CLEAN, which is funded by the German Federal Ministry of Education in the framework of the programme GEOTECHNOLOGIEN. We would like to thank GDF SUEZ for permitting us to conduct sampling campaigns at their site.
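The calibration step described above reduces to a linear fit of peak area against known DIC concentration, which is then inverted for the samples. A minimal Python sketch with invented numbers (areas are assumed to be normalized per gram of water, using the before/after vial weighings):

```python
import numpy as np

# NaHCO3 standards: known DIC (mmol/L) vs CF-IRMS peak area per g of water.
std_dic  = np.array([5.0, 10.0, 20.0, 40.0])    # illustrative values
std_area = np.array([12.1, 24.5, 48.8, 97.9])

slope, intercept = np.polyfit(std_area, std_dic, 1)   # linear calibration

def dic_from_area(area_per_gram: float) -> float:
    """Invert the calibration line for an unknown sample."""
    return slope * area_per_gram + intercept

print(round(dic_from_area(60.0), 1))   # mmol/L
```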
NASA Astrophysics Data System (ADS)
Sabatini, Francesca; Lluveras-Tenorio, Anna; Degano, Ilaria; Kuckova, Stepanka; Krizova, Iva; Colombini, Maria Perla
2016-11-01
This study deals with the identification of anthraquinoid molecular markers in standard dyes, reference lakes, and paint model systems using a micro-invasive and nondestructive technique, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-ToF-MS). Red anthraquinoid lakes, such as madder lake, carmine lake, and Indian lac, have been the most widely used for painting purposes since ancient times. From an analytical point of view, identifying lakes in paint samples is challenging, and developing methods that maximize the information achievable while minimizing the amount of sample needed is of paramount importance. The employed method was tested on less than 0.5 mg of reference samples and required only minimal sample preparation, entailing a hydrofluoric acid extraction. The method is fast and versatile because the same sample (once it has been spotted on the steel plate) can be re-analyzed, testing both positive and negative modes in a few minutes. The MALDI mass spectra collected in the two analysis modes were studied and compared with LDI and simulated mass spectra in order to highlight the peculiar behavior of the anthraquinones in the MALDI process. Both ionization modes were assessed for each species. The effect of the different paint binders on dye identification was also evaluated through the analysis of paint model systems. Finally, the method was successful in detecting madder lake in archeological samples from Greek wall paintings and on an Italian funerary clay vessel, demonstrating its capability to identify dyes in small amounts of highly degraded samples.
Comparing Cumberland With Other Samples Analyzed by Curiosity
2014-12-16
This graphic compares the amount of an organic chemical named chlorobenzene detected in the Cumberland rock sample with the amounts detected in samples from three other Martian surface targets analyzed by NASA's Curiosity Mars rover.
Lignin Sensor Based On Flash-Pyrolysis Mass Spectrometry
NASA Technical Reports Server (NTRS)
Kwack, Eug Y.; Lawson, Daniel D.; Shakkottai, Parthasarathy
1990-01-01
New lignin sensor takes only a few minutes to measure lignin content of specimen of wood, pulp, paper, or similar material. Includes flash pyrolizer and ion-trap detector that acts as mass spectrometer. Apparatus measures amount of molecular fragments of lignin in pyrolysis products of samples. Helpful in controlling digestors in paper mills to maintain required lignin content, and also in bleaching plants, where good control of bleaching becomes possible if lignin content can be determined quickly.
A Dimensionality Reduction Technique for Enhancing Information Context.
1980-06-01
table, memory requirements for the difference arrays are based on the FORTRAN G programming language as implemented on an IBM 360/67. Single... the greatest amount of insight. All studies were performed on an IBM 360/67. Transformation... numerical results were produced as well as two... the origin to (19,19,19,19,19,19,19,19,19,19). Two classes were generated in each case. The samples were synthetically derived using the IBM 360/67 and
45 CFR 160.404 - Amount of a civil money penalty.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 1 2011-10-01 2011-10-01 false Amount of a civil money penalty. 160.404 Section... RELATED REQUIREMENTS GENERAL ADMINISTRATIVE REQUIREMENTS Imposition of Civil Money Penalties § 160.404 Amount of a civil money penalty. (a) The amount of a civil money penalty will be determined in accordance...
45 CFR 160.404 - Amount of a civil money penalty.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Amount of a civil money penalty. 160.404 Section... RELATED REQUIREMENTS GENERAL ADMINISTRATIVE REQUIREMENTS Imposition of Civil Money Penalties § 160.404 Amount of a civil money penalty. (a) The amount of a civil money penalty will be determined in accordance...
45 CFR 160.404 - Amount of a civil money penalty.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 1 2012-10-01 2012-10-01 false Amount of a civil money penalty. 160.404 Section... RELATED REQUIREMENTS GENERAL ADMINISTRATIVE REQUIREMENTS Imposition of Civil Money Penalties § 160.404 Amount of a civil money penalty. (a) The amount of a civil money penalty will be determined in accordance...
45 CFR 160.404 - Amount of a civil money penalty.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Amount of a civil money penalty. 160.404 Section... RELATED REQUIREMENTS GENERAL ADMINISTRATIVE REQUIREMENTS Imposition of Civil Money Penalties § 160.404 Amount of a civil money penalty. (a) The amount of a civil money penalty will be determined in accordance...
45 CFR 160.404 - Amount of a civil money penalty.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 1 2010-10-01 2010-10-01 false Amount of a civil money penalty. 160.404 Section... RELATED REQUIREMENTS GENERAL ADMINISTRATIVE REQUIREMENTS Imposition of Civil Money Penalties § 160.404 Amount of a civil money penalty. (a) The amount of a civil money penalty will be determined in accordance...
30 CFR 243.8 - When will MMS suspend my obligation to comply with an order?
Code of Federal Regulations, 2010 CFR
2010-07-01
...), and: (1) If the amount under appeal is less than $10,000 or does not require payment of a specified... the amount under appeal is less than $1,000 or does not require payment, MMS will suspend your... paying any demanded amount or complying with any other requirement pending appeal. However, voluntarily...
Andrews, Karen W; Roseland, Janet M; Gusev, Pavel A; Palachuvattil, Joel; Dang, Phuong T; Savarala, Sushma; Han, Fei; Pehrsson, Pamela R; Douglass, Larry W; Dwyer, Johanna T; Betz, Joseph M; Saldanha, Leila G; Bailey, Regan L
2017-01-01
Background: Multivitamin/mineral products (MVMs) are the dietary supplements most commonly used by US adults. During manufacturing, some ingredients are added in amounts exceeding the label claims to compensate for expected losses during the shelf life. Establishing the health benefits and harms of MVMs requires accurate estimates of nutrient intake from MVMs based on measures of actual rather than labeled ingredient amounts. Objectives: Our goals were to determine relations between analytically measured and labeled ingredient content and to compare adult MVM composition with Recommended Dietary Allowances (RDAs) and Tolerable Upper Intake Levels. Design: Adult MVMs were purchased while following a national sampling plan and chemically analyzed for vitamin and mineral content with certified reference materials in qualified laboratories. For each ingredient, predicted mean percentage differences between analytically obtained and labeled amounts were calculated with the use of regression equations. Results: For 12 of 18 nutrients, most products had labeled amounts at or above RDAs. The mean measured content of all ingredients (except thiamin) exceeded labeled amounts (overages). Predicted mean percentage differences exceeded labeled amounts by 1.5–13% for copper, manganese, magnesium, niacin, phosphorus, potassium, folic acid, riboflavin, and vitamins B-12, C, and E, and by ∼25% for selenium and iodine, regardless of labeled amount. In contrast, thiamin, vitamin B-6, calcium, iron, and zinc had linear or quadratic relations between the labeled and percentage differences, with ranges from −6.5% to 8.6%, −3.5% to 21%, 7.1% to 29.3%, −0.5% to 16.4%, and −1.9% to 8.1%, respectively. Analytically adjusted ingredient amounts are linked to adult MVMs reported in the NHANES 2003–2008 via the Dietary Supplement Ingredient Database (http://dsid.usda.nih.gov) to facilitate more accurate intake quantification. Conclusions: Vitamin and mineral overages were measured in adult MVMs, most of which already meet RDAs. Therefore, nutrient overexposures from supplements combined with typical food intake may have unintended health consequences, although this would require further examination. PMID:27974309
An integral conservative gridding-algorithm using Hermitian curve interpolation.
Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K
2008-11-07
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
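The central idea can be sketched compactly: interpolate the cumulative integral of the histogrammed data and difference it at the new bin edges, so the total is conserved by construction. The Python sketch below substitutes SciPy's monotone Hermite (PCHIP) interpolant for the paper's single-parameter Hermitian curve, since PCHIP likewise suppresses overshoot and undershoot; it is an illustration of the approach, not the published algorithm:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def rebin_conservative(old_edges, old_counts, new_edges):
    """Re-bin histogram data by interpolating its cumulative integral."""
    cum = np.concatenate([[0.0], np.cumsum(old_counts)])  # integrated data
    F = PchipInterpolator(old_edges, cum)                 # Hermite-type curve
    return np.diff(F(new_edges))                          # new bin contents

old_edges = np.array([0., 1., 2., 3., 4.])
counts    = np.array([4., 9., 1., 6.])
new_edges = np.linspace(0., 4., 9)
new_counts = rebin_conservative(old_edges, counts, new_edges)
print(new_counts.sum())   # ~20.0: the overall integral is conserved
```

Because the new counts are differences of a monotone interpolant of the non-decreasing cumulative sum, they are also non-negative, which matches the requirement for positive definite quantities stated in the abstract.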
Yoshida, Jin; Koda, Shigeki; Nishida, Shozo; Yoshida, Toshiaki; Miyajima, Keiko; Kumagai, Shinji
2011-03-01
The aim of the present study was to evaluate the measurement of contamination by antineoplastic drugs for safer handling of such drugs by medical workers. We investigated the relationship between the contamination level of antineoplastic drugs and the conditions of their handling. Air samples and wipe samples were collected from equipment in the preparation rooms of five hospitals (hospitals A-E). These samples were analyzed for the amounts of cyclophosphamide (CPA), fluorouracil (5FU), gemcitabine (GEM), and platinum-containing drugs (Pt). Twenty-four-hour urine samples were collected from the pharmacists who handled or audited the antineoplastic drugs, and were analyzed for CPA and Pt. Pt was detected in air samples inside the biological safety cabinet (BSC) in hospital B. Antineoplastic drugs were detected in wipe samples of the BSC in hospitals A, B, D, and E and of other equipment in the preparation rooms in hospitals A, B, C, and D. Cyclophosphamide and 5FU were detected in wipe samples of the air-conditioner filter in hospital A, and CPA was detected in that of hospital D. Cyclophosphamide was detected in urine samples of workers in hospitals B, D, and E. The contamination level of antineoplastic drugs appeared to be related to the amount of drugs handled, the cleaning methods used for the equipment, and the workers' skill in maintaining negative pressure inside a vial. In order to reduce contamination by, and exposure to, antineoplastic drugs in the hospital work environment to very close to zero, comprehensive safety precautions, including adequate mixing and cleaning methods, are required in addition to BSCs and closed-system devices.
The effect of changes to the method of estimating the pollen count from aerobiological samples.
Sikoparija, Branko; Pejak-Šikoparija, Tatjana; Radišić, Predrag; Smith, Matt; Soldevilla, Carmen Galán
2011-02-01
Pollen data have been recorded at Novi Sad in Serbia since 2000. The adopted method of producing pollen counts has been the use of five longitudinal transects that examine 19.64% of total sample surface. However, counting five transects is time consuming and so the main objective of this study is to investigate whether reducing the number to three or even two transects would have a significant effect on daily average and bi-hourly pollen concentrations, as well as the main characteristics of the pollen season and long-term trends. This study has shown that there is a loss of accuracy in daily average and bi-hourly pollen concentrations (an increase in % ERROR) as the sub-sampling area is reduced from five to three or two longitudinal transects. However, this loss of accuracy does not impact on the main characteristics of the season or long-term trends. As a result, this study can be used to justify changing the sub-sampling method used at Novi Sad from five to three longitudinal transects. The use of two longitudinal transects has been ruled out because, although quicker, the counts produced: (a) had the greatest amount of % ERROR, (b) altered the amount of influence of the independent variable on the dependent variable (the slope in regression analysis) and (c) the total sampled surface (7.86%) was less than the minimum requirement recommended by the European Aerobiology Society working group on Quality Control (at least 10% of total slide area).
Estimation of methanogen biomass via quantitation of coenzyme M
Elias, Dwayne A.; Krumholz, Lee R.; Tanner, Ralph S.; Suflita, Joseph M.
1999-01-01
Determination of the role of methanogenic bacteria in an anaerobic ecosystem often requires quantitation of the organisms. Because of the extreme oxygen sensitivity of these organisms and the inherent limitations of cultural techniques, an accurate biomass value is very difficult to obtain. We standardized a simple method for estimating methanogen biomass in a variety of environmental matrices. In this procedure we used the thiol biomarker coenzyme M (CoM) (2-mercaptoethanesulfonic acid), which is known to be present in all methanogenic bacteria. A high-performance liquid chromatography-based method for detecting thiols in pore water (A. Vairavamurthy and M. Mopper, Anal. Chim. Acta 78:363–370, 1990) was modified in order to quantify CoM in pure cultures, sediments, and sewage water samples. The identity of the CoM derivative was verified by using liquid chromatography-mass spectroscopy. The assay was linear for CoM amounts ranging from 2 to 2,000 pmol, and the detection limit was 2 pmol of CoM/ml of sample. CoM was not adsorbed to sediments. The methanogens tested contained an average of 19.5 nmol of CoM/mg of protein and 0.39 ± 0.07 fmol of CoM/cell. Environmental samples contained an average of 0.41 ± 0.17 fmol/cell based on most-probable-number estimates. CoM was extracted by using 1% tri-(N)-butylphosphine in isopropanol. More than 90% of the CoM was recovered from pure cultures and environmental samples. We observed no interference from sediments in the CoM recovery process, and the method could be completed aerobically within 3 h. Freezing sediment samples resulted in 46 to 83% decreases in the amounts of detectable CoM, whereas freezing had no effect on the amounts of CoM determined in pure cultures. The method described here provides a quick and relatively simple way to estimate methanogenic biomass.
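A back-of-envelope use of the reported conversion factor (approximately 0.39 fmol of CoM per cell) to turn a CoM measurement into a biomass estimate; the helper function is illustrative only:

```python
COM_PER_CELL_FMOL = 0.39   # average cellular quota reported above (+/- 0.07)

def cells_from_com(com_pmol: float) -> float:
    """Estimate methanogen cell numbers from a CoM measurement (pmol)."""
    return com_pmol * 1e3 / COM_PER_CELL_FMOL   # 1 pmol = 1000 fmol

print(f"{cells_from_com(200.0):.2e} cells")     # 200 pmol CoM ~ 5.1e5 cells
```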
Microvolume Protein Concentration Determination using the NanoDrop 2000c Spectrophotometer
Desjardins, Philippe; Hansen, Joel B.; Allen, Michael
2009-01-01
Traditional spectrophotometry requires placing samples into cuvettes or capillaries. This is often impractical due to the limited sample volumes typically used for protein analysis. The Thermo Scientific NanoDrop 2000c Spectrophotometer solves this issue with an innovative sample retention system that holds microvolume samples between two measurement surfaces using the surface tension properties of liquids, enabling the quantification of samples in volumes as low as 0.5-2 μL. The elimination of cuvettes or capillaries allows real-time changes in path length, which reduces the measurement time while greatly increasing the dynamic range of protein concentrations that can be measured. The need for dilutions is also eliminated, and preparation for sample quantification is straightforward, as the measurement surfaces can simply be wiped with a laboratory wipe. This video article presents modifications to traditional protein concentration determination methods for quantification of microvolume amounts of protein using A280 absorbance readings or the BCA colorimetric assay. PMID:19890248
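The underlying calculation is ordinary Beer-Lambert scaling, with the instrument normalizing its variable path lengths to the conventional 1 cm. A Python sketch, using the commonly cited BSA 1% extinction coefficient as a stand-in (substitute the value appropriate to your protein):

```python
BSA_EXT_1PCT = 6.67   # A280 of a 1% (10 mg/mL) BSA solution at 1 cm path

def protein_mg_per_ml(a280: float, path_cm: float = 1.0,
                      ext_1pct: float = BSA_EXT_1PCT) -> float:
    """Beer-Lambert: concentration from A280, path length, and E(1%, 1 cm)."""
    return a280 / (ext_1pct * path_cm) * 10.0   # 1% w/v == 10 mg/mL

print(round(protein_mg_per_ml(0.667), 2))       # 1.0 mg/mL for BSA
```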
Metagenomics of rumen bacteriophage from thirteen lactating dairy cattle
2013-01-01
Background The bovine rumen hosts a diverse and complex community of Eukarya, Bacteria, Archaea and viruses (including bacteriophage). The rumen viral population (the rumen virome) has received little attention compared to the rumen microbial population (the rumen microbiome). We used massively parallel sequencing of virus-like particles to investigate the diversity of the rumen virome in thirteen lactating Australian Holstein dairy cattle, all housed in the same location, 12 of which were sampled on the same day. Results Fourteen putative viral sequence fragments over 30 Kbp in length were assembled and annotated. Many of the putative genes in the assembled contigs showed no homology to previously annotated genes, highlighting the large amount of work still required to fully annotate the functions encoded in viral genomes. The abundance of the contig sequences varied widely between animals, even though the cattle were of the same age and stage of lactation and were fed the same diets. Additionally, the twelve co-habited animals shared a number of their dominant viral contigs. We compared the functional characteristics of our bovine viromes with those of other viromes, as well as rumen microbiomes. At the functional level, we found strong similarities between all of the viral samples, which were highly distinct from the rumen microbiome samples. Conclusions Our findings suggest a large amount of between-animal variation in the bovine rumen virome and that co-habited animals may have more similar viromes than non-co-habited animals. We report the deepest sequencing to date of the rumen virome. This work highlights the enormous amount of novelty and variation present in the rumen virome. PMID:24180266
A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing
Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian
2016-01-01
Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve this high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowdsourced information provided by a large number of users as they walk through buildings as the source of location fingerprint data. Through the variation characteristics of users' smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions for the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees localization accuracy. The proposed method does not require users' explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
Kopsch, Thomas; Murnane, Darragh; Symons, Digby
2017-08-30
In dry powder inhalers (DPIs) the patient's inhalation manoeuvre strongly influences the release of drug. Drug release from a DPI may also be influenced by the size of any air bypass incorporated in the device: if the amount of bypass is high, less air flows through the entrainment geometry and the release rate is lower. In this study we propose to reduce the intra- and inter-patient variation of drug release by controlling the amount of air bypass in a DPI. A fast computational method is proposed that can predict how much bypass is needed to achieve a specified drug delivery rate for a particular patient. The method uses a meta-model constructed from multiphase computational fluid dynamic (CFD) simulations. The meta-model is applied in an optimization framework to predict the amount of bypass needed for drug delivery that matches a desired target release behaviour. The meta-model was successfully validated by comparing its predictions to results from additional CFD simulations. The optimization framework was applied to identify the optimal amount of bypass needed for fictitious sample inhalation manoeuvres in order to deliver a target powder release profile for two patients. Copyright © 2017 Elsevier B.V. All rights reserved.
Identifying Etiological Agents Causing Diarrhea in Low Income Ecuadorian Communities
Vasco, Gabriela; Trueba, Gabriel; Atherton, Richard; Calvopiña, Manuel; Cevallos, William; Andrade, Thamara; Eguiguren, Martha; Eisenberg, Joseph N. S.
2014-01-01
Continued success in decreasing diarrheal disease burden requires targeted interventions. To develop such interventions, it is crucial to understand which pathogens cause diarrhea. Using a case-control design, we tested stool samples, collected in both rural and urban Ecuador, for 15 pathogenic microorganisms. Pathogens were present in 51% of case and 27% of control samples from the urban community, and 62% of case and 18% of control samples collected from the rural community. Rotavirus and Shigellae were associated with diarrhea in the urban community; co-infections were more pathogenic than single infections; Campylobacter and Entamoeba histolytica were found in large numbers in both cases and controls; and non-typhi Salmonella and enteropathogenic Escherichia coli were not found in any samples. Consistent with the Global Enteric Multicenter Study, which focused on south Asia and sub-Saharan Africa, we found that in Ecuador a small group of pathogens accounted for a significant amount of the diarrheal disease burden. PMID:25048373
INVESTIGATION OF RESPONSE DIFFERENCES BETWEEN ...
Total organic carbon (TOC) and dissolved organic carbon (DOC) have long been used to estimate the amount of natural organic matter (NOM) found in raw and finished drinking water. In recent years, computer automation and improved instrumental analysis technologies have created a variety of TOC instrument systems. However, the amount of organic carbon (OC) measured in a sample has been found to depend upon the way a specific TOC instrument treats the sample and the way the OC is calculated and reported. Specifically, relative instrument response differences for TOC/DOC, ranging from 15 to 62%, were documented when five different source waters were each analyzed by five different TOC instrument systems operated according to the manufacturers' specifications. Problems and possible solutions for minimizing these differences are discussed. Establish optimum performance criteria for current TOC technologies for application to the Stage 2 D/DBP Rule. Develop a TOC and SUVA (incorporating DOC and UV254) method to be published in the Stage 2 D/DBP Rule that will meet requirements as stated in the Stage 1 D/DBP Rule (Revise Method 415.3,
Pereira, Thulio C; Conceição, Carlos A F; Khan, Alamgir; Fernandes, Raquel M T; Ferreira, Maira S; Marques, Edmar P; Marques, Aldaléa L B
2016-11-05
Microemulsions are thermodynamically stable systems of two immiscible liquids, one aqueous and the other of organic nature, with a surfactant and/or co-surfactant adsorbed at the interface between the two phases. Biodiesel-based microemulsions, consisting of alkyl esters of fatty acids, open a new means of analysis for the application of electroanalytical techniques, and are advantageous as they eliminate the pre-treatment of a sample that is otherwise required. In this work, the phase behaviours of biodiesel-based microemulsions were investigated through the electrochemical impedance spectroscopy (EIS) technique. We observed that an increase in the amount of biodiesel in the microemulsion formulation increases the resistance to charge transfer at the interface. Also, the electrical conductivity measurements revealed that a decrease or increase in the electrical properties depends on the amount of biodiesel. EIS studies of the biodiesel-based microemulsion samples showed the presence of two capacitive arcs: one at high frequency and the other at low frequency. Thus, the formulation of microemulsions plays an important role in estimating the electrical properties through the electrochemical impedance spectroscopy technique. Copyright © 2016 Elsevier B.V. All rights reserved.
Liu, Q; Wang, L; Willson, P; Babiuk, L A
2000-09-01
A competitive PCR (cPCR) assay was developed for monitoring porcine circovirus (PCV) DNA in serum samples from piglets. The cPCR was based on competitive coamplification of a 502- or 506-bp region of the PCV type 1 (PCV1) or PCV2 ORF2, respectively, with a known concentration of competitor DNA, which produced a 761- or 765-bp fragment, respectively. The cPCR was validated by quantification of a known amount of PCV wild-type plasmids. We also used this technique to determine PCV genome copy numbers in infected cells. Furthermore, we measured PCV DNA loads in clinical samples. More than 50% of clinically healthy piglets could harbor both types of PCV. While PCV1 was detected in only 3 of 16 pigs with postweaning multisystemic wasting syndrome (PMWS), all the sick piglets contained PCV2. A comparison of the PCV2 DNA loads of healthy and sick animals revealed a significant difference, indicating that the development of PMWS may require a certain amount of PCV2.
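The readout logic of a competitive PCR can be sketched as follows, assuming (as cPCR does) equal amplification efficiency for target and competitor; band intensities, which are mass-proportional, must be converted to molar amounts via product length. The helper function and the numbers are illustrative, with the PCV1 product sizes taken from the abstract:

```python
def target_copies(competitor_copies: float,
                  target_intensity: float, competitor_intensity: float,
                  target_bp: int = 502, competitor_bp: int = 761) -> float:
    """Estimate starting target copies from a cPCR band-intensity ratio."""
    # Band intensity scales with product mass; divide by length for molarity.
    molar_ratio = (target_intensity / target_bp) / (competitor_intensity / competitor_bp)
    return competitor_copies * molar_ratio

print(f"{target_copies(1e4, 8000, 5000):.2e}")   # ~2.4e4 copies
```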
Hexamethyldisilazane Removal with Mesoporous Materials Prepared from Calcium Fluoride Sludge.
Kao, Ching-Yang; Lin, Min-Fa; Nguyen, Nhat-Thien; Tsai, Hsiao-Hsin; Chang, Luh-Maan; Chen, Po-Han; Chang, Chang-Tang
2018-05-01
A large amount of calcium fluoride sludge is generated by the semiconductor industry every year, and treating VOCs with rotor concentrators and thermal oxidizers also requires a large amount of fuel. A mesoporous adsorbent prepared from calcium fluoride sludge was therefore used for VOC treatment. The semiconductor industry employs HMDS (hexamethyldisilazane) to promote the adhesion of photoresist to oxide layers; its adsorption leads to the formation of silicon dioxide, which blocks porous adsorbents. The adsorption of HMDS was tested with mesoporous silica materials synthesized from calcium fluoride sludge (CF-MCM). The resulting samples were characterized by XRD, XRF, FTIR and N2 adsorption-desorption techniques. The prepared samples possessed high specific surface area, large pore volume and large pore diameter. The crystal pattern of CF-MCM was similar to that of Mobil Composition of Matter No. 41 (MCM-41) in TEM images. The adsorption capacities of CF-MCM for HMDS were 40 and 80 mg g-1 under 100 and 500 ppm HMDS, respectively. The effects of operating parameters, such as contact time and mixture concentration, on the performance of CF-MCM are also discussed in this study.
Headspace Solid Phase Micro Extraction Gas Chromatographic Determination of Fenthion in Human Serum
Kasiotis, Konstantinos M.; Souki, Helen; Tsakirakis, Angelos N.; Carageorgiou, Haris; Theotokatos, Spiridon A.; Haroutounian, Serkos A.; Machera, Kyriaki
2008-01-01
A simple and effective analytical procedure was developed for the determination of fenthion residues in human serum samples. Sample treatment was performed using headspace solid-phase micro extraction with a polyacrylate fiber, which has the advantage of requiring only a small amount of serum (1 mL) without tedious pre-treatment. The quantification of fenthion was carried out by gas chromatography-mass spectrometry, and recoveries ranged from 79 to 104% at two spiking levels for 6 replicates. Detection and quantification limits were calculated as 1.51 and 4.54 ng/mL of serum, respectively. Two fenthion metabolites, fenoxon and fenthion sulfoxide, were also identified. PMID:19325792
Some thoughts on problems associated with various sampling media used for environmental monitoring
Horowitz, A.J.
1997-01-01
Modern analytical instrumentation is capable of measuring a variety of trace elements at concentrations down into the single or double digit parts-per-trillion (ng l-1) range. This holds for the three most common sample media currently used in environmental monitoring programs: filtered water, whole water and separated suspended sediment. Unfortunately, analytical capabilities have exceeded the current capacity to collect both uncontaminated and representative environmental samples. The success of any trace element monitoring program requires that this issue be both understood and addressed. The environmental monitoring of trace elements requires the collection of calendar- and event-based dissolved and suspended sediment samples. There are unique problems associated with the collection and chemical analysis of both types of sample media. Over the past 10 years, reported ambient dissolved trace element concentrations have declined. Generally, these decreases do not reflect better water quality, but rather improvements in the procedures used to collect, process, preserve and analyze these samples without contaminating them during these steps. Further, recent studies have shown that the currently accepted operational definition of dissolved constituents (material passing a 0.45 µm membrane filter) is inadequate owing to sampling and processing artifacts. The existence of these artifacts raises questions about the generation of accurate, precise and comparable 'dissolved' trace element data. Suspended sediment and associated trace elements can display marked short- and long-term spatial and temporal variability. This implies that spatially representative samples can only be obtained by generating composites using depth- and width-integrated sampling techniques. Additionally, temporal variations have led to the view that the determination of annual trace element fluxes may require nearly constant (e.g., high-frequency) sampling and subsequent chemical analyses. Ultimately, the sampling frequency for flux estimates depends on the time period of concern (daily, weekly, monthly, yearly) and the amount of acceptable error associated with these estimates.
Zhou, Ruokun; Li, Liang
2015-04-06
The effect of sample injection amount on metabolome analysis in a chemical isotope labeling (CIL) liquid chromatography-mass spectrometry (LC-MS) platform was investigated. The performance of time-of-flight (TOF) mass spectrometers with and without a high-dynamic-range (HD) detection system was compared in the analysis of (12)C2/(13)C2-dansyl labeled human urine samples. An average of 1635 ± 21 (n = 3) peak pairs or putative metabolites was detected using the HD-TOF-MS, compared to 1429 ± 37 peak pairs from a conventional or non-HD TOF-MS. In both instruments, signal saturation was observed. However, in the HD-TOF-MS, signal saturation was mainly caused by the ionization process, while in the non-HD TOF-MS, it was caused by the detection process. To extend the MS detection range in the non-HD TOF-MS, an automated switching from using (12)C to (13)C-natural abundance peaks for peak ratio calculation when the (12)C peaks are saturated has been implemented in IsoMS, a software tool for processing CIL LC-MS data. This work illustrates that injecting an optimal sample amount is important to maximize the metabolome coverage while avoiding the sample carryover problem often associated with over-injection. A TOF mass spectrometer with an enhanced detection dynamic range can also significantly increase the number of peak pairs detected. In chemical isotope labeling (CIL) LC-MS, relative metabolite quantification is done by measuring the peak ratio of a (13)C2-/(12)C2-labeled peak pair for a given metabolite present in two comparative samples. The dynamic range of peak ratio measurement does not need to be very large, as only subtle changes of metabolite concentrations are encountered in most metabolomic studies where relative metabolome quantification of different groups of samples is performed. However, the absolute concentrations of different metabolites can be very different, requiring a technique to provide a wide detection dynamic range to allow the detection of as many peak pairs as possible. In this work, we demonstrated that controlling the sample injection amount into LC-MS was critical to achieve the optimal detectability while avoiding sample carry-over problem. In addition, the use of a high-dynamic-range TOF system increased the number of peak pairs detected, compared to a conventional TOF system. We also investigated the ionization and detection saturation factors limiting the dynamic range of detection. This article is part of a Special Issue entitled: Protein dynamics in health and disease. Guest Editors: Pierre Thibault and Anne-Claude Gingras. Copyright © 2014 Elsevier B.V. All rights reserved.
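The saturation fallback described above, switching from the 12C-labeled peak to its 13C natural-abundance isotopologue, can be illustrated schematically. This Python fragment is not the IsoMS source code, and the M+1 fraction is a placeholder that in practice depends on the ion's carbon count:

```python
def peak_ratio(i12: float, i13: float, i12_m1: float = None,
               m1_fraction: float = 0.30, saturation: float = 4.0e6) -> float:
    """Ratio of 12C- to 13C-labeled peaks, reconstructing the 12C peak from
    its M+1 natural-abundance isotope peak when the detector saturates."""
    if i12 < saturation or i12_m1 is None:
        return i12 / i13
    return (i12_m1 / m1_fraction) / i13   # rebuild the saturated 12C intensity

print(peak_ratio(5.0e6, 2.0e6, i12_m1=1.5e6))   # 2.5, via the M+1 fallback
```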
Pavlou, Andrew T.; Ji, Wei; Brown, Forrest B.
2016-01-23
Here, a proper treatment of thermal neutron scattering requires accounting for chemical binding through a scattering law S(α,β,T). Monte Carlo codes sample the secondary neutron energy and angle after a thermal scattering event from probability tables generated from S(α,β,T) tables at discrete temperatures, requiring a large amount of data for multiscale and multiphysics problems with detailed temperature gradients. We have previously developed a method to handle this temperature dependence on-the-fly during the Monte Carlo random walk using polynomial expansions in 1/T to directly sample the secondary energy and angle. In this paper, the on-the-fly method is implemented into MCNP6 and tested in both graphite-moderated and light water-moderated systems. The on-the-fly method is compared with the thermal ACE libraries that come standard with MCNP6, yielding good agreement with integral reactor quantities like k-eigenvalue and differential quantities like single-scatter secondary energy and angle distributions. The simulation runtimes are comparable between the two methods (on the order of 5–15% difference for the problems tested) and the on-the-fly fit coefficients only require 5–15 MB of total data storage.
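Schematically, the on-the-fly evaluation amounts to reconstructing temperature-dependent sampling quantities from stored coefficients of a polynomial in 1/T at run time. The Python sketch below shows only that evaluation step; the coefficients are placeholders, not actual library data:

```python
def eval_inverse_T_poly(coeffs, temperature_K: float) -> float:
    """Evaluate sum_n c_n / T**n via Horner's scheme in x = 1/T.
    coeffs = [c0, c1, c2, ...] gives c0 + c1/T + c2/T**2 + ..."""
    x = 1.0 / temperature_K
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# Placeholder coefficients for one sampled quantity, evaluated at 600 K.
print(eval_inverse_T_poly([1.0e-3, 2.5, 300.0], 600.0))
```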
Griffiths, Ronald E.; Topping, David J.; Anderson, Robert S.; Hancock, Gregory S.; Melis, Theodore S.
2014-01-01
Management of sediment in rivers downstream from dams requires knowledge of both the sediment supply and downstream sediment transport. In some dam-regulated rivers, the amount of sediment supplied by easily measured major tributaries may overwhelm the amount of sediment supplied by the more difficult to measure lesser tributaries. In this first class of rivers, managers need only know the amount of sediment supplied by these major tributaries. However, in other regulated rivers, the cumulative amount of sediment supplied by the lesser tributaries may approach the total supplied by the major tributaries. The Colorado River downstream from Glen Canyon has been hypothesized to be one such river. If this is correct, then management of sediment in the Colorado River in the part of Glen Canyon National Recreation Area downstream from the dam and in Grand Canyon National Park may require knowledge of the sediment supply from all tributaries. Although two major tributaries, the Paria and Little Colorado Rivers, are well documented as the largest two suppliers of sediment to the Colorado River downstream from Glen Canyon Dam, the contributions of sediment supplied by the ephemeral lesser tributaries of the Colorado River in the lowermost Glen Canyon, and Marble and Grand Canyons are much less constrained. Previous studies have estimated amounts of sediment supplied by these tributaries ranging from very little to almost as much as the amount supplied by the Paria River. Because none of these previous studies relied on direct measurement of sediment transport in any of the ephemeral tributaries in Glen, Marble, or Grand Canyons, there may be significant errors in the magnitudes of sediment supplies estimated during these studies. To reduce the uncertainty in the sediment supply by better constraining the sediment yield of the ephemeral lesser tributaries, the U.S. Geological Survey Grand Canyon Monitoring and Research Center established eight sediment-monitoring gaging stations beginning in 2000 on the larger of the previously ungaged tributaries of the Colorado River downstream from Glen Canyon Dam. The sediment-monitoring gaging stations consist of a downward-looking stage sensor and passive suspended-sediment samplers. Two stations are equipped with automatic pump samplers to collect suspended-sediment samples during flood events. Directly measuring discharge and collecting suspended-sediment samples in these remote ephemeral streams during significant sediment-transporting events is nearly impossible; most significant run-off events are short-duration events (lasting minutes to hours) associated with summer thunderstorms. As the remote locations and short duration of these floods make it prohibitively expensive, if not impossible, to directly measure the discharge of water or collect traditional depth-integrated suspended-sediment samples, a method of calculating sediment loads was developed that includes documentation of stream stages by field instrumentation, modeling of discharges associated with these stages, and automatic suspended-sediment measurements. 
The approach developed is as follows: (1) survey and model flood high-water marks using a two-dimensional hydrodynamic model, (2) create a stage-discharge relation for each site by combining the modeled flood flows with the measured stage record, (3) calculate the discharge record for each site using the stage-discharge relation and the measured stage record, and (4) calculate the instantaneous and cumulative sediment loads using the discharge record and suspended-sediment concentrations measured from samples collected with passive US U-59 samplers and ISCO™ pump samplers. This paper presents the design of the gaging network and briefly describes the methods used to calculate discharge and sediment loads. The design and methods herein can easily be used at other remote locations where discharge and sediment loads are required.
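As a rough illustration of steps (3) and (4), the sketch below applies a fitted stage-discharge rating to a stage record and integrates the suspended-sediment load. The power-law rating form and the 0.0027 unit-conversion factor (ft3/s times mg/L to tons/day) are conventional assumptions for illustration, not values from this study.

```python
import numpy as np

# Minimal sketch of steps (3)-(4): convert a measured stage record to
# discharge via a fitted rating, then integrate suspended-sediment load.
# The power-law rating form and the 0.0027 unit factor (cfs x mg/L ->
# tons/day) are conventional assumptions, not values from this study.

def rating(stage_ft, a=25.0, b=1.8, stage0=0.2):
    """Power-law stage-discharge relation Q = a*(h - h0)**b, in ft3/s."""
    return a * np.clip(stage_ft - stage0, 0.0, None) ** b

stage = np.array([0.3, 0.9, 1.6, 1.1, 0.5])         # 15-min stage record, ft
conc = np.array([150., 2400., 8800., 3100., 600.])  # sampled SSC, mg/L

q = rating(stage)                      # instantaneous discharge, ft3/s
load_tons_day = 0.0027 * q * conc      # instantaneous load, tons/day
dt_days = 15.0 / (60 * 24)             # 15-min interval expressed in days
cumulative_tons = np.sum(load_tons_day * dt_days)
print(q.round(1), load_tons_day.round(1), round(cumulative_tons, 2))
```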
Cankar, Katarina; Štebih, Dejan; Dreo, Tanja; Žel, Jana; Gruden, Kristina
2006-01-01
Background Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs) quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Results Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was chosen as the primary criterion by which to evaluate the quality and performance of different matrixes and extraction techniques. The effect of PCR efficiency on the resulting GMO content is demonstrated. Conclusion The crucial influence of extraction technique and sample matrix properties on the results of GMO quantification is demonstrated. Appropriate extraction techniques for each matrix need to be determined to achieve accurate DNA quantification. Nevertheless, as it is shown that in the area of food and feed testing it is impossible to define a matrix with certain specificities, strict quality controls need to be introduced to monitor PCR. The results of our study are also applicable to other fields of quantitative testing by real-time PCR. PMID:16907967
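Because the abstract above singles out PCR efficiency as the crucial parameter, a brief sketch of how efficiency is commonly derived from a dilution series may help; the numbers are invented and the calculation is the textbook approach, not necessarily the authors' exact procedure.

```python
import numpy as np

# Sketch of the usual efficiency calculation from a dilution series:
# fit Cq against log10(template amount); efficiency E = 10**(-1/slope) - 1.
# Values are made up for illustration.

log_amount = np.log10([1e4, 1e3, 1e2, 1e1])   # copies per reaction
cq = np.array([21.1, 24.5, 27.9, 31.4])       # measured threshold cycles

slope, intercept = np.polyfit(log_amount, cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0
print(f"slope={slope:.2f}, efficiency={efficiency:.1%}")
# a slope near -3.32 corresponds to ~100% efficiency (perfect doubling)
```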
Color and Vector Flow Imaging in Parallel Ultrasound With Sub-Nyquist Sampling.
Madiena, Craig; Faurie, Julia; Poree, Jonathan; Garcia, Damien
2018-05-01
RF acquisition with a high-performance multichannel ultrasound system generates massive data sets in short periods of time, especially in "ultrafast" ultrasound when digital receive beamforming is required. Sampling at a rate four times the carrier frequency is the standard procedure since this rule complies with the Nyquist-Shannon sampling theorem and simplifies quadrature sampling. Bandpass sampling (or undersampling) outputs a bandpass signal at a rate lower than the maximal frequency without harmful aliasing. Advantages over Nyquist sampling are reduced storage volumes and data workflow, and simplified digital signal processing tasks. We used RF undersampling in color flow imaging (CFI) and vector flow imaging (VFI) to decrease data volume significantly (by a factor of 3 to 13 in our configurations). CFI and VFI with Nyquist and sub-Nyquist sampling were compared in vitro and in vivo. The estimate errors due to undersampling were small or marginal, which illustrates that Doppler and vector Doppler images can be correctly computed with a drastically reduced amount of RF samples. Undersampling can be a method of choice in CFI and VFI to avoid information overload and reduce data transfer and storage.
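A small numerical sketch of the rate-selection rule behind bandpass sampling: choosing fs = 4 fc / (2m + 1) aliases the carrier to fs/4, so quadrature processing still applies at a fraction of the data rate. The carrier frequency and factor here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Illustrative bandpass (under)sampling: choosing fs = 4*fc/(2*m + 1)
# aliases the carrier to fs/4, so quadrature sampling still works while
# cutting the data rate. Numbers are illustrative, not the paper's setup.

fc = 5.0e6                       # ultrasound carrier, 5 MHz (assumed)
m = 2
fs_nyq = 4 * fc                  # conventional rate: 20 MS/s
fs_sub = 4 * fc / (2 * m + 1)    # undersampled rate: 4 MS/s (factor 5 less)

t = np.arange(0, 20e-6, 1 / fs_sub)
rf = np.cos(2 * np.pi * fc * t)          # narrowband echo, carrier only
# After aliasing, the tone sits at fs_sub/4:
f_alias = abs(fc - round(fc / fs_sub) * fs_sub)
print(fs_sub, f_alias, fs_sub / 4)       # f_alias == fs_sub/4 == 1 MHz
```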
Rapid micro-scale proteolysis of proteins for MALDI-MS peptide mapping using immobilized trypsin
NASA Astrophysics Data System (ADS)
Gobom, Johan; Nordhoff, Eckhard; Ekman, Rolf; Roepstorff, Peter
1997-12-01
In this study we present a rapid method for tryptic digestion of proteins using micro-columns with enzyme immobilized on perfusion chromatography media. The performance of the method is exemplified with acyl-CoA-binding protein and reduced carbamidomethylated bovine serum albumin. The method proved to be significantly faster and yielded a better sequence coverage and an improved signal-to-noise ratio for the MALDI-MS peptide maps, compared to in-solution and on-target digestion. Only a single sample transfer step is required, and therefore sample loss due to adsorption to surfaces is reduced, which is a critical issue when handling low picomole to femtomole amounts of proteins. An example is shown with on-column proteolytic digestion and subsequent elution of the digest into a reversed-phase micro-column. This is useful if the sample contains large amounts of salt or is too diluted for MALDI-MS analysis. Furthermore, by step-wise elution from the reversed-phase column, a complex digest can be fractionated, which reduces signal suppression and facilitates data interpretation in the subsequent MS-analysis. The method also proved useful for consecutive digestions with enzymes of different cleavage specificity. This is exemplified with on-column tryptic digestion, followed by reversed-phase step-wise elution, and subsequent on-target V8 protease digestion.
Mas, Sergi; Crescenti, Anna; Gassó, Patricia; Vidal-Taboada, Jose M; Lafuente, Amalia
2007-08-01
As pharmacogenetic studies frequently require establishment of DNA banks containing large cohorts with multi-centric designs, inexpensive methods for collecting and storing high-quality DNA are needed. The aims of this study were two-fold: to compare the amount and quality of DNA obtained from two different DNA cards (IsoCode Cards or FTA Classic Cards, Whatman plc, Brentford, Middlesex, UK); and to evaluate the effects of time and storage temperature, as well as the influence of anticoagulant ethylenediaminetetraacetic acid on the DNA elution procedure. The samples were genotyped by several methods typically used in pharmacogenetic studies: multiplex PCR, PCR-restriction fragment length polymorphism, single nucleotide primer extension, and allelic discrimination assay. In addition, they were amplified by whole genome amplification to increase genomic DNA mass. Time, storage temperature and ethylenediaminetetraacetic acid had no significant effects on either DNA card. This study reveals the importance of drying blood spots prior to isolation to avoid haemoglobin interference. Moreover, our results demonstrate that re-isolation protocols could be applied to increase the amount of DNA recovered. The samples analysed were accurately genotyped with all the methods examined herein. In conclusion, our study shows that both DNA cards, IsoCode Cards and FTA Classic Cards, facilitate genetic and pharmacogenetic testing for routine clinical practice.
Wotring, Virginia E
2016-01-01
Medications degrade over time, and degradation is hastened by extreme storage conditions. Current procedures ensure that medications aboard the International Space Station (ISS) are restocked before their expiration dates, but resupply may not be possible on future long-duration exploration missions. For this reason, medications stored on the ISS were returned to Earth for analysis. This was an opportunistic, observational pilot-scale investigation to test the hypothesis that ISS-aging does not cause unusual degradation. Nine medications were analyzed for active pharmaceutical ingredient (API) content and degradant amounts; results were compared to 2012 United States Pharmacopeia (USP) requirements. The medications were two sleep aids, two antihistamines/decongestants, three pain relievers, an antidiarrheal, and an alertness medication. Because the samples were obtained opportunistically from unused medical supplies, each medication was available at only 1 time point and no control samples (samples aged for a similar period on Earth) were available. One medication met USP requirements 5 months after its expiration date. Four of the nine medications tested (44%) met USP requirements 8 months post expiration. Another three medications (33%) met USP guidelines 2-3 months before expiration. One compound, a dietary supplement used as a sleep aid, failed to meet USP requirements at 11 months post expiration. No unusual degradation products were identified. Limited, evidence-based extension of medication shelf-lives may be possible and would be useful in preparation for lengthy exploration missions. Only analysis of flight-aged samples compared to appropriately matched ground controls will permit determination of the effect of the spaceflight environment on medication stability.
Elmer-Dixon, Margaret M; Bowler, Bruce E
2018-05-19
A novel approach to quantify mixed lipid systems is described. Traditional approaches to lipid vesicle quantification are time consuming, require large amounts of material and are destructive. We extend our recently described method for quantification of pure lipid systems to mixed lipid systems. The method only requires a UV-Vis spectrometer and does not destroy the sample. Mie scattering data from absorbance measurements are used as input into a Matlab program to calculate the total vesicle concentration and the concentrations of each lipid in the mixed lipid system. The technique is fast and accurate, which is essential for analytical lipid binding experiments. Copyright © 2018. Published by Elsevier Inc.
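The published analysis computes Mie scattering in a Matlab program; as a greatly simplified stand-in, the sketch below assumes the measured turbidity spectrum of a mixed-vesicle sample is a linear combination of known per-lipid reference spectra and recovers the two concentrations by least squares. All spectra and values are hypothetical.

```python
import numpy as np

# Greatly simplified stand-in for the Matlab analysis: if the turbidity
# spectrum of a mixed vesicle sample were a linear combination of known
# per-lipid reference spectra (an assumption; the real method computes
# Mie scattering), concentrations follow from least squares.

wavelengths = np.arange(300, 401, 25)              # nm, illustrative grid
ref_lipid_a = 1e-3 * (wavelengths / 300.0) ** -4.0   # per-uM spectra,
ref_lipid_b = 2e-3 * (wavelengths / 300.0) ** -2.5   # both made up

A = np.column_stack([ref_lipid_a, ref_lipid_b])
true_c = np.array([50.0, 20.0])                    # uM, hypothetical
measured = A @ true_c + 1e-4                       # plus a small offset

c_est, *_ = np.linalg.lstsq(A, measured, rcond=None)
print(c_est)   # recovers approximately [50, 20]
```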
Model-Based Design of Long-Distance Tracer Transport Experiments in Plants.
Bühler, Jonas; von Lieres, Eric; Huber, Gregor J
2018-01-01
Studies of long-distance transport of tracer isotopes in plants offer a high potential for functional phenotyping, but so far measurement time is a bottleneck because continuous time series of at least 1 h are required to obtain reliable estimates of transport properties. Hence, usual throughput values are between 0.5 and 1 samples h−1. Here, we propose to increase sample throughput by introducing temporal gaps in the data acquisition of each plant sample and measuring multiple plants one after another in a rotating scheme. In contrast to common time series analysis methods, mechanistic tracer transport models allow the analysis of interrupted time series. The uncertainties of the model parameter estimates are used as a measure of how much information was lost compared to complete time series. A case study was set up to systematically investigate different experimental schedules for different throughput scenarios ranging from 1 to 12 samples h−1. Selected designs with only a small amount of data points were found to be sufficient for an adequate parameter estimation, implying that the presented approach enables a substantial increase of sample throughput. The presented general framework for automated generation and evaluation of experimental schedules allows the determination of a maximal sample throughput and the respective optimal measurement schedule depending on the required statistical reliability of data acquired by future experiments.
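A toy sketch of the rotating acquisition idea: several plants share one detector and are measured in turn, so each plant yields an interrupted time series with regular gaps. The slot length and counts are illustrative; the paper's framework optimizes such schedules against model-based parameter uncertainty.

```python
# Sketch of a rotating acquisition schedule: n_plants share one detector;
# each is measured for `slot_min` minutes in turn, giving each plant an
# interrupted time series with gaps of (n_plants - 1) * slot_min minutes.
# Purely illustrative; the paper optimizes such schedules model-based.

def rotating_schedule(n_plants, slot_min, total_min):
    """Return {plant: [(start, end), ...]} measurement windows in minutes."""
    windows = {p: [] for p in range(n_plants)}
    t = 0
    while t + slot_min <= total_min:
        plant = (t // slot_min) % n_plants
        windows[plant].append((t, t + slot_min))
        t += slot_min
    return windows

print(rotating_schedule(n_plants=4, slot_min=5, total_min=60))
# each plant gets 3 five-minute windows per hour -> 4 samples/h throughput
```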
Improved Detection Technique for Solvent Rinse Cleanliness Verification
NASA Technical Reports Server (NTRS)
Hornung, S. D.; Beeson, H. D.
2001-01-01
The NASA White Sands Test Facility (WSTF) has an ongoing effort to reduce or eliminate usage of cleaning solvents such as CFC-113 and its replacements. These solvents are used in the final clean and cleanliness verification processes for flight and ground support hardware, especially for oxygen systems where organic contaminants can pose an ignition hazard. For the final cleanliness verification in the standard process, the equivalent of one square foot of surface area of parts is rinsed with the solvent, and the final 100 mL of the rinse is captured. The amount of nonvolatile residue (NVR) in the solvent is determined by weight after the evaporation of the solvent. An improved process of sampling this rinse, developed at WSTF, requires evaporation of less than 2 mL of the solvent to make the cleanliness verification. Small amounts of the solvent are evaporated in a clean stainless steel cup, and the cleanliness of the stainless steel cup is measured using a commercially available surface quality monitor. The effectiveness of this new cleanliness verification technique was compared to the accepted NVR sampling procedures. Testing with known contaminants in solution, such as hydraulic fluid, fluorinated lubricants, and cutting and lubricating oils, was performed to establish a correlation between amount in solution and the process response. This report presents the approach and results and discusses the issues in establishing the surface quality monitor-based cleanliness verification.
Lim, Chi Young; Kim, Jung-Yeon; Yoon, Mi-Jin; Chang, Hang Seok; Park, Cheong Soo; Chung, Woong Youn
2015-07-01
Radioiodine ablation therapy is required for patients who undergo a total thyroidectomy. Through a comparative review of a low iodine diet (LID) and a restricted iodine diet (RID), the study aims to suggest guidelines that are suitable for the conditions of Korea. The study was conducted with 101 patients. With 24-hour urine samples from the patients after a 2-week restricted diet and after a 4-week restricted diet, the amount of iodine in the urine was estimated. The radioiodine uptake at 2 hours and at 24 hours was calculated. This study was conducted with 47 LID patients and 54 RID patients. The urinary iodine amounts after the 2-week and the 4-week restrictions showed no significant differences in either group. The amounts of iodine in urine for the two groups were both within the range of the criteria for radioiodine ablation therapy. Also, the radioiodine uptake at 2 hours and 24 hours measured after the 4-week restrictive diet did not show statistical differences between the two groups. A 2-week RID can therefore be considered adequate preparation for radioiodine ablation therapy after patients undergo a total thyroidectomy.
Assessment of hydrocarbon source rock potential of Polish bituminous coals and carbonaceous shales
Kotarba, M.J.; Clayton, J.L.; Rice, D.D.; Wagner, M.
2002-01-01
We analyzed 40 coal samples and 45 carbonaceous shale samples of varying thermal maturity (vitrinite reflectance 0.59% to 4.28%) from the Upper Carboniferous coal-bearing strata of the Upper Silesian, Lower Silesian, and Lublin basins, Poland, to evaluate their potential for generation and expulsion of gaseous and liquid hydrocarbons. We evaluated source rock potential based on Rock-Eval pyrolysis yield, elemental composition (atomic H/C and O/C), and solvent extraction yields of bitumen. An attempt was made to relate maceral composition to these source rock parameters and to composition of the organic matter and likely biological precursors. A few carbonaceous shale samples contain sufficient generation potential (pyrolysis assay and elemental composition) to be considered potential source rocks, although the extractable hydrocarbon and bitumen yields are lower than those reported in previous studies for effective Type III source rocks. Most samples analysed contain insufficient capacity for generation of hydrocarbons to reach thresholds required for expulsion (primary migration) to occur. In view of these findings, it is improbable that any of the coals or carbonaceous shales at the sites sampled in our study would be capable of expelling commercial amounts of oil. Inasmuch as a few samples contained sufficient generation capacity to be considered potential source rocks, it is possible that some locations or stratigraphic zones within the coals and shales could have favourable potential, but could not be clearly delimited with the number of samples analysed in our study. Because of their high heteroatomic content and high amount of asphaltenes, the bitumens contained in the coals are less capable of generating hydrocarbons even under optimal thermal conditions than their counterpart bitumens in the shales which have a lower heteroatomic content. Published by Elsevier Science B.V.
Sabatini, Francesca; Lluveras-Tenorio, Anna; Degano, Ilaria; Kuckova, Stepanka; Krizova, Iva; Colombini, Maria Perla
2016-11-01
This study deals with the identification of anthraquinoid molecular markers in standard dyes, reference lakes, and paint model systems using a micro-invasive and nondestructive technique such as matrix-assisted laser desorption/ionization time-of-flight-mass spectrometry (MALDI-ToF-MS). Red anthraquinoid lakes, such as madder lake, carmine lake, and Indian lac, have been the most widely used for painting purposes since ancient times. From an analytical point of view, identifying lakes in paint samples is challenging, and developing methods that maximize the information achievable while minimizing the amount of sample needed is of paramount importance. The employed method was tested on less than 0.5 mg of reference samples and required a minimal sample preparation, entailing a hydrofluoric acid extraction. The method is fast and versatile because of the possibility to re-analyze the same sample (once it has been spotted on the steel plate), testing both positive and negative modes in a few minutes. The MALDI mass spectra collected in the two analysis modes were studied and compared with LDI and simulated mass spectra in order to highlight the peculiar behavior of the anthraquinones in the MALDI process. Both ionization modes were assessed for each species. The effect of the different paint binders on dye identification was also evaluated through the analyses of paint model systems. In the end, the method was successful in detecting madder lake in archeological samples from Greek wall paintings and on an Italian funerary clay vessel, demonstrating its capabilities to identify dyes in small amounts of highly degraded samples.
Alkaline thermal sludge hydrolysis.
Neyens, E; Baeyens, J; Creemers, C
2003-02-28
The waste activated sludge (WAS) treatment of wastewater produces excess sludge which needs further treatment prior to disposal or incineration. A reduction in the amount of excess sludge produced, and the increased dewaterability of the sludge are, therefore, the subject of renewed attention and research. Much research covers the nature of the sludge solids and associated water. An improved dewaterability requires the disruption of the sludge cell structure. Previous investigations are reviewed in the paper. Thermal hydrolysis is recognized as having the best potential to meet the objectives, and acid thermal hydrolysis is most frequently used, despite its serious drawbacks (corrosion, required post-neutralization, solubilization of heavy metals and phosphates, etc.). Alkaline thermal hydrolysis has been studied to a lesser extent, and is the subject of the detailed laboratory-scale research reported in this paper. After assessing the effect of monovalent/divalent cations (respectively, K(+)/Na(+) and Ca(2+)/Mg(2+)) on the sludge dewaterability, only the use of Ca(2+) appears to offer the best solution. The lesser effects of K(+), Na(+) and Mg(2+) confirm previous experimental findings. As a result of the experimental investigations, it can be concluded that alkaline thermal hydrolysis using Ca(OH)(2) is efficient in reducing the residual sludge amounts and in improving the dewaterability. The objectives are fully met at a temperature of 100 degrees C, at a pH of approximately 10 and for a 60-min reaction time, where moreover all pathogens are killed. Under these optimum conditions, the rate of mechanical dewatering increases (the capillary suction time (CST) value is decreased from approximately 34 s for the initial untreated sample to approximately 22 s for the hydrolyzed sludge sample) and the amount of DS to be dewatered is reduced to approximately 60% of the initial untreated amount. The DS-content of the dewatered cake will be increased from 28% (untreated) to 46%. Finally, the mass and energy balances of a wastewater treatment plant with/without advanced sludge treatment (AST) are compared. The data clearly illustrate the benefits of using an alkaline AST-step in the system.
A technique for estimating seed production of common moist soil plants
Laubhan, Murray K.
1992-01-01
Seeds of native herbaceous vegetation adapted to germination in hydric soils (i.e., moist-soil plants) provide waterfowl with nutritional resources including essential amino acids, vitamins, and minerals that occur only in small amounts or are absent in other foods. These elements are essential for waterfowl to successfully complete aspects of the annual cycle such as molt and reproduction. Moist-soil vegetation also has the advantages of consistent production of foods across years with varying water availability, low management costs, high tolerance to diverse environmental conditions, and low deterioration rates of seeds after flooding. The amount of seed produced differs among plant species and varies annually depending on environmental conditions and management practices. Further, many moist-soil impoundments contain diverse vegetation, and seed production by a particular plant species usually is not uniform across an entire unit. Consequently, estimating total seed production within an impoundment is extremely difficult. The chemical composition of seeds also varies among plant species. For example, beggartick seeds contain high amounts of protein but only an intermediate amount of minerals. In contrast, barnyardgrass is a good source of minerals but is low in protein. Because of these differences, it is necessary to know the amount of seed produced by each plant species if the nutritional resources provided in an impoundment are to be estimated. The following technique for estimating seed production takes into account the variation resulting from different environmental conditions and management practices as well as differences in the amount of seed produced by various plant species. The technique was developed to provide resource managers with the ability to make quick and reliable estimates of seed production. Although on-site information must be collected, the amount of field time required is small (i.e., about 1 min per sample); sampling normally is accomplished on an area within a few days. Estimates of seed production derived with this technique are used, in combination with other available information, to determine the potential number of waterfowl use-days available and to evaluate the effects of various management strategies on a particular site.
Koopmans, M.P.; Rijpstra, W.I.C.; Klapwijk, M.M.; De Leeuw, J. W.; Lewan, M.D.; Sinninghe, Damste J.S.
1999-01-01
A thermal and chemical degradation approach was followed to determine the precursors of pristane (Pr) and phytane (Ph) in samples from the Gessoso-solfifera, Ghareb and Green River Formations. Hydrous pyrolysis of these samples yields large amounts of Pr and Ph carbon skeletons, indicating that their precursors are predominantly sequestered in high-molecular-weight fractions. However, chemical degradation of the polar fraction and the kerogen of the unheated samples generally does not release large amounts of Pr and Ph. Additional information on the precursors of Pr and Ph is obtained from flash pyrolysis analyses of kerogens and residues after hydrous pyrolysis and after chemical degradation. Multiple precursors for Pr and Ph are recognised in these three samples. The main increase of the Pr/Ph ratio with increasing maturation temperature, which is associated with strongly increasing amounts of Pr and Ph, is probably due to the higher amount of precursors of Pr compared to Ph, and not to the different timing of generation of Pr and Ph.
Scadding, Cameron J; Watling, R John; Thomas, Allen G
2005-08-15
The majority of crimes result in the generation of some form of physical evidence, which is available for collection by crime scene investigators or police. However, this debris is often limited in amount as modern criminals become more aware of its potential value to forensic scientists. The requirement to obtain robust evidence from increasingly smaller sized samples has required refinement and modification of old analytical techniques and the development of new ones. This paper describes a new method for the analysis of oxy-acetylene debris, left behind at a crime scene, and the establishment of its co-provenance with single particles of equivalent debris found on the clothing of persons of interest (POI). The ability to rapidly determine and match the elemental distribution patterns of debris collected from crime scenes to those recovered from persons of interest is essential in ensuring successful prosecution. Traditionally, relatively large amounts of sample (up to several milligrams) have been required to obtain a reliable elemental fingerprint of this type of material [R.J. Watling, B.F. Lynch, D. Herring, J. Anal. At. Spectrom. 12 (1997) 195]. However, this quantity of material is unlikely to be recovered from a POI. This paper describes the development and application of laser ablation inductively coupled plasma time of flight mass spectrometry (LA-ICP-TOF-MS) as an analytical protocol, which can be applied more appropriately to the analysis of micro-debris than conventional quadrupole based mass spectrometry. The resulting data, for debris as small as 70 µm in diameter, was unambiguously matched between a single spherule recovered from a POI and a spherule recovered from the scene of crime, in an analytical procedure taking less than 5 min.
A study of the additional costs of dispensing workers' compensation prescriptions.
Schafermeyer, Kenneth W
2007-03-01
Although there is a significant amount of additional work involved in dispensing workers' compensation prescriptions, these costs have not been quantified. A study of the additional costs to dispense a workers' compensation prescription is needed to measure actual costs and to help determine the reasonableness of reimbursement for prescriptions dispensed under workers' compensation programs. The purpose of this study was to determine the minimum additional time and costs required to dispense workers' compensation prescriptions in Texas. A convenience sample of 30 store-level pharmacy staff members involved in submitting and processing prescription claims for the Texas Mutual workers' compensation program were interviewed by telephone. Data collected to determine the additional costs of dispensing a workers' compensation prescription included (1) the amount of additional time and personnel costs required to dispense and process an average workers' compensation prescription claim, (2) the difference in time required for a new versus a refilled prescription, (3) overhead costs for processing workers' compensation prescription claims by experienced experts at a central processing facility, (4) carrying costs for workers' compensation accounts receivable, and (5) bad debts due to uncollectible workers' compensation claims. The median of the sample pharmacies' additional costs for dispensing a workers' compensation prescription was estimated to be at least $9.86 greater than for a cash prescription. This study shows that the estimated costs for workers' compensation prescriptions were significantly higher than for cash prescriptions. These costs are probably much more than most employers, workers' compensation payers, and pharmacy managers would expect. It is recommended that pharmacy managers should estimate their own costs and compare these costs to actual reimbursement when considering the reasonableness of workers' compensation prescriptions and whether to accept these prescriptions.
NASA Astrophysics Data System (ADS)
ten Veldhuis, Marie-Claire; Schleiss, Marc
2017-04-01
In this study, we introduced an alternative approach for analysis of hydrological flow time series, using an adaptive sampling framework based on inter-amount times (IATs). The main difference with conventional flow time series is the rate at which low and high flows are sampled: the unit of analysis for IATs is a fixed flow amount, instead of a fixed time window. We analysed statistical distributions of flows and IATs across a wide range of sampling scales to investigate sensitivity of statistical properties such as quantiles, variance, skewness, scaling parameters and flashiness indicators to the sampling scale. We did this based on streamflow time series for 17 (semi)urbanised basins in North Carolina, US, ranging from 13 km2 to 238 km2 in size. Results showed that adaptive sampling of flow time series based on inter-amounts leads to a more balanced representation of low flow and peak flow values in the statistical distribution. While conventional sampling gives a lot of weight to low flows, as these are most ubiquitous in flow time series, IAT sampling gives relatively more weight to high flow values, when given flow amounts are accumulated in shorter time. As a consequence, IAT sampling gives more information about the tail of the distribution associated with high flows, while conventional sampling gives relatively more information about low flow periods. We will present results of statistical analyses across a range of subdaily to seasonal scales and will highlight some interesting insights that can be derived from IAT statistics with respect to basin flashiness and the impact of urbanisation on hydrological response.
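A minimal sketch of IAT sampling as described above: accumulate flow volume over time and record how long each fixed volume increment takes, so floods produce many short inter-amount times and baseflow produces few long ones. The synthetic hydrograph and increment size are assumptions.

```python
import numpy as np

# Sketch of inter-amount time (IAT) sampling: accumulate flow volume and
# record the time needed for each fixed increment dV. High flows yield
# short IATs, low flows long ones. Data here are synthetic.

def inter_amount_times(t, q, dV):
    """t: times (h), q: flow (m3/h); returns IATs (h), one per increment dV."""
    vol = np.concatenate([[0.0],
                          np.cumsum(0.5 * (q[1:] + q[:-1]) * np.diff(t))])
    targets = np.arange(dV, vol[-1], dV)
    crossing_times = np.interp(targets, vol, t)   # when each dV is reached
    return np.diff(np.concatenate([[t[0]], crossing_times]))

t = np.linspace(0, 48, 97)                         # 48 h, 30-min steps
q = 50 + 450 * np.exp(-0.5 * ((t - 20) / 3) ** 2)  # baseflow + one flood peak
iats = inter_amount_times(t, q, dV=500.0)
print(iats.min(), iats.max())  # short IATs during the flood, long in baseflow
```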
Santas, Jonathan; Guzmán, Yeimmy J; Guardiola, Francesc; Rafecas, Magdalena; Bou, Ricard
2014-11-01
A fluorometric method for the determination of hydroperoxides (HP) in edible oils and fats using the reagent diphenyl-1-pyrenylphosphine (DPPP) was developed and validated. Two solvent media containing 100% butanol or a mixture of chloroform/methanol (2:1, v/v) can be used to solubilise lipid samples. Regardless of the solvent used to solubilise the sample, the DPPP method was precise, accurate, sensitive and easy to perform. The HP content of 43 oil and fat samples was determined and the results were compared with those obtained by means of the AOCS Official Method for the determination of peroxide value (PV) and the ferrous oxidation-xylenol orange (FOX) method. The proposed method not only correlates well with the PV and FOX methods, but also presents some advantages such as requiring low sample and solvent amounts and being suitable for high-throughput sample analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.
Quantization noise in digital speech. M.S. Thesis- Houston Univ.
NASA Technical Reports Server (NTRS)
Schmidt, O. L.
1972-01-01
The amount of quantization noise generated in a digital-to-analog converter is dependent on the number of bits or quantization levels used to digitize the analog signal in the analog-to-digital converter. The minimum number of quantization levels and the minimum sample rate were derived for a digital voice channel. A sample rate of 6000 samples per second and lowpass filters with a 3 dB cutoff of 2400 Hz are required for 100 percent sentence intelligibility. Consonant sounds are the first speech components to be degraded by quantization noise. A compression amplifier can be used to increase the weighting of the consonant sound amplitudes in the analog-to-digital converter. An expansion network must be installed at the output of the digital-to-analog converter to restore the original weighting of the consonant sounds. This technique results in 100 percent sentence intelligibility for a sample rate of 5000 samples per second, eight quantization levels, and lowpass filters with a 3 dB cutoff of 2000 Hz.
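The compression amplifier and expansion network described above form a compander. As an illustration, the sketch below uses mu-law compression (a textbook choice, assumed here; the thesis does not necessarily use this exact law) to show how companding preserves a quiet, consonant-like signal through a coarse 8-level quantizer.

```python
import numpy as np

# Companding sketch: compress before quantizing so low-amplitude consonant
# sounds use more of the quantizer's range, then expand after the DAC.
# mu-law is used as the textbook compression law; the thesis does not
# necessarily use this exact law.

MU = 255.0

def compress(x):            # x in [-1, 1]
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def expand(y):
    return np.sign(y) * ((1 + MU) ** np.abs(y) - 1) / MU

def quantize(y, levels=8):  # coarse uniform quantizer on [-1, 1]
    step = 2.0 / levels
    return np.clip(np.round(y / step) * step, -1 + step / 2, 1 - step / 2)

x = 0.02 * np.sin(np.linspace(0, 2 * np.pi, 8))  # quiet consonant-like tone
plain = quantize(x)                      # 8 levels alone: rounds to zero
companded = expand(quantize(compress(x)))
print(plain, companded.round(4), sep="\n")
```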
Anthrax Sampling and Decontamination: Technology Trade-Offs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, Phillip N.; Hamachi, Kristina; McWilliams, Jennifer
2008-09-12
The goal of this project was to answer the following questions concerning response to a future anthrax release (or suspected release) in a building: 1. Based on past experience, what rules of thumb can be determined concerning: (a) the amount of sampling that may be needed to determine the extent of contamination within a given building; (b) what portions of a building should be sampled; (c) the cost per square foot to decontaminate a given type of building using a given method; (d) the time required to prepare for, and perform, decontamination; (e) the effectiveness of a given decontamination method in a given type of building? 2. Based on past experience, what resources will be spent on evaluating the extent of contamination, performing decontamination, and assessing the effectiveness of the decontamination in a building of a given type and size? 3. What are the trade-offs between cost, time, and effectiveness for the various sampling plans, sampling methods, and decontamination methods that have been used in the past?
Enhanced sampling techniques in biomolecular simulations.
Spiwok, Vojtech; Sucur, Zoran; Hosek, Petr
2015-11-01
Biomolecular simulations are routinely used in biochemistry and molecular biology research; however, they often fail to match expectations of their impact on pharmaceutical and biotech industry. This is caused by the fact that a vast amount of computer time is required to simulate short episodes from the life of biomolecules. Several approaches have been developed to overcome this obstacle, including application of massively parallel and special purpose computers or non-conventional hardware. Methodological approaches are represented by coarse-grained models and enhanced sampling techniques. These techniques can show how the studied system behaves in long time-scales on the basis of relatively short simulations. This review presents an overview of new simulation approaches, the theory behind enhanced sampling methods and success stories of their applications with a direct impact on biotechnology or drug design. Copyright © 2014 Elsevier Inc. All rights reserved.
Volcanic rocks and the geologic history of Mars
NASA Technical Reports Server (NTRS)
Salpas, P. A.
1988-01-01
A number of questions exist regarding the geology of Mars which can be addressed by the proposed Mars rover-sample return mission. The use of a rover during the proposed mission greatly enhances the ability to investigate multiple aspects of Martian geology and geological history. Attempting to address all of the important questions may dilute the amount of information that can be obtained regarding each question and may result in no satisfactory answers. Prioritization is essential to a successful mission. The task of setting priorities is simplified somewhat when it is considered that answers to some of these questions do not require taking samples, and that for some questions, sample location is not as important as for others. The surface of Mars presents two distinct terrains, both of which have the potential to contain valuable information regarding the composition of Mars.
45 CFR 150.323 - Determining the amount of penalty-other matters as justice may require.
Code of Federal Regulations, 2010 CFR
2010-10-01
... REQUIREMENTS RELATING TO HEALTH CARE ACCESS CMS ENFORCEMENT IN GROUP AND INDIVIDUAL INSURANCE MARKETS CMS... Determining the amount of penalty—other matters as justice may require. CMS may take into account other...
Basic quantitative polymerase chain reaction using real-time fluorescence measurements.
Ares, Manuel
2014-10-01
This protocol uses quantitative polymerase chain reaction (qPCR) to measure the number of DNA molecules containing a specific contiguous sequence in a sample of interest (e.g., genomic DNA or cDNA generated by reverse transcription). The sample is subjected to fluorescence-based PCR amplification and, theoretically, during each cycle, two new duplex DNA molecules are produced for each duplex DNA molecule present in the sample. The progress of the reaction during PCR is evaluated by measuring the fluorescence of dsDNA-dye complexes in real time. In the early cycles, DNA duplication is not detected because inadequate amounts of DNA are made. At a certain threshold cycle, DNA-dye complexes double each cycle for 8-10 cycles, until the DNA concentration becomes so high and the primer concentration so low that the reassociation of the product strands blocks efficient synthesis of new DNA and the reaction plateaus. There are two types of measurements: (1) the relative change of the target sequence compared to a reference sequence and (2) the determination of molecule number in the starting sample. The first requires a reference sequence, and the second requires a sample of the target sequence with known numbers of the molecules of sequence to generate a standard curve. By identifying the threshold cycle at which a sample first begins to accumulate DNA-dye complexes exponentially, an estimation of the numbers of starting molecules in the sample can be extrapolated. © 2014 Cold Spring Harbor Laboratory Press.
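A short sketch of the second measurement type, absolute quantification: fit a standard curve of threshold cycle against log copy number, then read an unknown sample's starting copies off its threshold cycle. All values are illustrative.

```python
import numpy as np

# Sketch of absolute quantification: fit a standard curve from samples of
# known copy number, then read an unknown's starting copies off its
# threshold cycle. All numbers are illustrative.

known_copies = np.array([1e6, 1e5, 1e4, 1e3])
cq_std = np.array([17.0, 20.4, 23.8, 27.2])      # measured threshold cycles

slope, intercept = np.polyfit(np.log10(known_copies), cq_std, 1)

def copies_from_cq(cq):
    """Invert the standard curve: cq = slope*log10(copies) + intercept."""
    return 10 ** ((cq - intercept) / slope)

print(f"{copies_from_cq(22.0):.3g} starting copies")  # ~3.4e4
```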
Characterization and structural properties of iron in plants.
NASA Astrophysics Data System (ADS)
Dewanamuni, Udya; Dehipawala, Sunil; Gafney, Harry
Iron is one of the most abundant metals in the soil and occurs in a wide range of chemical forms. Humans receive iron through either meat products or plants. Non-meat eaters depend on plant products for their daily iron requirement. The iron absorption by plants depends on other minerals present in the soil and on the soil pH value. The amount of iron present in plants grown with different soil compositions was investigated using X-ray absorption spectroscopy (XAS) and Mossbauer spectroscopy. Based on the X-ray absorption data, the amount of iron in plants varies significantly with soil pH value. The Mossbauer spectroscopy reveals that iron present in the samples has the form Fe3+ or an electron density at the site of the iron nucleus similar to that of Fe3+. CUNY Research Scholar Program, MSEIP.
Final report of the key comparison CCQM-K98: Pb isotope amount ratios in bronze
NASA Astrophysics Data System (ADS)
Vogl, Jochen; Yim, Yong-Hyeon; Lee, Kyoung-Seok; Goenaga-Infante, Heidi; Malinowskiy, Dmitriy; Ren, Tongxiang; Wang, Jun; Vocke, Robert D., Jr.; Murphy, Karen; Nonose, Naoko; Rienitz, Olaf; Noordmann, Janine; Näykki, Teemu; Sara-Aho, Timo; Ari, Betül; Cankur, Oktay
2014-01-01
Isotope amount ratios are proving useful in an ever increasing array of applications that range from studies unravelling transport processes, to pinpointing the provenance of specific samples, as well as trace element quantification by using isotope dilution mass spectrometry (IDMS). These expanding applications encompass fields as diverse as archaeology, food chemistry, forensic science, geochemistry, medicine and metrology. However, to be effective tools, the isotope ratio data must be reliable and traceable to enable the comparability of measurement results. The importance of traceability and comparability in isotope ratio analysis has already been recognized by the Inorganic Analysis Working Group (IAWG) within the CCQM. While the requirements for isotope ratio accuracy and precision in the case of IDMS are generally quite modest, 'absolute' Pb isotope ratio measurements for geochemical applications as well as forensic provenance studies require Pb isotope ratio measurements of the highest quality. To support present and future CMCs on isotope ratio determinations, a key comparison was urgently needed and therefore initiated at the IAWG meeting in Paris in April 2011. The analytical task within such a comparison was decided to be the measurement of Pb isotope amount ratios in water and bronze. Measuring Pb isotope amount ratios in an aqueous Pb solution tested the ability of analysts to correct for any instrumental effects on the measured ratios, while the measurement of Pb isotope amount ratios in a metal matrix sample provided a real world test of the whole chemical and instrumental procedure. A suitable bronze material with a Pb mass fraction between 10 and 100 mg·kg-1 and a high purity solution of Pb with a mass fraction of approximately 100 mg·kg-1 were available at the pilot laboratory (BAM), both offering a natural-like Pb isotopic composition. The mandatory measurands, the isotope amount ratios n(206Pb)/n(204Pb), n(207Pb)/n(204Pb) and n(208Pb)/n(204Pb), were selected such that they correspond with those commonly reported in Pb isotopic studies and fully describe the isotopic composition of Pb in the sample. Additionally, the isotope amount ratio n(208Pb)/n(206Pb) was added, as this isotope ratio is typically measured when performing Pb quantitation by IDMS involving a 206Pb spike. Each participant was free to use any method they deemed suitable for measuring the individual isotope ratios. However, the majority of the results were obtained by using multi-collector ICPMS or TIMS. The key requirements for all analytical procedures were a traceability statement for all results and the establishment of an uncertainty budget meeting a target uncertainty for all ratios of 0.2%, relative (k=1). Additionally, the use of a Pb-matrix separation procedure was encouraged. The obtained overall result was excellent, demonstrating that the individual results reported by the NMIs/DIs were comparable and compatible for the determination of Pb isotope ratios. MC-ICPMS and MC-TIMS data were consistent with each other and agree to within 0.05%. The corresponding uncertainties can be considered as realistic uncertainties and mainly range from 0.02% to 0.08% (k=1). As stated above, isotope ratios are being increasingly used in different fields. Despite the availability and ease of use of new mass spectrometers, the metrology of unbiased isotope ratio measurements remains very challenging.
Therefore, further comparisons are urgently needed, and should be designed to also engage scientists outside the NMI/DI community. Possible follow-up studies should focus on isotope ratio and delta measurements important for environmental and technical applications (e.g. B), food traceability and forensics (e.g. H, C, N, O, S and 87Sr/86Sr) or climate change issues (e.g. Li, B, Mg, Ca, Si).
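One common way participants could correct for the instrumental effects mentioned above is exponential-law mass-bias correction; it is shown here purely as an illustration (participants were free to choose their methods, and the numbers below are hypothetical).

```python
import math

# Common exponential-law mass-bias correction for MC-ICPMS (one of several
# approaches participants might have used; shown as an illustration only):
# r_true = r_meas * (m_i / m_j)**f, with f calibrated on a certified ratio.

m = {204: 203.9730, 206: 205.9745, 207: 206.9759, 208: 207.9767}  # u

def beta_from_reference(r_meas, r_cert, mi, mj):
    """Solve r_cert = r_meas * (mi/mj)**f for the bias exponent f."""
    return math.log(r_cert / r_meas) / math.log(mi / mj)

def correct(r_meas, mi, mj, f):
    return r_meas * (mi / mj) ** f

# Calibrate f with a certified 208/206 value (hypothetical numbers),
# then correct a measured 206/204 ratio:
f = beta_from_reference(r_meas=2.1624, r_cert=2.1681, mi=m[208], mj=m[206])
print(f, correct(1.0890, m[206], m[204], f))
```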
Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.
Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J
2015-06-15
Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected through both a double-crossed W-transect with samples taken every 10 steps (method 1) and four random located plots of 0.16 m(2) with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444)L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance component = 6.2), rather than due to pasture (variance component = 0.55) or season (variance component = 0.15). Using the observed distribution of L3, the required sample size (i.e. number of plots per pasture) for sampling a pasture through random plots with a particular precision was simulated. A higher relative precision was acquired when estimating PLC on pastures with a high larval contamination and a low level of aggregation compared to pastures with a low larval contamination when the same sample size was applied. In the future, herbage sampling through random plots across pasture (method 2) seems a promising method to develop further as no significant difference in counts between the methods was found and this method was less time consuming. Copyright © 2015 Elsevier B.V. All rights reserved.
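To illustrate the sample-size simulation described above, the sketch below draws aggregated (negative binomial) larval counts per plot and tracks how the relative standard error of the mean shrinks with the number of random plots. The mean and aggregation parameter are hypothetical, not the study's estimates.

```python
import numpy as np

# Sketch of the sample-size idea: simulate aggregated (negative binomial)
# larval counts per plot and find how many random plots are needed before
# the mean's relative standard error drops below a target. The mean and
# aggregation parameter k are hypothetical, not the study's estimates.

rng = np.random.default_rng(7)
mean, k = 300.0, 0.8           # L3/kg DH and aggregation (small k = clumped)

def rel_se(n, reps=4000):
    """Monte Carlo relative standard error of the mean of n plot counts."""
    p = k / (k + mean)
    counts = rng.negative_binomial(k, p, size=(reps, n))
    means = counts.mean(axis=1)
    return means.std() / mean

for n in (4, 8, 16, 32):
    print(n, round(rel_se(n), 3))
# precision improves roughly as 1/sqrt(n); clumped pastures need more plots
```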
Kay, Richard G; Challis, Benjamin G; Casey, Ruth T; Roberts, Geoffrey P; Meek, Claire L; Reimann, Frank; Gribble, Fiona M
2018-06-01
Diagnosis of pancreatic neuroendocrine tumours requires the study of patient plasma with multiple immunoassays, using multiple aliquots of plasma. The application of mass spectrometry based techniques could reduce the cost and amount of plasma required for diagnosis. Plasma samples from two patients with pancreatic neuroendocrine tumours were extracted using an established acetonitrile based plasma peptide enrichment strategy. The circulating peptidome was characterised using nano and high flow rate LC/MS analyses. To assess the diagnostic potential of the analytical approach, a large sample batch (68 plasmas) from control subjects, and aliquots from subjects harbouring two different types of pancreatic neuroendocrine tumour (insulinoma and glucagonoma) were analysed using a 10-minute LC/MS peptide screen. The untargeted plasma peptidomics approach identified peptides derived from the glucagon prohormone, chromogranin A, chromogranin B and other peptide hormones and proteins related to control of peptide secretion. The glucagon prohormone derived peptides that were detected were compared against putative peptides that were identified using multiple antibody pairs against glucagon peptides. Comparison of the plasma samples for relative levels of selected peptides showed clear separation between the glucagonoma and the insulinoma and control samples. The combination of the organic solvent extraction methodology with high flow rate analysis could potentially be used to aid diagnosis and monitor treatment of patients with functioning pancreatic neuroendocrine tumours. However, significant validation will be required before this approach can be clinically applied. This article is protected by copyright. All rights reserved.
Ohno, Hiroyuki; Suzuki, Masako; Kawamura, Yoko
2011-01-01
The amount of evaporation residue was investigated as an index of total amount of non-volatile substances that migrated from plastic kitchen utensils into four food-simulating solvents (water, 4% acetic acid, 20% ethanol and heptane). The samples were 71 products made of 12 types of plastics for food contact use. The amount was determined in accordance with the Japanese testing method. The quantitation limit was 5 µg/mL. In the cases of polyethylene, polypropylene, polystyrene, acrylonitrile styrene resin, acrylonitrile butadiene styrene resin, polyvinyl chloride, polyvinylidene chloride, polymethylpentene, polymethylmethacrylate and polyethylene terephthalate samples, the amount was highest for heptane and very low for the other solvents. On the other hand, in the cases of melamine resin and polyamide samples, the amount was highest for 4% acetic acid or 20% ethanol and lowest for heptane. These results enabled the selection of the most suitable solvent, and the rapid and efficient determination of evaporation residue.
Leventhal, J.S.; Hosterman, J.W.
1982-01-01
Core samples of Devonian shales from five localities in the Appalachian basin have been analyzed chemically and mineralogically. The amounts of major elements are similar; however, the minor constituents, organic C, S, phosphate and carbonate show ten-fold variations in amounts. Trace elements Mo, Ni, Cu, V, Co, U, Zn, Hg, As and Mn show variations in amounts that can be related to the minor constituents. All samples contain major amounts of quartz, illite, two types of mixed-layer clays, and chlorite in differing quantities. Pyrite, calcite, feldspar and kaolinite are also present in many samples in minor amounts. Dolomite, apatite, gypsum, barite, biotite and marcasite are present in a few samples in trace amounts. Trace elements listed above are strongly controlled by organic C with the exception of Mn which is associated with carbonate minerals. Amounts of organic C generally range from 3 to 6%, and S is in the range of 2-5%. Amounts of trace elements show the following general ranges in ppm (parts per million): Co, 20-40; Cu, 40-70; U, 10-40; As, 20-40; V, 150-300; Ni, 80-150; high values are as much as twice these values. The organic C was probably the concentrating agent, and the organic C and sulfide S together created an environment that immobilized and preserved these trace elements. Closely spaced samples showing an abrupt transition in color also show changes in organic C, S and trace-element contents. Several associations exist between mineral and chemical content. Pyrite and marcasite are the only minerals found to contain sulfide-S. In general, the illite-chlorite mixed-layer clay mineral shows covariation with organic C if calcite is not present. The enriched trace elements are not related to the clay types, although the clay and organic matter are intimately associated as the bulk fabric of the rock. ?? 1982.
NASA Astrophysics Data System (ADS)
Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen
2017-03-01
Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system as new lots have to wait until the previous lot is measured. One solution is using a less dense overlay sampling scheme and employing computationally up-sampled data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
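A toy one-dimensional sketch of the hybrid idea: fit a smooth global model to sparse overlay measurements, evaluate it on the dense grid, then blend the measured residuals back in near each measurement site so local overlay errors survive the up-sampling. The model form, kernel, and numbers are assumptions.

```python
import numpy as np

# Toy sketch of the hybrid up-sampling idea: a smooth global model is fit
# to sparse overlay measurements and evaluated on a dense grid; measured
# residuals are then blended back in near each measurement site so local
# errors survive. Model form and blending radius are assumptions.

rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, 40)                    # sparse sample positions
ys = 0.3 * xs + 0.2 * xs**2                    # "true" global fingerprint
ys[10] += 0.5                                  # one local overlay excursion

A = np.column_stack([np.ones_like(xs), xs, xs**2])
coef, *_ = np.linalg.lstsq(A, ys, rcond=None)  # global polynomial fit

xd = np.linspace(-1, 1, 400)                   # dense per-exposure grid
global_fit = np.polyval(coef[::-1], xd)

resid = ys - A @ coef                          # measured local residuals
weights = np.exp(-((xd[:, None] - xs[None, :]) / 0.05) ** 2)
hybrid = global_fit + (weights * resid).sum(1) / np.maximum(
    weights.sum(1), 1e-9)

print(abs(hybrid - global_fit).max())          # local error reinstated
```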
10 CFR 140.12 - Amount of financial protection required for other reactors.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 2 2011-01-01 2011-01-01 false Amount of financial protection required for other reactors... reactors. (a) Each licensee is required to have and maintain financial protection for each nuclear reactor... of financial protection required for any nuclear reactor under this section be less than $4,500,000...
10 CFR 140.12 - Amount of financial protection required for other reactors.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Amount of financial protection required for other reactors... reactors. (a) Each licensee is required to have and maintain financial protection for each nuclear reactor... of financial protection required for any nuclear reactor under this section be less than $4,500,000...
Clemons, Kristina; Dake, Jeffrey; Sisco, Edward; Verbeck, Guido F
2013-09-10
Direct analysis in real time mass spectrometry (DART-MS) has proven to be a useful forensic tool for the trace analysis of energetic materials. While other techniques for detecting trace amounts of explosives involve extraction, derivatization, solvent exchange, or sample clean-up, DART-MS requires none of these. Typical DART-MS analyses directly from a solid sample or from a swab have been quite successful; however, these methods may not always be an optimal sampling technique in a forensic setting. For example, if the sample were only located in an area which included a latent fingerprint of interest, direct DART-MS analysis or the use of a swab would almost certainly destroy the print. To avoid ruining such potentially invaluable evidence, another method has been developed which will leave the fingerprint virtually untouched. Direct analyte-probed nanoextraction coupled to nanospray ionization-mass spectrometry (DAPNe-NSI-MS) has demonstrated excellent sensitivity and repeatability in forensic analyses of trace amounts of illicit drugs from various types of surfaces. This technique employs a nanomanipulator in conjunction with bright-field microscopy to extract single particles from a surface of interest and has provided a limit of detection of 300 attograms for caffeine. Combining DAPNe with DART-MS provides another level of flexibility in forensic analysis, and has proven to be a sufficient detection method for trinitrotoluene (TNT), RDX, and 1-methylaminoanthraquinone (MAAQ). Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Schluender, Irene; Smee, Carol; Suhr, Stephanie
2015-01-01
Availability of and access to data and biosamples are essential in medical and translational research, where their reuse and repurposing by the wider research community can maximize their value and accelerate discovery. However, sharing human-related data or samples is complicated by ethical, legal, and social sensitivities. The specific ethical and legal requirements linked to sensitive data are often unfamiliar to life science researchers who, faced with vast amounts of complex, fragmented, and sometimes even contradictory information, may not feel competent to navigate through it. In this case, the impulse may be not to share the data in order to safeguard against unintentional misuse. Consequently, helping data providers to identify relevant ethical and legal requirements and how they might address them is an essential and frequently neglected step in removing possible hurdles to data and sample sharing in the life sciences. Here, we describe the complex regulatory context and discuss relevant online tools—one of which the authors co-developed—targeted at assisting providers of sensitive data or biosamples with ethical and legal questions. The main results are (1) that the different approaches of the tools assume different user needs and prior knowledge of ethical and legal requirements, affecting how a service is designed and its usefulness, (2) that there is much potential for collaboration between tool providers, and (3) that enriched annotations of services (e.g., update status, completeness of information, and disclaimers) would increase their value and facilitate quick assessment by users. Further, there is still work to do with respect to providing researchers using sensitive data or samples with truly 'useful' tools that do not require pre-existing, in-depth knowledge of legal and ethical requirements or time to delve into the details. Ultimately, separate resources, maintained by experts familiar with the respective fields of research, may be needed while—in the longer term—harmonization and an increase in ease of use will be very desirable. PMID:26186169
NASA Astrophysics Data System (ADS)
He, Xiufen; Chen, Lixia; Chen, Xin; Yu, Huamei; Peng, Lixu; Han, Bingjun
2016-12-01
Toxic metals in rice pose great risks to human health. Metal bioaccumulation in rice grains is a criterion in breeding. Rice breeding requires a sensitive method to determine metal content in single rice grains to assist variety selection. In the present study, four toxic metals, arsenic (As), cadmium (Cd), chromium (Cr) and lead (Pb), in a single rice grain were determined by a simple and rapid method. The developed method is based on matrix solid phase dispersion using multi-wall carbon nanotubes (MWCNTs) as the dispersing agent, with analysis by inductively coupled plasma mass spectrometry. The experimental parameters were systematically investigated. The limits of detection (LOD) were 5.0, 0.6, 10 and 2.1 ng g-1 for As, Cd, Cr, and Pb, respectively, with relative standard deviations (n = 6) of <7.7%, demonstrating the good sensitivity and precision of the method. The results of 30 real-world rice samples analyzed by this method agreed well with those obtained by standard microwave digestion. The amount of sample required was reduced approximately 100-fold in comparison with microwave digestion. The method has a high application potential for other sample matrices and elements with high sensitivity and sample throughput.
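LODs such as those quoted above are conventionally derived from a calibration line; a minimal sketch of the common 3σ/slope convention (not necessarily the authors' exact formula), with invented numbers:

```python
# LOD = 3 * (sd of blank signal) / (slope of calibration line); data are made up.
import numpy as np

conc = np.array([0, 10, 25, 50, 100], dtype=float)       # ng g-1 standards
signal = np.array([120, 980, 2400, 4800, 9700], float)    # counts (hypothetical)
slope, intercept = np.polyfit(conc, signal, 1)

blank_sd = 18.0                    # sd of repeated blank measurements (assumed)
lod = 3 * blank_sd / slope
print(f"LOD ~ {lod:.2f} ng g-1")
```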
Microarray-based comparison of three amplification methods for nanogram amounts of total RNA
Singh, Ruchira; Maganti, Rajanikanth J.; Jabba, Sairam V.; Wang, Martin; Deng, Glenn; Heath, Joe Don; Kurn, Nurith; Wangemann, Philine
2007-01-01
Gene expression profiling using microarrays requires microgram amounts of RNA, which limits its direct application for the study of nanogram RNA samples obtained using microdissection, laser capture microscopy, or needle biopsy. A novel system based on Ribo-SPIA technology (RS, Ovation-Biotin amplification and labeling system) was recently introduced. The utility of the RS system, an optimized prototype system for picogram RNA samples (pRS), and two T7-based systems involving one or two rounds of amplification (OneRA, Standard Protocol, or TwoRA, Small Sample Protocol, version II) were evaluated in the present study. Mouse kidney (MK) and mouse universal reference (MUR) RNA samples, 0.3 ng to 10 μg, were analyzed using high-density Affymetrix Mouse Genome 430 2.0 GeneChip arrays. Call concordance between replicates, correlations of signal intensity, signal intensity ratios, and minimal fold increase necessary for significance were determined. All systems amplified partially overlapping sets of genes with similar signal intensity correlations. pRS amplified the highest number of genes from 10-ng RNA samples. We detected 24 of 26 genes verified by RT-PCR in samples prepared using pRS. TwoRA yielded somewhat higher call concordances than did RS and pRS (91.8% vs. 89.3% and 88.1%, respectively). Although all target preparation methods were suitable, pRS amplified the highest number of targets and was found to be suitable for amplification of as little as 0.3 ng of total RNA. In addition, RS and pRS were faster and simpler to use than the T7-based methods and resulted in the generation of cDNA, which is more stable than cRNA. PMID:15613496
Machine for Automatic Bacteriological Pour Plate Preparation
Sharpe, A. N.; Biggs, D. R.; Oliver, R. J.
1972-01-01
A fully automatic system for preparing poured plates for bacteriological analyses has been constructed and tested. The machine can make decimal dilutions of bacterial suspensions, dispense measured amounts into petri dishes, add molten agar, mix the dish contents, and label the dishes with sample and dilution numbers at the rate of 2,000 dishes per 8-hr day. In addition, the machine can be programmed to select different media so that plates for different types of bacteriological analysis may be made automatically from the same sample. The machine uses only the components of the media and sterile polystyrene petri dishes; requirements for all other materials, such as sterile pipettes and capped bottles of diluents and agar, are eliminated. PMID:4560475
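For context, the decimal-dilution arithmetic such a machine automates is straightforward; a worked example with hypothetical counts (the colony count, dilution, and volume are invented):

```python
# Back-calculate the viable count of the original suspension from one dish:
# CFU/mL = colonies / (dilution factor * volume plated).
colonies = 47          # colonies counted on one dish (hypothetical)
dilution = 1e-5        # fifth decimal dilution
volume_ml = 1.0        # volume plated per dish, mL
cfu_per_ml = colonies / (dilution * volume_ml)
print(f"{cfu_per_ml:.1e} CFU/mL")   # 4.7e+06 CFU/mL
```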
Samejima, Keijiro; Otani, Masahiro; Murakami, Yasuko; Oka, Takami; Kasai, Misao; Tsumoto, Hiroki; Kohda, Kohfuku
2007-10-01
A sensitive method for the determination of polyamines in mammalian cells is described, using electrospray ionization and a time-of-flight mass spectrometer. This method was 50-fold more sensitive than the previous method using ionspray ionization and a quadrupole mass spectrometer. The method employed partial purification and derivatization of polyamines, but allowed measurement of multiple samples containing picomole amounts of polyamines. The time required for data acquisition of one sample was approximately 2 min. The method was successfully applied to the determination of reduced spermidine and spermine contents in cultured cells under the inhibition of aminopropyltransferases. In addition, a new, suitable internal standard was proposed for tracer experiments using (15)N-labeled polyamines.
NASA Astrophysics Data System (ADS)
Tauanov, Z.; Abylgazina, L.; Spitas, C.; Itskos, G.; Inglezakis, V.
2017-09-01
Coal fly ash (CFA) is a waste by-product of coal combustion. Kazakhstan has vast coal deposits and is a major consumer of coal, and hence produces huge amounts of CFA annually. The government aims to recycle and effectively utilize this waste by-product. Thus, a detailed study of the physical and chemical properties of the material is required, as the data available in the literature are either outdated or not applicable to recently produced CFA samples. The full mineralogical, microstructural and thermal characterization of three types of coal fly ash (CFA) produced in two large Kazakhstani power plants is reported in this work. The properties of the CFAs were compared between samples as well as with published values.
Holopainen, R; Tuomainen, M; Asikainen, V; Pasanen, P; Säteri, J; Seppänen, O
2002-09-01
The aim of this study was to evaluate the amount of dust in supply air ducts in recently installed ventilation systems. The samples for the determination of dust accumulation were collected from supply air ducts in 18 new buildings that have been constructed according to two different cleanliness control levels classified as category P1 (low oil residues and protected against contaminations) and category P2, as defined in the Classification of Indoor Climate, Construction and Building Materials. In the ducts installed according to the requirements of cleanliness category P1 the mean amount of accumulated dust was 0.9 g/m2 (0.4-2.9 g/m2), and in the ducts installed according to the cleanliness category P2 it was 2.3 g/m2 (1.2-4.9 g/m2). A significant difference was found in the mean amounts of dust between ducts of categories P1 and P2 (P < 0.008). The cleanliness control procedure in category P1 proved to be a useful and effective tool for preventing dust accumulation in new air ducts during the construction process. Additionally, the ducts without residual oil had lower amounts of accumulated dust indicating that the demand for oil free components in the cleanliness classification is reasonable.
Dynamic measurements of CO diffusing capacity using discrete samples of alveolar gas.
Graham, B L; Mink, J T; Cotton, D J
1983-01-01
It has been shown that measurements of the diffusing capacity of the lung for CO made during a slow exhalation [DLCO(exhaled)] yield information about the distribution of the diffusing capacity in the lung that is not available from the commonly measured single-breath diffusing capacity [DLCO(SB)]. Current techniques of measuring DLCO(exhaled) require the use of a rapid-responding (less than 240 ms, 10-90%) CO meter to measure the CO concentration in the exhaled gas continuously during exhalation. DLCO(exhaled) is then calculated using two sample points in the CO signal. Because DLCO(exhaled) calculations are highly affected by small amounts of noise in the CO signal, filtering techniques have been used to reduce noise. However, these techniques reduce the response time of the system and may introduce other errors into the signal. We have developed an alternate technique in which DLCO(exhaled) can be calculated using the concentration of CO in large discrete samples of the exhaled gas, thus eliminating the requirement of a rapid response time in the CO analyzer. We show theoretically that this method is as accurate as other DLCO(exhaled) methods but is less affected by noise. These findings are verified in comparisons of the discrete-sample method of calculating DLCO(exhaled) to point-sample methods in normal subjects, patients with emphysema, and patients with asthma.
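To illustrate the underlying calculation, here is a simplified Krogh-type single-breath relation showing how the CO concentrations of two discrete alveolar samples yield a diffusing capacity. This is a textbook simplification with invented values, not the paper's discrete-sample algorithm.

```python
# Simplified Krogh-type DLCO from two discrete alveolar gas samples.
import math

VA = 5000.0        # alveolar volume, mL STPD (assumed)
Pb = 760.0         # barometric pressure, mmHg
t = 8.0            # time separating the two discrete samples, s
FA_co_1 = 0.0020   # CO fraction in the earlier sample (hypothetical)
FA_co_2 = 0.0014   # CO fraction in the later sample (hypothetical)

dlco = VA * 60.0 / ((Pb - 47.0) * t) * math.log(FA_co_1 / FA_co_2)
print(f"DLCO ~ {dlco:.1f} mL/min/mmHg")   # ~18.8 with these values
```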
Carey, Roger Neill; Jani, Chinu; Johnson, Curtis; Pearce, Jim; Hui-Ng, Patricia; Lacson, Eduardo
2016-09-07
Plasma samples collected in tubes containing separator gels have replaced serum samples for most chemistry tests in many hospital and commercial laboratories. Use of plasma samples for blood tests in the dialysis population eliminates delays in sample processing while waiting for clotting to complete, laboratory technical issues associated with fibrin formation, repeat sample collection, and patient care issues caused by delay of results because of incompletely clotted specimens. Additionally, a larger volume of plasma is produced than serum for the same amount of blood collected. Plasma samples are also acceptable for most chemical tests involved in the care of patients with ESRD. This information becomes very important when United States regulatory requirements for ESRD inadvertently limit the type of sample that can be used for government reporting, quality assessment, and value-based payment initiatives. In this narrative, we summarize the renal community experience and how the subsequent resolution of the acceptability of phosphorus levels measured from serum and plasma samples may have significant implications in the country's continued development of a value-based Medicare ESRD Quality Incentive Program. Copyright © 2016 by the American Society of Nephrology.
Hinkley, T.K.; Seeley, J.L.; Tatsumoto, M.
1988-01-01
Three distinct types of solid material are associated with each sample of the hydrothermal fluid that was collected from the vents of the Southern Juan de Fuca Ridge. The solid materials appear to be representative of deposits on ocean floors near mid-ocean ridges, and interpretation of the chemistry of the hydrothermal solutions requires understanding of them. Sr isotopic evidence indicates that at least two and probably all three of these solid materials were removed from the solution with which they are associated, by precipitation or adsorption. This occurred after the "pure" hydrothermal fluid was diluted and thoroughly mixed with ambient seawater. The three types of solid materials are, respectively, a coarse Zn- and Fe-rich material with small amounts of Na and Ca; a finer material also rich in Zn and Fe, but with alkali and alkaline-earth metals; and a scum composed of Ba or Zn, with either considerable Fe or Si, and Sr. Mineral identification is uncertain because of uncertain anion composition. Only in the cases of Ba and Zn were metal masses greater in solid materials than in the associated fluids. For all other metals measured, masses in fluids dwarf those in solids. The fluids themselves contain greater concentrations of all metals measured, except Mg, than seawater. We discuss in detail the relative merits of two methods of determining the mixing proportions of "pure" hydrothermal solution and seawater in the fluids, one based on Sr isotopes, and another previously used method based on Mg concentrations. Comparison of solute concentrations in the several samples shows that the degree of dilution of "pure" hydrothermal solutions by seawater, and the amounts of original solutes that were removed as solid materials, are not related. There is no clear evidence that appreciable amounts of solid materials were lost either during or prior to sample collection. © 1988.
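The Mg-based mixing calculation mentioned above is simple two-endmember arithmetic; a sketch with an approximate literature value for seawater Mg and a hypothetical sample value (end-member hydrothermal fluids are commonly taken to be Mg-free):

```python
# Seawater fraction from linear two-endmember mixing on Mg concentration.
mg_sw = 52.8       # seawater Mg, mmol/kg (approximate literature value)
mg_hydro = 0.0     # "pure" hydrothermal end-member assumed Mg-free
mg_sample = 40.0   # measured Mg in a vent sample (hypothetical)

f_seawater = (mg_sample - mg_hydro) / (mg_sw - mg_hydro)
print(f"seawater fraction ~ {f_seawater:.2f}")   # ~0.76
```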
Deng, Ting; Wu, Dapeng; Duan, Chunfeng; Guan, Yafeng
2016-07-22
Determination of endogenous brassinosteroids (BRs) in limited sample amounts is vital to elucidating their tissue- and even local tissue-specific signaling pathway and physiological effects on plant growth and development. In this work, an ultra-sensitive quantification method was established for endogenous BRs in milligram amounts of fresh plant by using pipette-tip solid-phase extraction coupled with ultra-performance liquid chromatography tandem mass spectrometry (PT-SPE-UPLC-MS/MS), in which a quaternary ammonium phenyl boronic acid, 4-borono-N,N,N-trimethylbenzenaminium iodide (BTBA), was first developed for chemical derivatization of BRs. Due to the cationic quaternary ammonium group of BTBA, the ionization efficiencies of the BR chelates with BTBA (BTBA-BRs) were enhanced by 1190-448785 times, the highest response enhancement factor among all derivatization reagents reported for BRs. In addition, PT-SPE packed with C18 sorbent was first used for purifying BRs from plant extracts, so the required sample amount was minimized, and recoveries higher than 91% were achieved. Under the optimized conditions, the minimal detectable amounts (MDA) of five target BRs were in the range of 27-94 amol, and the correlation coefficients (R(2)) were >0.9985 over four orders of magnitude. Relative recoveries of 75.8-104.9% were obtained, with intra- and inter-day relative standard deviations (RSDs) less than 18.7% and 19.6%, respectively. Finally, three BRs were successfully quantified in only 5 mg of fresh rice plant samples, and 24-epiBL could even be detected in only 0.5 mg FW rice leaf segments. This is the first time that the BR content of a sub-milligram fresh plant sample has been quantified. Copyright © 2016 Elsevier B.V. All rights reserved.
Wu, Yiman; Li, Liang
2012-12-18
For mass spectrometry (MS)-based metabolomics, it is important to use the same amount of starting materials from each sample to compare the metabolome changes in two or more comparative samples. Unfortunately, for biological samples, the total amount or concentration of metabolites is difficult to determine. In this work, we report a general approach of determining the total concentration of metabolites based on the use of chemical labeling to attach a UV absorbent to the metabolites to be analyzed, followed by rapid step-gradient liquid chromatography (LC) UV detection of the labeled metabolites. It is shown that quantification of the total labeled analytes in a biological sample facilitates the preparation of an appropriate amount of starting materials for MS analysis as well as the optimization of the sample loading amount to a mass spectrometer for achieving optimal detectability. As an example, dansylation chemistry was used to label the amine- and phenol-containing metabolites in human urine samples. LC-UV quantification of the labeled metabolites could be optimally performed at the detection wavelength of 338 nm. A calibration curve established from the analysis of a mixture of 17 labeled amino acid standards was found to have the same slope as that from the analysis of the labeled urinary metabolites, suggesting that the labeled amino acid standard calibration curve could be used to determine the total concentration of the labeled urinary metabolites. A workflow incorporating this LC-UV metabolite quantification strategy was then developed in which all individual urine samples were first labeled with (12)C-dansylation and the concentration of each sample was determined by LC-UV. The volumes of urine samples taken for producing the pooled urine standard were adjusted to ensure an equal amount of labeled urine metabolites from each sample was used for the pooling. The pooled urine standard was then labeled with (13)C-dansylation. Equal amounts of the (12)C-labeled individual sample and the (13)C-labeled pooled urine standard were mixed for LC-MS analysis. This way of concentration normalization among different samples with varying concentrations of total metabolites was found to be critical for generating reliable metabolome profiles for comparison.
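The concentration-normalization step described above reduces to taking a volume of each sample inversely proportional to its LC-UV-determined total labeled-metabolite concentration; a minimal sketch with invented sample names and numbers:

```python
# Equalize the amount of labeled metabolites each sample contributes to the pool.
concentrations = {"s1": 2.4, "s2": 1.1, "s3": 3.6}   # mM, hypothetical LC-UV results
target_amount = 10.0                                  # nmol per sample in the pool

volumes_ul = {k: target_amount / c for k, c in concentrations.items()}
print(volumes_ul)   # the most dilute sample (s2) contributes the largest volume
```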
The MPLEx Protocol for Multi-omic Analyses of Soil Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicora, Carrie D.; Burnum-Johnson, Kristin E.; Nakayasu, Ernesto S.
Mass spectrometry (MS)-based integrated metaproteomic, metabolomic and lipidomic (multi-omic) studies are transforming our ability to understand and characterize microbial communities in environmental and biological systems. These measurements are even enabling enhanced analyses of complex soil microbial communities, which are the most complex microbial systems known to date. Multi-omic analyses, however, do have sample preparation challenges since separate extractions are typically needed for each omic study, thereby greatly amplifying the preparation time and amount of sample required. To address this limitation, a 3-in-1 method for simultaneous metabolite, protein, and lipid extraction (MPLEx) from the exact same soil sample was created by adapting a solvent-based approach. This MPLEx protocol has proven to be simple yet robust for many sample types, even when utilized for limited quantities of complex soil samples. The MPLEx method also greatly enabled the rapid multi-omic measurements needed to gain a better understanding of the members of each microbial community, while evaluating the changes taking place upon biological and environmental perturbations.
Calcium kinetics with microgram stable isotope doses and saliva sampling
NASA Technical Reports Server (NTRS)
Smith, S. M.; Wastney, M. E.; Nyquist, L. E.; Shih, C. Y.; Wiesmann, H.; Nillen, J. L.; Lane, H. W.
1996-01-01
Studies of calcium kinetics require administration of tracer doses of calcium and subsequent repeated sampling of biological fluids. This study was designed to develop techniques that would allow estimation of calcium kinetics by using small (microgram) doses of isotopes instead of the more common large (mg) doses, to minimize tracer perturbation of the system and reduce cost, and to explore the use of saliva sampling as an alternative to blood sampling. Subjects received an oral dose (133 micrograms) of 43Ca and an i.v. dose (7.7 micrograms) of 46Ca. Isotopic enrichment in blood, urine, saliva and feces was well above thermal ionization mass spectrometry measurement precision up to 170 h after dosing. Fractional calcium absorptions determined from isotopic ratios in blood, urine and saliva were similar. Compartmental modeling revealed that kinetic parameters determined from serum or saliva data were similar, decreasing the necessity for blood samples. It is concluded from these results that calcium kinetics can be assessed with microgram doses of stable isotopes, thereby reducing tracer costs, and with saliva samples, thereby reducing the amount of blood needed.
Recent Advances in Water Analysis with Gas Chromatograph Mass Spectrometers
NASA Technical Reports Server (NTRS)
MacAskill, John A.; Tsikata, Edem
2014-01-01
We report on progress made in developing a water sampling system for detection and analysis of volatile organic compounds in water with a gas chromatograph mass spectrometer (GCMS). Two approaches are described herein. The first approach uses a custom water pre-concentrator for performing trap and purge of VOCs from water. The second approach uses a custom micro-volume, split-splitless injector that is compatible with air and water. These water sampling systems will enable a single GC-based instrument to analyze air and water samples for VOC content. As reduced mass, volume, and power are crucial for long-duration manned space exploration, these water sampling systems will demonstrate the ability of a GCMS to monitor both air and water quality of the astronaut environment, thereby reducing the amount of instrumentation required for long-duration habitation. Laboratory prototypes of these water sampling systems have been constructed and tested with a quadrupole ion trap mass spectrometer as well as a thermal conductivity detector. Presented herein are details of these water sampling systems with preliminary test results.
DEVELOPMENT OF AN INSOLUBLE SALT SIMULANT TO SUPPORT ENHANCED CHEMICAL CLEANING TESTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eibling, R
The closure process for high level waste tanks at the Savannah River Site will require dissolution of the crystallized salts that are currently stored in many of the tanks. The insoluble residue from salt dissolution is planned to be removed by an Enhanced Chemical Cleaning (ECC) process. Development of a chemical cleaning process requires an insoluble salt simulant to support evaluation tests of different cleaning methods. The Process Science and Engineering section of SRNL has been asked to develop an insoluble salt simulant for use in testing potential ECC processes (HLE-TTR-2007-017). An insoluble salt simulant has been developed based upon the residues from salt dissolution of saltcake core samples from Tank 28F. The simulant was developed for use in testing SRS waste tank chemical cleaning methods. Based on the results of the simulant development process, the following observations were developed: (1) A composition based on the presence of 10.35 grams oxalate and 4.68 grams carbonate per 100 grams solids produces a sufficiently insoluble solids simulant. (2) Aluminum observed in the solids remaining from actual waste salt dissolution tests is probably precipitated from sodium aluminate due to the low hydroxide content of the saltcake. (3) In-situ generation of aluminum hydroxide (by use of aluminate as the Al source) appears to trap additional salts in the simulant in a manner similar to that expected for actual waste samples. (4) Alternative compositions are possible with higher oxalate levels and lower carbonate levels. (5) The maximum oxalate level is limited by the required Na content of the insoluble solids. (6) Periodic mixing may help to limit crystal growth in this type of salt simulant. (7) Long term storage of an insoluble salt simulant is likely to produce a material that cannot be easily removed from the storage container. Production of a relatively fresh simulant is best if pumping the simulant is necessary for testing purposes. The insoluble salt simulant described in this report represents the initial attempt to represent the material which may be encountered during final waste removal and tank cleaning. The final selected simulant was produced by heating and evaporation of a salt slurry sample to remove excess water and promote formation and precipitation of solids with solubility characteristics which are consistent with actual tank insoluble salt samples. The exact anion composition of the final product solids is not explicitly known since the chemical components in the final product are distributed between the solid and liquid phases. By combining the liquid phase analyses and total solids analysis with mass balance requirements, a calculated composition of assumed simple compounds was obtained and is shown in Table 0-1. Additional improvements to and further characterization of the insoluble salt simulant are possible. During the development of these simulants it was recognized that: (1) Additional waste characterization on the residues from salt dissolution tests with actual waste samples to determine the amount of species such as carbonate, oxalate and aluminosilicate would allow fewer assumptions to be made in constructing an insoluble salt simulant. (2) The tank history will impact the amount and type of insoluble solids that exist in the salt dissolution solids. Varying the method of simulant production (elevated temperature processing time, degree of evaporation, amount of mixing (shear) during preparation, etc.) should be tested.
Stark, Peter C.; Kuske, Cheryl R.; Mullen, Kenneth I.
2002-01-01
A method for quantitating dsDNA in an aqueous sample solution containing an unknown amount of dsDNA. A first aqueous test solution containing a known amount of a fluorescent dye-dsDNA complex and at least one fluorescence-attenuating contaminant is prepared. The fluorescence intensity of the test solution is measured. The first test solution is diluted by a known amount to provide a second test solution having a known concentration of dsDNA. The fluorescence intensity of the second test solution is measured. Additional diluted test solutions are similarly prepared until a sufficiently dilute test solution having a known amount of dsDNA is prepared that has a fluorescence intensity that is not attenuated upon further dilution. The value of the maximum absorbance of this solution between 200-900 nanometers (nm), referred to herein as the threshold absorbance, is measured. A sample solution having an unknown amount of dsDNA and an absorbance identical to that of the sufficiently dilute test solution at the same chosen wavelength is prepared. Dye is then added to the sample solution to form the fluorescent dye-dsDNA complex, after which the fluorescence intensity of the sample solution is measured and the quantity of dsDNA in the sample solution is determined. Once the threshold absorbance of a sample solution obtained from a particular environment has been determined, any similarly prepared sample solution taken from a similar environment and having the same value for the threshold absorbance can be quantified for dsDNA by adding a large excess of dye to the sample solution and measuring its fluorescence intensity.
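The dilution logic above can be sketched as a loop: keep diluting until the fluorescence per unit dsDNA stops changing, i.e. the contaminant no longer attenuates the signal. The toy response model, quenching values, and stopping tolerance below are all assumptions for illustration.

```python
# Toy model of "dilute until fluorescence is no longer attenuated".
def fluorescence(dna_ng_per_ml, quench):
    return 100.0 * dna_ng_per_ml * (1.0 - quench)    # invented response model

dna, quench = 50.0, 0.40        # starting dsDNA and contaminant quenching
prev_specific = None
while True:
    specific = fluorescence(dna, quench) / dna       # signal per unit dsDNA
    if prev_specific is not None and abs(specific - prev_specific) < 0.5:
        break                                        # attenuation has vanished
    prev_specific = specific
    dna /= 10.0                                      # next decimal dilution
    quench /= 10.0                                   # contaminant dilutes too
print(f"sufficiently dilute at {dna:.2f} ng/mL dsDNA")
```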
Financing the Air Transportation Industry
NASA Technical Reports Server (NTRS)
Lloyd-Jones, D. J.
1972-01-01
The basic characteristics of the air transportation industry are outlined and it is shown how they affect financing requirements and patterns of production. The choice of financial timing is imperative in order to get the best interest rates available and to ensure a fair return to investors. The fact that the industry cannot store its products has a fairly major effect on the amount of equipment to purchase, the amount of capital investment required, and the amount of return required to offset industry depreciation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Negrete, Oscar A.; Branda, Catherine; Hardesty, Jasper O. E.
In the response to and recovery from a critical homeland security event involving deliberate or accidental release of biological agents, initial decontamination efforts are necessarily followed by tests for the presence of residual live virus or bacteria. Such 'clearance sampling' should be rapid and accurate, to inform decision makers as they take appropriate action to ensure the safety of the public and of operational personnel. However, the current protocol for clearance sampling is extremely time-intensive and costly, and requires significant amounts of laboratory space and capacity. Detection of residual live virus is particularly problematic and time-consuming, as it requires evaluation of replication potential within a eukaryotic host such as chicken embryos. The intention of this project was to develop a new method for clearance sampling, by leveraging Sandia's expertise in the biological and material sciences in order to create a C. elegans-based foam that could be applied directly to the entire contaminated area for quick and accurate detection of any and all residual live virus by means of a fluorescent signal. Such a novel technology for rapid, on-site detection of live virus would greatly interest the DHS, DoD, and EPA, and hold broad commercial potential, especially with regard to the transportation industry.
26 CFR 1.665(c)-1 - Accumulation distributions of certain foreign trusts; in general.
Code of Federal Regulations, 2010 CFR
2010-04-01
... below zero) by the amount of income required to be distributed currently. (In computing the amount of an... distributable net income reduced (but not below zero) by the amount required to be distributed currently. This... unless there is undistributed net income in at least one of the preceding taxable years which began after...
NASA Astrophysics Data System (ADS)
Hu, Li; Zhao, Nanjing; Liu, Wenqing; Meng, Deshuo; Fang, Li; Wang, Yin; Yu, Yang; Ma, Mingjun
2015-08-01
Heavy metals in water can be deposited on graphite flakes, which can serve as an enrichment method for laser-induced breakdown spectroscopy (LIBS); this approach is studied in this paper. The graphite samples were prepared with an automatic device composed of a loading and unloading module, a quantitative solution-dispensing module, a rapid heating and drying module and a precise rotating module. The experimental results showed that the sample preparation method had no significant effect on sample distribution and that the LIBS signal accumulated over 20 pulses was stable and repeatable. With an increasing amount of sample solution on the graphite flake, the peak intensity at Cu I 324.75 nm followed an exponential function with a correlation coefficient of 0.9963, while the background intensity remained unchanged. The limit of detection (LOD) was calculated through linear fitting of the peak intensity versus the concentration. The LOD decreased rapidly with an increasing amount of sample solution until the amount exceeded 20 mL, and the correlation coefficient of the exponential fit was 0.991. The LODs of Pb, Ni, Cd, Cr and Zn after evaporating different amounts of sample solution on the graphite flakes were measured, and the variation of their LODs with sample solution amount was similar to that for Cu. The experimental data and conclusions could provide a reference for automatic sample preparation and in situ detection of heavy metals. Supported by the National Natural Science Foundation of China (No. 60908018), the National High Technology Research and Development Program of China (No. 2013AA065502) and the Anhui Province Outstanding Youth Science Fund of China (No. 1108085J19).
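A hedged sketch of the intensity-versus-amount fit described above, assuming a saturating exponential form (the exact functional form used by the authors is not given here, and the data points are invented):

```python
# Fit peak intensity vs. deposited solution amount with a saturating exponential.
import numpy as np
from scipy.optimize import curve_fit

volume_ml = np.array([2, 5, 10, 15, 20, 25, 30], dtype=float)
intensity = np.array([310, 700, 1150, 1400, 1550, 1600, 1620], dtype=float)

def model(v, a, b):
    return a * (1.0 - np.exp(-b * v))     # assumed functional form

(a, b), _ = curve_fit(model, volume_ml, intensity, p0=(1600, 0.1))
print(f"I(V) = {a:.0f} * (1 - exp(-{b:.3f} V))")
```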
Macdiarmid, Jennie I; Kyle, Janet; Horgan, Graham W; Loe, Jennifer; Fyfe, Claire; Johnstone, Alexandra; McNeill, Geraldine
2012-09-01
Food systems account for 18-20% of UK annual greenhouse gas emissions (GHGEs). Recommendations for improving food choices to reduce GHGEs must be balanced against dietary requirements for health. We assessed whether a reduction in GHGEs can be achieved while meeting dietary requirements for health. A database was created that linked nutrient composition and GHGE data for 82 food groups. Linear programming was used iteratively to produce a diet that met the dietary requirements of an adult woman (19-50 y old) while minimizing GHGEs. Acceptability constraints were added to the model to include foods commonly consumed in the United Kingdom in sensible quantities. A sample menu was created to ensure that the quantities and types of food generated from the model could be combined into a realistic 7-d diet. Reductions in GHGEs of the diets were set against 1990 emission values. The first model, without any acceptability constraints, produced a 90% reduction in GHGEs but included only 7 food items, all in unrealistic quantities. The addition of acceptability constraints gave a more realistic diet with 52 foods but reduced GHGEs by a lesser amount of 36%. This diet included meat products but in smaller amounts than in the current diet. The retail cost of the diet was comparable to the average UK expenditure on food. A sustainable diet that meets dietary requirements for health with lower GHGEs can be achieved without eliminating meat or dairy products or increasing the cost to the consumer.
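A toy version of the linear program described above: minimize total GHGEs subject to minimum nutrient requirements. The foods, nutrient coefficients, and requirements are invented stand-ins; the study used 82 food groups and the full set of dietary targets.

```python
# Minimize GHGE subject to protein and energy minima (toy diet LP).
from scipy.optimize import linprog

ghge = [2.5, 0.8, 1.2]            # kg CO2e per 100 g: meat, grain, vegetable
# Constraint rows are negated so A_ub @ x <= b_ub encodes "at least" minima.
A_ub = [[-20, -8, -3],            # protein, g per 100 g of each food
        [-250, -350, -80]]        # energy, kcal per 100 g of each food
b_ub = [-45, -2000]               # daily minima: 45 g protein, 2000 kcal

res = linprog(c=ghge, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 10)] * 3)
print(res.x, res.fun)             # 100-g portions of each food, total GHGE
```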
Tourtelot, H.A.
1964-01-01
The composition of nonmarine shales of Cretaceous age that contain less than 1 per cent organic carbon is assumed to represent the inherited minor-element composition of clayey sediments delivered to the Cretaceous sea that occupied the western interior region of North America. Differences in minor-element content between these samples and samples of (a) nonmarine carbonaceous shales (1 to 17 per cent organic carbon), (b) nearshore marine shales (less than 1 per cent organic carbon), and (c) offshore marine shales (as much as 8 per cent organic carbon), all of the same age, reveal certain aspects of the role played by clay minerals and organic materials in affecting the minor-element composition of the rocks. The organic carbon in the nonmarine rocks occurs in disseminated coaly plant remains. The organic carbon in the marine rocks occurs predominantly in humic material derived from terrestrial plants. The close similarity in composition between the organic isolates from the marine samples and low-rank coal suggests that the amount of marine organic material in these rocks is small. The minor-element content of the two kinds of nonmarine shales is the same despite the relatively large amount of organic carbon in the carbonaceous shales. The nearshore marine shales, however, contain larger median amounts of arsenic, boron, chromium, vanadium and zinc than do the nonmarine rocks; and the offshore marine shales contain even larger amounts of these elements. Cobalt, molybdenum, lead and zirconium show insignificant differences in median content between the nonmarine and marine rocks, although as much as 25 ppm molybdenum is present in some offshore marine samples. The gallium content is lower in the marine than in the nonmarine samples. Copper and selenium contents of the two kinds of nonmarine rocks and the nearshore marine samples are the same, but those of the offshore samples are larger. In general, arsenic, chromium, copper, molybdenum, selenium, vanadium and zinc are concentrated in those offshore marine samples having the largest amounts of organic carbon, but samples with equal amounts of vanadium, for instance, may differ by a factor of 3 in their amount of organic carbon. Arsenic and molybdenum occur in some samples chiefly in syngenetic pyrite but also are present in relatively large amounts in samples that contain little pyrite. The data on nonmarine carbonaceous shales indicate that organic matter of terrestrial origin in marine shales contributes little to the minor-element content of such rocks. It is possible that marine organic matter, even though seemingly small in amount in marine shales, contributes to the minor-element composition of the shales. In addition to any such contribution, however, the great effectiveness in sorption processes of humic materials in conjunction with clay minerals suggests that such processes must have played an important role as these materials moved from the relatively dilute solutions of the nonmarine environment to the relatively concentrated solution of sea water. The volumes of sea water sufficient to supply for sorption the amounts of most minor elements in the offshore marine samples are insignificant compared to the volumes of water with which the clay and organic matter were in contact during their transportation and sedimentation. Consequently, the chemical characteristics of the environment in which the clay minerals and organic matter accumulated and underwent diagenesis probably were the most important factors in controlling the degree to which sorption processes and the formation of syngenetic minerals affected the final composition of the rocks. © 1969.
Zhang, Chun-Yun; Hu, Hui-Chao; Chai, Xin-Sheng; Pan, Lei; Xiao, Xian-Ming
2014-02-07
In this paper, we present a novel method for determining the maximal amount of ethane, a minor gas species, adsorbed in a shale sample. The method is based on the time-dependent release of ethane from shale samples measured by headspace gas chromatography (HS-GC). The study includes a mathematical model for fitting the experimental data, calculating the maximal amount of gas adsorbed, and predicting results at other temperatures. The method is a more efficient alternative to the isothermal adsorption method that is in widespread use today. Copyright © 2013 Elsevier B.V. All rights reserved.
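One common way to fit time-dependent release data of this kind is a first-order model, m(t) = m_max(1 - e^{-kt}); the paper's actual model may differ, and the data points below are invented for illustration.

```python
# Fit a first-order release model to time-dependent ethane release data.
import numpy as np
from scipy.optimize import curve_fit

t_min = np.array([1, 2, 5, 10, 20, 40, 60], dtype=float)
released = np.array([0.8, 1.5, 3.0, 4.6, 6.0, 6.8, 7.0])   # ug, hypothetical

def release(t, m_max, k):
    return m_max * (1.0 - np.exp(-k * t))

(m_max, k), _ = curve_fit(release, t_min, released, p0=(7.0, 0.1))
print(f"maximal adsorbed amount ~ {m_max:.2f} ug (k = {k:.3f} / min)")
```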
Binder, Harald; Sauerbrei, Willi; Royston, Patrick
2013-06-15
In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R(2) = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.
Maximizing recovery of water-soluble proteins through acetone precipitation.
Crowell, Andrew M J; Wall, Mark J; Doucette, Alan A
2013-09-24
Solvent precipitation is commonly used to purify protein samples, as seen with the removal of sodium dodecyl sulfate through acetone precipitation. However, in its current practice, protein loss is believed to be an inevitable consequence of acetone precipitation. We herein provide an in-depth characterization of protein recovery through acetone precipitation. In 80% acetone, the precipitation efficiency for six of 10 protein standards was poor (ca. ≤15%). Poor recovery was also observed for proteome extracts, including bacterial and mammalian cells. As shown in this work, increasing the ionic strength of the solution dramatically improves the precipitation efficiency of individual proteins and proteome mixtures (ca. 80-100% yield). This is obtained by including 1-30 mM NaCl together with acetone (50-80%), which maximizes protein precipitation efficiency. The amount of salt required to restore the recovery correlates with the amount of protein in the sample, as well as the intrinsic protein charge and the dielectric strength of the solution. This synergistic approach to protein precipitation in acetone with salt is consistent with a model of ion pairing in organic solvent, and establishes an improved method to recover proteins and proteome mixtures in high yield. Copyright © 2013 Elsevier B.V. All rights reserved.
Household hazardous waste data for the UK by direct sampling.
Slack, Rebecca J; Bonin, Michael; Gronow, Jan R; Van Santen, Anton; Voulvoulis, Nikolaos
2007-04-01
The amount of household hazardous waste (HHW) disposed of in the United Kingdom (UK) requires assessment. This paper describes a direct analysis study carried out in three areas in southeast England involving over 500 households. Each participating householder was provided with a special bin in which to place items corresponding to a list of HHW. The amount of waste collected was split into nine broad categories: batteries, home maintenance (DIY), vehicle upkeep, pesticides, pet care, pharmaceuticals, photographic chemicals, household cleaners, and printer cartridges. Over 1 T of waste was collected from the sample households over a 32-week period, which would correspond to an estimated 51,000 T if extrapolated to the UK population for the same period or over 7,000 T per month. Details of likely disposal routes adopted by householders were also sought, demonstrating the different pathways selected for different waste categories. Co-disposal with residual household waste dominated for waste batteries and veterinary medicines, hence avoiding classification as hazardous waste under new UK waste regulations. The information can be used to set a baseline for the management of HHW and provides information for an environmental risk assessment of the disposal of such wastes to landfill.
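A back-of-envelope check of the scale-up quoted above; the UK household count is an assumed round figure, and the collected mass was "over 1 T", so the result is indicative only:

```python
# Extrapolate the sampled HHW mass to the whole UK.
sample_households = 500
sample_tonnes = 1.0                 # collected over the 32-week study (just over 1 t)
uk_households = 26_000_000          # assumed round figure for the UK

tonnes_32wk = sample_tonnes / sample_households * uk_households
print(f"~{tonnes_32wk:,.0f} t per 32 weeks, "
      f"~{tonnes_32wk * 4 / 32:,.0f} t per 4-week month")
```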
Tuinman, Albert A; Lewis, Linda A; Lewis, Samuel A
2003-06-01
The application of electrospray ionization mass spectrometry (ESI-MS) to trace-fiber color analysis is explored using acidic dyes commonly employed to color nylon-based fibers, as well as extracts from dyed nylon fibers. Qualitative information about constituent dyes and quantitative information about the relative amounts of those dyes present on a single fiber become readily available using this technique. Sample requirements for establishing the color identity of different samples (i.e., comparative trace-fiber analysis) are shown to be submillimeter. Absolute verification of dye mixture identity (beyond the comparison of molecular weights derived from ESI-MS) can be obtained by expanding the technique to include tandem mass spectrometry (ESI-MS/MS). For dyes of unknown origin, the ESI-MS/MS analyses may offer insights into the chemical structure of the compound-information not available from chromatographic techniques alone. This research demonstrates that ESI-MS is viable as a sensitive technique for distinguishing dye constituents extracted from a minute amount of trace-fiber evidence. A protocol is suggested to establish/refute the proposition that two fibers--one of which is available in minute quantity only--are of the same origin.
QUEST: Eliminating Online Supervised Learning for Efficient Classification Algorithms.
Zwartjes, Ardjan; Havinga, Paul J M; Smit, Gerard J M; Hurink, Johann L
2016-10-01
In this work, we introduce QUEST (QUantile Estimation after Supervised Training), an adaptive classification algorithm for Wireless Sensor Networks (WSNs) that eliminates the necessity for online supervised learning. Online processing is important for many sensor network applications. Transmitting raw sensor data puts high demands on the battery, reducing network lifetime. By merely transmitting partial results or classifications based on the sampled data, the amount of traffic on the network can be significantly reduced. Such classifications can be made by learning-based algorithms using sampled data. An important issue, however, is the training phase of these learning-based algorithms. Training a deployed sensor network requires a lot of communication and an impractical amount of human involvement. QUEST is a hybrid algorithm that combines supervised learning in a controlled environment with unsupervised learning at the location of deployment. Using the SITEX02 dataset, we demonstrate that the presented solution works with a performance penalty of less than 10% in 90% of the tests. Under some circumstances, it even outperforms a network of classifiers completely trained with supervised learning. As a result, the need for on-site supervised learning and communication for training is completely eliminated by our solution.
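A loose sketch of the idea the name suggests: learn which quantile of a feature separates the classes in a controlled setting, then re-estimate that quantile from unlabeled data at the deployment site to set the local decision threshold. This is an illustrative reading, not the authors' reference implementation.

```python
# Quantile-transfer classification sketch (illustrative, not the QUEST paper's code).
import numpy as np

rng = np.random.default_rng(1)
# Controlled training data: two classes along one sensor feature
train = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])

threshold = 1.5                                  # supervised boundary (toy)
q = np.mean(train < threshold)                   # its quantile in training data

# Deployment: same classes, but the sensor gain/offset has drifted; no labels.
field = np.concatenate([rng.normal(1, 1.2, 500), rng.normal(4.6, 1.2, 500)])
field_threshold = np.quantile(field, q)          # unsupervised re-estimation
pred = (field > field_threshold).astype(int)
print(f"q = {q:.2f}, field threshold = {field_threshold:.2f}")
```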
Determination of alkaloids in onion nectar by micellar electrokinetic chromatography.
Carolina Soto, Verónica; Jofré, Viviana Patricia; Galmarini, Claudio Romulo; Silva, María Fernanda
2016-07-01
Nectar is the most important floral reward offered by plants to insects. Minor components such as alkaloid compounds in nectar affect bee foraging, with great influence on seed production. CE is an advantageous tool for the analysis of unexplored samples such as onion nectar due to the limited amounts of sample available. Considering the importance of these compounds, a simultaneous determination of nicotine, theophylline, theobromine, caffeine, harmaline and piperine in onion nectar by MEKC-UV is herein reported. The extraction of alkaloid compounds in nectar was performed by SPE using a homemade miniaturized column (C18). Effects of several important factors affecting extraction efficiency as well as electrophoretic performance were investigated to acquire optimum conditions. Under the proposed conditions, the analytes can be separated within 15 min in a 50 cm effective length capillary (75 μm id) at a separation voltage of 20 kV in 20 mmol/L sodium tetraborate, 100 mmol/L SDS. The required sample amount was reduced by up to 2000 times compared to traditional methods, reaching limits of detection as low as 0.0153 ng/L. For the first time, this study demonstrates that there are marked qualitative and quantitative differences in nectar alkaloids between open pollinated and male sterile lines (MSLs) and also within MSLs. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Huang, Po-Jung; Baghbani Kordmahale, Sina; Chou, Chao-Kai; Yamaguchi, Hirohito; Hung, Mien-Chie; Kameoka, Jun
2016-03-01
Signal transductions including multiple protein post-translational modifications (PTM), protein-protein interactions (PPI), and protein-nucleic acid interactions (PNI) play critical roles in cell proliferation and differentiation that are directly related to cancer biology. Traditional methods, like mass spectrometry, immunoprecipitation, fluorescence resonance energy transfer, and fluorescence correlation spectroscopy, require a large amount of sample and long processing times. The "microchannel for multiple-parameter analysis of proteins in single-complex" (mMAPS) we proposed can reduce the process time and sample volume because this system is composed of microfluidic channels, fluorescence microscopy, and computerized data analysis. In this paper, we will present an automated mMAPS, including an integrated microfluidic device, automated stage, and electrical relay for high-throughput clinical screening. Based on this result, we estimated that this automated detection system will be able to screen approximately 150 patient samples in a 24-hour period, providing a practical application to analyze tissue samples in a clinical setting.
TumorMap: Exploring the Molecular Similarities of Cancer Samples in an Interactive Portal.
Newton, Yulia; Novak, Adam M; Swatloski, Teresa; McColl, Duncan C; Chopra, Sahil; Graim, Kiley; Weinstein, Alana S; Baertsch, Robert; Salama, Sofie R; Ellrott, Kyle; Chopra, Manu; Goldstein, Theodore C; Haussler, David; Morozova, Olena; Stuart, Joshua M
2017-11-01
Vast amounts of molecular data are being collected on tumor samples, which provide unique opportunities for discovering trends within and between cancer subtypes. Such cross-cancer analyses require computational methods that enable intuitive and interactive browsing of thousands of samples based on their molecular similarity. We created a portal called TumorMap to assist in exploration and statistical interrogation of high-dimensional complex "omics" data in an interactive and easily interpretable way. In the TumorMap, samples are arranged on a hexagonal grid based on their similarity to one another in the original genomic space and are rendered with Google's Map technology. While the important feature of this public portal is the ability for the users to build maps from their own data, we pre-built genomic maps from several previously published projects. We demonstrate the utility of this portal by presenting results obtained from The Cancer Genome Atlas project data. Cancer Res; 77(21); e111-4. ©2017 American Association for Cancer Research.
Kellogg, Joshua J.; Wallace, Emily D.; Graf, Tyler N.; Oberlies, Nicholas H.; Cech, Nadja B.
2018-01-01
Metabolomics has emerged as an important analytical technique for multiple applications. The value of information obtained from metabolomics analysis depends on the degree to which the entire metabolome is present and the reliability of sample treatment to ensure reproducibility across the study. The purpose of this study was to compare methods of preparing complex botanical extract samples prior to metabolomics profiling. Two extraction methodologies, accelerated solvent extraction and a conventional solvent maceration, were compared using commercial green tea [Camellia sinensis (L.) Kuntze (Theaceae)] products as a test case. The accelerated solvent protocol was first evaluated to ascertain critical factors influencing extraction using a D-optimal experimental design study. The accelerated solvent and conventional extraction methods yielded similar metabolite profiles for the green tea samples studied. The accelerated solvent extraction yielded higher total amounts of extracted catechins, was more reproducible, and required less active bench time to prepare the samples. This study demonstrates the effectiveness of accelerated solvent as an efficient methodology for metabolomics studies. PMID:28787673
Lim, Chi Young; Kim, Jung-Yeon; Yoon, Mi-Jin; Chang, Hang Seok; Park, Cheong Soo
2015-01-01
Purpose: Radioiodine ablation therapy is required for patients who have undergone a total thyroidectomy. Through a comparative review of a low iodine diet (LID) and a restricted iodine diet (RID), the study aims to suggest guidelines that are suitable for the conditions of Korea. Materials and Methods: The study was conducted with 101 patients. With 24-hour urine samples from the patients after a 2-week restricted diet and after a 4-week restricted diet, the amount of iodine in the urine was estimated. The consumed radioiodine amounts for 2 hours and 24 hours were calculated. Results: This study was conducted with 47 LID patients and 54 RID patients. The amounts of iodine in urine in the 2-week and 4-week cases for each group showed no significant differences. The amounts of iodine in urine for both groups were within the range of the criteria for radioiodine ablation therapy. Also, the 2-hour and 24-hour radioiodine consumption measured after the 4-week restrictive diet did not show statistical differences between the two groups. Conclusion: A 2-week RID can be considered as preparation for radioiodine ablation therapy after patients undergo a total thyroidectomy. PMID:26069126
27 CFR 40.133 - Amount of individual bond.
Code of Federal Regulations, 2010 CFR
2010-04-01
... amount, the manufacturer shall immediately file a strengthening or superseding bond as required by this subpart. The amount of any such bond (or the total amount including strengthening bonds, if any) need not...
27 CFR 40.133 - Amount of individual bond.
Code of Federal Regulations, 2011 CFR
2011-04-01
... amount, the manufacturer shall immediately file a strengthening or superseding bond as required by this subpart. The amount of any such bond (or the total amount including strengthening bonds, if any) need not...
MacLennan, Matthew S; Peru, Kerry M; Swyngedouw, Chris; Fleming, Ian; Chen, David D Y; Headley, John V
2018-05-15
Oil sands mining in Alberta, Canada, requires removal and stockpiling of considerable volumes of near-surface overburden material. This overburden includes lean oil sands (LOS) which cannot be processed economically but contain sparingly soluble petroleum hydrocarbons and naphthenic acids, which can leach into environmental waters. In order to measure and track the leaching of dissolved constituents and distinguish industrially derived organics from naturally occurring organics in local waters, practical methods were developed for characterizing multiple sources of contaminated water leakage. Capillary electrophoresis/positive-ion electrospray ionization low-resolution time-of-flight mass spectrometry (CE/LRMS), high-resolution negative-ion electrospray ionization Orbitrap mass spectrometry (HRMS) and conventional gas chromatography/flame ionization detection (GC/FID) were used to characterize porewater samples collected from within Athabasca LOS and mixed surficial materials. GC/FID was used to measure total petroleum hydrocarbons and HRMS was used to measure total naphthenic acid fraction components (NAFCs). HRMS and CE/LRMS were used to characterize samples according to source. The amounts of total petroleum hydrocarbons in each sample as measured by GC/FID ranged from 0.1 to 15.1 mg/L, while the amounts of NAFCs as measured by HRMS ranged from 5.3 to 82.3 mg/L. Factor analysis (FA) on the HRMS data visually demonstrated clustering according to sample source and was correlated to molecular formula. LRMS coupled to capillary electrophoresis separation (CE/LRMS) provides important information on NAFC isomers by adding analyte migration time data to m/z and peak intensity. Differences in the measured amounts of total petroleum hydrocarbons by GC/FID and NAFCs by HRMS indicate that the two methods provide complementary information about the nature of dissolved organic species in soil or water leachate samples. The NAFC molecule class OxSy is a possible tracer for LOS seepage. CE/LRMS provides complementary information and is a feasible and practical option for source evaluation of NAFCs in water. Copyright © 2018 John Wiley & Sons, Ltd.
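A minimal sketch of the factor-analysis visualization step described above: project a per-sample feature matrix (intensities per molecular formula) onto two factors and look for clustering by source. The data here are random stand-ins, not the study's HRMS measurements.

```python
# Two-factor projection of a samples-by-formulas intensity matrix.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
# 12 porewater samples x 200 molecular-formula intensities, two sources
source_a = rng.normal(0.0, 1.0, (6, 200)) + rng.normal(0, 1, 200)
source_b = rng.normal(1.5, 1.0, (6, 200)) + rng.normal(0, 1, 200)
X = np.vstack([source_a, source_b])

scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(X)
print(scores[:, 0])   # first factor separates the two sources in this toy set
```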
Higher certainty of the laser-induced damage threshold test with a redistributing data treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, Lars; Mrohs, Marius; Gyamfi, Mark
2015-10-15
Owing to its statistical nature, the measurement of the laser-induced damage threshold always carries the risk of over- or underestimating the real threshold value. For the established S-on-1 (and 1-on-1) measurement procedures outlined in the corresponding standard ISO 21254, the results depend on the number of data points and their distribution over the fluence scale. Given the limited space on a test sample as well as the requirements on test-site separation and beam sizes, the amount of data from one test is restricted. This paper reports on a way to treat damage test data in order to reduce the statistical error and therefore the measurement uncertainty. Three simple assumptions allow for the assignment of one data point to multiple data bins and therefore virtually increase the available data base.
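The paper's three assumptions are not spelled out in this record, so the following sketch encodes one plausible reading of the redistribution idea: (a) a site that damaged at fluence F would also have damaged at any higher fluence, and (b) a site that survived at F would also have survived at any lower fluence, so each shot contributes to every bin its outcome constrains:

```python
# Hedged sketch of a redistributing treatment for S-on-1 damage data.
import numpy as np

# Hypothetical test results: (peak fluence in J/cm^2, site damaged?)
shots = [(1.0, False), (1.5, False), (2.0, False), (2.5, True),
         (3.0, False), (3.5, True), (4.0, True), (4.5, True)]

for c in np.arange(1.0, 5.0, 1.0):           # fluence bin centers
    damaged = sum(1 for f, d in shots if d and f <= c)       # rule (a)
    survived = sum(1 for f, d in shots if not d and f >= c)  # rule (b)
    n = damaged + survived
    p = damaged / n if n else float("nan")
    print(f"{c:.1f} J/cm^2: n = {n}, P(damage) = {p:.2f}")
```

Each shot enters several bins, so the effective number of observations per fluence bin, and hence the statistical certainty of the damage probability curve, increases without additional test sites.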
Determination of phosphorus in small amounts of protein samples by ICP-MS.
Becker, J Sabine; Boulyga, Sergei F; Pickhardt, Carola; Becker, J; Buddrus, Stefan; Przybylski, Michael
2003-02-01
Inductively coupled plasma mass spectrometry (ICP-MS) is used for phosphorus determination in protein samples. A small amount of solid protein sample (down to 1 µg) or of digested protein solution (1-10 µL) was denatured in nitric acid and hydrogen peroxide by closed-microvessel microwave digestion. Phosphorus determination was performed with an optimized analytical method using a double-focusing sector field inductively coupled plasma mass spectrometer (ICP-SFMS) and quadrupole-based ICP-MS (ICP-QMS). For quality control of the phosphorus determination, a certified reference material (CRM), single cell proteins (BCR 273) with a high phosphorus content of 26.8±0.4 mg g⁻¹, was analyzed. To study phosphorus determination in proteins while reducing the sample amount as far as possible, the homogeneity of CRM BCR 273 was investigated. Relative standard deviation and measurement accuracy in ICP-QMS were within 2%, 3.5%, 11% and 12% when using CRM BCR 273 sample weights of 40 mg, 5 mg, 1 mg and 0.3 mg, respectively. The lowest possible sample weight for an accurate phosphorus analysis in protein samples by ICP-MS is discussed. The analytical method developed was applied to the analysis of homogeneous protein samples in very low amounts [1-100 µg of solid protein sample, e.g. beta-casein, or down to 1 µL of protein or digest in solution (e.g., tau protein)]. A further reduction of the diluted protein solution volume was achieved by the application of flow injection in ICP-SFMS, which is discussed with reference to real protein digests after protein separation using 2D gel electrophoresis. The detection limits for phosphorus in biological samples were determined by ICP-SFMS down to the ng g⁻¹ level. The present work discusses the figures of merit for the determination of phosphorus in small amounts of protein sample by ICP-SFMS in comparison to ICP-QMS.
Determination of shielding requirements for mammography.
Okunade, Akintunde Akangbe; Ademoroti, Olalekan Albert
2004-05-01
Shielding requirements for mammography have been investigated for the case in which consideration is given to attenuation by the compression paddle, breast tissue, grid and image receptor (intervening materials). By matching of the attenuation and hardening properties, comparisons are made between the shielding afforded by breast tissue materials (water, Lucite and 50%-50% adipose-glandular tissue) and some materials considered for shielding diagnostic x-ray beams, namely lead, steel and gypsum wallboard. Results show that significant differences exist between the thickness required to produce equal attenuation and that required to produce equal hardening of a given incident beam. While an attenuation-equivalent thickness produces equal exposure, it does not produce equal hardening. For shielding purposes, equivalence in exposure reduction without equivalence in the penetrating power of the emerging beam does not amount to equivalence in the shielding afforded by two different materials. Presented are models and results of sample calculations of the additional shielding required beyond that provided by intervening materials. The shielding requirements for the integrated beam emerging from intervening materials are different from those for the integrated beam emerging from materials (lead/steel/gypsum wallboard) with attenuation-equivalent thicknesses of these intervening materials.
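One way to formalize why equal attenuation does not imply equal hardening, using standard broad-spectrum attenuation notation rather than the paper's own models:

```latex
% Transmitted exposure through thickness x of a material with linear
% attenuation coefficient \mu(E), for an incident spectrum \Phi(E):
X(x) = \int_0^{E_{\max}} \Phi(E)\, e^{-\mu(E)x}
       \left(\frac{\mu_{en}(E)}{\rho}\right)_{\mathrm{air}} E\, \mathrm{d}E
% Attenuation-equivalent thicknesses x_A, x_B of materials A and B satisfy
% X_A(x_A) = X_B(x_B), yet the emerging spectra
% \Phi(E)\, e^{-\mu_A(E)x_A} and \Phi(E)\, e^{-\mu_B(E)x_B}
% generally differ in shape, so the two beams are hardened differently and
% penetrate any subsequent shielding differently.
```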
Lack of mutagens in deep-fat-fried foods obtained at the retail level.
Taylor, S L; Berg, C M; Shoptaugh, N H; Scott, V N
1982-04-01
The basic methylene chloride extracts from 20 of 30 samples of foods fried in deep fat failed to elicit any mutagenic response that could be detected in the Salmonella typhimurium/mammalian microsome assay. The basic extracts of the remaining ten samples (all three chicken samples studied, two of the four potato-chip samples, one of four corn-chip samples, the sample of onion rings, two of six doughnuts, and one of three samples of french-fried potatoes) showed evidence of weak mutagenic activity. In these samples, amounts of the basic extract equivalent to 28.5-57 g of the original food sample were required to produce revertants at levels of 2.6-4.8 times the background level. Only two of the acidic methylene chloride extracts from the 30 samples exhibited mutagenic activity greater than 2.5 times the background reversion level, and in both cases (one corn-chip and one shrimp sample) the mutagenic response was quite weak. The basic extract of hamburgers fried in deep fat in a home-style fryer possessed higher levels of mutagenic activity (13 times the background reversion level). However, the mutagenic activity of deep-fried hamburgers is some four times lower than that of pan-fried hamburgers.
Rapid determination of 228Ra in environmental samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxwell, Sherrod L.; Culligan, Brian K.
A new rapid method for the determination of 228Ra in natural water samples has been developed at the SRNL/EBL (Savannah River National Laboratory/Environmental Bioassay Laboratory) that can be used for emergency response or routine samples. While gamma spectrometry can be employed with sufficient detection limits to determine 228Ra in solid samples (via 228Ac), radiochemical methods that employ gas flow proportional counting techniques typically provide lower MDA (Minimum Detectable Activity) levels for the determination of 228Ra in water samples. Most radiochemical methods for 228Ra collect and purify 228Ra and allow for 228Ac daughter ingrowth for ~36 hours. In this new SRNL/EBL approach, 228Ac is collected and purified from the water sample directly, eliminating this delay. The sample preparation requires only about 4 hours, so that 228Ra assay results on water samples can be achieved in <6 hours. The method uses a rapid calcium carbonate precipitation, with a small amount of phosphate added to enhance chemical yields (typically >90%), followed by rapid cation exchange removal of calcium. Lead, bismuth, uranium, thorium and protactinium isotopes are also removed by the cation exchange separation. 228Ac is eluted from the cation resin directly onto a DGA Resin cartridge attached to the bottom of the cation column to purify the 228Ac. DGA Resin also removes lead and bismuth isotopes, along with Sr isotopes and 90Y. La is used to determine the 228Ac chemical yield via ICP-MS, but 133Ba can be used instead if ICP-MS assay is not available. Unlike some older methods, no lead or strontium holdback carriers or continual readjustment of sample pH is required.
Novel method of realizing metal freezing points by induced solidification
NASA Astrophysics Data System (ADS)
Ma, C. K.
1997-07-01
The freezing point of a pure metal, tf, is the temperature at which the solid and liquid phases are in equilibrium. The purest metal available is actually a dilute alloy. Normally, the liquidus point of a sample, tl, at which the amount of the solid phase in equilibrium with the liquid phase is minute, provides the closest approximation to tf. Thus the experimental realization of tf is a matter of realizing tl. The common method is to cool a molten sample continuously so that it supercools and recalesces. The highest temperature after recalescence is normally the best experimental value of tl. In the realization, supercooling of the sample at the sample container and the thermometer well is desirable for the formation of dual solid-liquid interfaces to thermally isolate the sample and the thermometer. However, the subsequent recalescence of the supercooled sample requires the formation of a certain amount of solid, which is not minute. Obviously, the plateau temperature is then not the liquidus point. In this article we describe a method that minimizes supercooling. The condition that provides tl is closely approached so that the latter may be measured. As the temperature of the molten sample approaches the anticipated value of tl, a small solid of the same alloy is introduced into the sample to induce solidification. In general, solidification does not occur as long as the temperature is above or at tl, and occurs as soon as the sample supercools minutely. Thus tl can be obtained, in principle, by observing the temperature at which induced solidification begins. If the solid is introduced after the sample has supercooled slightly, a slight recalescence results and the subsequent maximum temperature is a close approximation to tl. We demonstrate that the principle of induced solidification is indeed applicable to freezing point measurements by applying it to the design of a copper-freezing-point cell for industrial applications, in which a supercooled sample is reheated and then induced to solidify by the solidification of an auxiliary sample. Further experimental studies are necessary to assess the practical advantages and disadvantages of the induction method.
Parkington, Karisa B; Clements, Rebecca J; Landry, Oriane; Chouinard, Philippe A
2015-10-01
We examined how performance on an associative learning task changes in a sample of undergraduate students as a function of their autism-spectrum quotient (AQ) score. The participants, without any prior knowledge of the Japanese language, learned to associate hiragana characters with button responses. In the novel condition, 50 participants learned visual-motor associations without any prior exposure to the stimuli's visual attributes. In the familiar condition, a different set of 50 participants completed a session in which they first became familiar with the stimuli's visual appearance prior to completing the visual-motor association learning task. Participants with higher AQ scores had a clear advantage in the novel condition; the amount of training required to reach the learning criterion correlated negatively with AQ. In contrast, participants with lower AQ scores had a clear advantage in the familiar condition; the amount of training required to reach the learning criterion correlated positively with AQ. An examination of how each of the AQ subscales correlated with these learning patterns revealed that abilities in visual discrimination, which is known to depend on the visual ventral-stream system, may have afforded an advantage in the novel condition for the participants with higher AQ scores, whereas abilities in attention switching, which are known to require mechanisms in the prefrontal cortex, may have afforded an advantage in the familiar condition for the participants with lower AQ scores.
Mars sample return mission architectures utilizing low thrust propulsion
NASA Astrophysics Data System (ADS)
Derz, Uwe; Seboldt, Wolfgang
2012-08-01
The Mars sample return mission is a flagship mission within ESA's Aurora program and is envisioned to take place in the timeframe of 2020-2025. Previous studies developed a mission architecture consisting of two elements, an orbiter and a lander, each utilizing chemical propulsion and a heavy launcher like Ariane 5 ECA. The lander transports an ascent vehicle to the surface of Mars. The orbiter performs a separate impulsive transfer to Mars, conducts a rendezvous in Mars orbit with the sample container, delivered by the ascent vehicle, and returns the samples to Earth in a small Earth entry capsule. Because the launch of the heavy orbiter by Ariane 5 ECA makes an Earth swing-by mandatory for the trans-Mars injection, its total mission time amounts to about 1460 days. The present study takes a fresh look at the subject and conducts a more general mission and system analysis of the space transportation elements including electric propulsion for the transfer. Therefore, detailed spacecraft models for orbiters, landers and ascent vehicles are developed. Based on that, trajectory calculations and optimizations of interplanetary transfers, Mars entries, descents and landings as well as Mars ascents are carried out. The results of the system analysis identified electric propulsion for the orbiter as most beneficial in terms of launch mass, leading to a reduction of launch vehicle requirements and enabling a launch by a Soyuz-Fregat into GTO. Such a sample return mission could be conducted within 1150-1250 days. Concerning the lander, a separate launch in combination with electric propulsion leads to a significant reduction of launch vehicle requirements, but also requires a large number of engines and correspondingly a large power system. Therefore, a lander performing a separate chemical transfer could possibly be more advantageous. Alternatively, a second possible mission architecture has been developed, requiring only one heavy launch vehicle (e.g., Proton). In that case the lander is transported piggyback by the electrically propelled orbiter.
Chen, Guang; Liu, Jianjun; Liu, Mengge; Li, Guoliang; Sun, Zhiwei; Zhang, Shijuan; Song, Cuihua; Wang, Hua; Suo, Yourui; You, Jinmao
2014-07-25
A new fluorescent reagent, 1-(1H-imidazol-1-yl)-2-(2-phenyl-1H-phenanthro[9,10-d]imidazol-1-yl)ethanone (IPPIE), is synthesized, and a simple pretreatment based on ultrasonic-assisted derivatization microextraction (UDME) with IPPIE is proposed for the selective derivatization of 12 aliphatic amines (C1 methylamine to C12 dodecylamine) in complex matrix samples (irrigation water, river water, waste water, cultivated soil, riverbank soil and riverbed soil). Under the optimal experimental conditions (solvent: ACN-HCl, catalyst: none, molar ratio: 4.3, time: 8 min and temperature: 80°C), a micro amount of sample (40 μL; 5 mg) can be pretreated in only 10 min, with no preconcentration, evaporation or other additional manual operations required. The interfering substances (aromatic amines, aliphatic alcohols and phenols) gave derivatization yields of <5%, causing insignificant matrix effects (<4%). IPPIE-analyte derivatives are separated by high performance liquid chromatography (HPLC) and quantified by fluorescence detection (FD). Very low instrumental detection limits (IDL: 0.66-4.02 ng/L) and method detection limits (MDL: 0.04-0.33 ng/g; 5.96-45.61 ng/L) are achieved. Analytes are further identified from adjacent peaks by on-line ion trap mass spectrometry (MS), thereby avoiding additional operations for impurities. With this UDME-HPLC-FD-MS method, the accuracy (−0.73% to 2.12%), precision (intra-day: 0.87-3.39%; inter-day: 0.16-4.12%), recovery (97.01-104.10%) and sensitivity were significantly improved. Successful applications to environmental samples demonstrate the superiority of this method for the sensitive, accurate and rapid determination of trace aliphatic amines in micro amounts of complex samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Mariño-Repizo, Leonardo; Goicoechea, Hector; Raba, Julio; Cerutti, Soledad
2018-06-07
A novel, simple, easy and cheap sample treatment strategy based on salting-out assisted liquid-liquid extraction (SALLE) was developed for ochratoxin A (OTA) ultra-trace analysis in beer samples, with determination by ultra-high performance liquid chromatography-tandem mass spectrometry. The factors involved in the efficiency of the pretreatment were studied employing a factorial design in the screening phase, and the optimal conditions of the significant variables on the analytical response were evaluated using a central composite face-centred design (CCF). Consequently, the amount of salt ((NH4)2SO4), together with the volumes of sample, hydrophilic (acetone) and nonpolar (toluene) solvents, and the times of vortexing and centrifugation, were optimized. Under optimized conditions, the limits of detection (LOD) and quantification (LOQ) were 0.02 µg/L and 0.08 µg/L, respectively. OTA extraction recovery by SALLE was approximately 90% (at 0.2 µg/L). Furthermore, the methodology was in agreement with EU Directive requirements and was successfully applied to the analysis of beer samples.
Bioremediation of diesel from a rocky shoreline in an arid tropical climate.
Guerin, Turlough F
2015-10-15
A non-invasive sampling and remediation strategy was developed and implemented at a shoreline contaminated with spilt diesel. The strategy was designed to treat the contamination in a practical, cost-effective manner that was safe for personnel working on the stockpiles and their ship-loading activity, because the location and nature of the impacted geology (rock fill) and sediment precluded conventional ex-situ treatment and any in-situ treatment requiring drilling. A bioremediation process using surfactant, added N and P, and increased aeration raised the degradation rate, allowing the site owner to meet regulatory obligations. Petroleum hydrocarbons decreased from saturation concentrations to less than detectable amounts at the completion of treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.
Intelligent Detection of Structure from Remote Sensing Images Based on Deep Learning Method
NASA Astrophysics Data System (ADS)
Xin, L.
2018-04-01
Utilizing high-resolution remote sensing images for Earth observation has become a common method of land use monitoring. Traditional image interpretation requires substantial human participation, which is inefficient and makes accuracy difficult to guarantee. Artificial intelligence methods such as deep learning have numerous advantages in image recognition. By means of a large number of remote sensing image samples and deep neural network models, objects of interest such as buildings can be deciphered rapidly. In terms of both efficiency and accuracy, deep learning is superior. This paper describes the application of deep learning to a large set of remote sensing image samples and verifies the feasibility of building extraction through experiments.
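A minimal sketch of the kind of model the paper implies: a tiny fully convolutional network producing a per-pixel building mask from RGB tiles. The architecture, tile size, and training step are illustrative assumptions, not the network used in the paper:

```python
# Tiny fully convolutional network for per-pixel building masks.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),            # per-pixel building logit
        )

    def forward(self, x):
        return self.net(x)

model = TinyFCN()
tiles = torch.rand(4, 3, 256, 256)                    # batch of image tiles
masks = (torch.rand(4, 1, 256, 256) > 0.5).float()    # dummy labels
loss = nn.BCEWithLogitsLoss()(model(tiles), masks)
loss.backward()                                       # one illustrative step
print(float(loss))
```

In practice a deeper encoder-decoder (e.g. a U-Net-style network) and a large labeled tile set would replace this toy, but the per-pixel classification framing is the same.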
Urban, Lorien E.; Weber, Judith L.; Heyman, Melvin B.; Schichtl, Rachel L.; Verstraete, Sofia; Lowery, Nina S.; Das, Sai Krupa; Schleicher, Molly M.; Rogers, Gail; Economos, Christina; Masters, William A.; Roberts, Susan B.
2017-01-01
Background Excess energy intake from meals consumed away from home is implicated as a major contributor to obesity, and ~50% of US restaurants are individual or small-chain (non–chain) establishments that do not provide nutrition information. Objective To measure the energy content of frequently ordered meals in non–chain restaurants in three US locations, and compare with the energy content of meals from large-chain restaurants, energy requirements, and food database information. Design A multisite random-sampling protocol was used to measure the energy contents of the most frequently ordered meals from the most popular cuisines in non–chain restaurants, together with equivalent meals from large-chain restaurants. Setting Meals were obtained from restaurants in San Francisco, CA; Boston, MA; and Little Rock, AR, between 2011 and 2014. Main outcome measures Meal energy content determined by bomb calorimetry. Statistical analysis performed Regional and cuisine differences were assessed using a mixed model with restaurant nested within region×cuisine as the random factor. Paired t tests were used to evaluate differences between non–chain and chain meals, human energy requirements, and food database values. Results Meals from non–chain restaurants contained 1,205±465 kcal/meal, amounts that were not significantly different from equivalent meals from large-chain restaurants (+5.1%; P=0.41). There was a significant effect of cuisine on non–chain meal energy, and three of the four most popular cuisines (American, Italian, and Chinese) had the highest mean energy (1,495 kcal/meal). Ninety-two percent of meals exceeded typical energy requirements for a single eating occasion. Conclusions Non–chain restaurants lacking nutrition information serve amounts of energy that are typically far in excess of human energy requirements for single eating occasions, and are equivalent to amounts served by the large-chain restaurants that have previously been criticized for providing excess energy. Restaurants in general, rather than specific categories of restaurant, expose patrons to excessive portions that induce overeating through established biological mechanisms. PMID:26803805
Urban, Lorien E; Weber, Judith L; Heyman, Melvin B; Schichtl, Rachel L; Verstraete, Sofia; Lowery, Nina S; Das, Sai Krupa; Schleicher, Molly M; Rogers, Gail; Economos, Christina; Masters, William A; Roberts, Susan B
2016-04-01
Excess energy intake from meals consumed away from home is implicated as a major contributor to obesity, and ∼50% of US restaurants are individual or small-chain (non-chain) establishments that do not provide nutrition information. To measure the energy content of frequently ordered meals in non-chain restaurants in three US locations, and compare with the energy content of meals from large-chain restaurants, energy requirements, and food database information. A multisite random-sampling protocol was used to measure the energy contents of the most frequently ordered meals from the most popular cuisines in non-chain restaurants, together with equivalent meals from large-chain restaurants. Meals were obtained from restaurants in San Francisco, CA; Boston, MA; and Little Rock, AR, between 2011 and 2014. Meal energy content determined by bomb calorimetry. Regional and cuisine differences were assessed using a mixed model with restaurant nested within region×cuisine as the random factor. Paired t tests were used to evaluate differences between non-chain and chain meals, human energy requirements, and food database values. Meals from non-chain restaurants contained 1,205±465 kcal/meal, amounts that were not significantly different from equivalent meals from large-chain restaurants (+5.1%; P=0.41). There was a significant effect of cuisine on non-chain meal energy, and three of the four most popular cuisines (American, Italian, and Chinese) had the highest mean energy (1,495 kcal/meal). Ninety-two percent of meals exceeded typical energy requirements for a single eating occasion. Non-chain restaurants lacking nutrition information serve amounts of energy that are typically far in excess of human energy requirements for single eating occasions, and are equivalent to amounts served by the large-chain restaurants that have previously been criticized for providing excess energy. Restaurants in general, rather than specific categories of restaurant, expose patrons to excessive portions that induce overeating through established biological mechanisms. Copyright © 2016 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
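A hedged sketch of the mixed model described in the two preceding records, using synthetic data; the authors' software is not stated, and the statsmodels formulation below (random intercept for restaurant, fixed region-by-cuisine effects) is an assumption:

```python
# Mixed model: kcal ~ region * cuisine, random intercept per restaurant.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 360
df = pd.DataFrame({
    "region": rng.choice(["SF", "Boston", "LittleRock"], n),
    "cuisine": rng.choice(["American", "Italian", "Chinese", "Mexican"], n),
})
# Restaurants nested within region x cuisine cells (synthetic labels).
df["restaurant"] = (df.region + "_" + df.cuisine + "_"
                    + pd.Series(rng.integers(0, 5, n)).astype(str))
df["kcal"] = 1200 + rng.normal(0, 465, n)   # synthetic meal energies

m = smf.mixedlm("kcal ~ region * cuisine", df, groups=df["restaurant"]).fit()
print(m.summary())
```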
10 CFR 26.168 - Blind performance testing.
Code of Federal Regulations, 2014 CFR
2014-01-01
... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...
10 CFR 26.168 - Blind performance testing.
Code of Federal Regulations, 2010 CFR
2010-01-01
... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...
10 CFR 26.168 - Blind performance testing.
Code of Federal Regulations, 2011 CFR
2011-01-01
... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...
10 CFR 26.168 - Blind performance testing.
Code of Federal Regulations, 2013 CFR
2013-01-01
... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...
10 CFR 26.168 - Blind performance testing.
Code of Federal Regulations, 2012 CFR
2012-01-01
... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...
NASA Astrophysics Data System (ADS)
Thuy, Tran Thi; Thorsén, Gunnar
2013-07-01
The serum clearance rate of therapeutic antibodies is important as it affects the clinical efficacy, required dose, and dose frequency. The glycosylation of antibodies has in some studies been shown to have an impact on the elimination rates in vivo. Monitoring changes to the glycan profiles in pharmacokinetic studies can reveal whether the clearance rates of therapeutic antibodies depend on the different glycoforms, thereby providing useful information for improvement of the drugs. In this paper, a novel method for glycosylation analysis of therapeutic antibodies in serum samples is presented. A microfluidic compact-disc (CD) platform in combination with MALDI-MS was used to monitor changes to the glycosylation profiles of samples incubated in vitro. Antibodies were selectively purified from serum using immunoaffinity capture on immobilized target antigens. The glycans were enzymatically released, purified, and finally analyzed by MALDI-TOF-MS. To simulate changes to glycan profiles after administration in vivo, a therapeutic antibody was incubated in serum with the enzyme α1-2,3 mannosidase to artificially reduce the amount of the high mannose glycoforms. Glycan profiles were monitored at specific intervals during the incubation. The relative abundance of the high mannose 5 glycoform was clearly found to decrease and, simultaneously, that of high mannose 4 to increase over the incubation period. The method can be performed in a rapid, parallel, and automated fashion for glycosylation profiling, consuming low amounts of samples and reagents. This can reduce the labor and cost of studies of therapeutic antibody glycosylation in vitro and in vivo.
NASA Astrophysics Data System (ADS)
Kocken, I.; Ziegler, M.
2017-12-01
Clumped isotope measurements on carbonates are a quickly developing and promising palaeothermometry proxy [1-3]. Developments in the field have brought down the necessary sample amount and improved the precision and accuracy of the measurements. These developments have included inter-laboratory comparison and the introduction of an absolute reference frame [4], determination of acid fractionation effects [5], correction for the pressure baseline [6], improved temperature calibrations [2], and, most recently, new approaches to improve efficiency in terms of sample gas usage [7]. However, large-scale application of clumped isotope thermometry is still hampered by the large sample amounts required and by the time-consuming analysis. In general, a lot of time goes into the measurement of standards. Here we present a study on the optimal ratio between standard and sample measurements using the Kiel Carbonate Device method. We also consider the optimal initial signal intensity. We analyse ETH-standard measurements from several months to determine the measurement regime with the highest precision and optimised measurement time management. References: 1. Eiler, J. M. Earth Planet. Sci. Lett. 262, 309-327 (2007). 2. Kelson, J. R., et al. Geochim. Cosmochim. Acta 197, 104-131 (2017). 3. Kele, S. et al. Geochim. Cosmochim. Acta 168, 172-192 (2015). 4. Dennis, K. J. et al. Geochim. Cosmochim. Acta 75, 7117-7131 (2011). 5. Müller, I. A. et al. Chem. Geol. 449, 1-14 (2017). 6. Meckler, A. N. et al. Rapid Commun. Mass Spectrom. 28, 1705-1715 (2014). 7. Hu, B. et al. Rapid Commun. Mass Spectrom. 28, 1413-1425 (2014).
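A back-of-envelope sketch of the standard/sample trade-off examined above. The study itself analyses months of measured ETH-standard data; the simple error model below (per-replicate noise sigma, with the error of a corrected sample mean scaling as the root sum of the two reciprocal replicate counts) is an illustrative assumption:

```python
# Scan how splitting a fixed measurement budget between standards and
# samples changes the per-sample standard error (toy model, not the
# paper's analysis).
import numpy as np

N, k, sigma = 100, 10, 0.030  # total slots, samples, per-replicate SD (permil)
for n_std in range(10, 60, 10):
    n_per_sample = (N - n_std) / k
    se = sigma * np.sqrt(1.0 / n_per_sample + 1.0 / n_std)
    print(f"{n_std} standards: SE per sample ~ {se:.4f} permil")
```

Because the standard pool is shared by all samples, the optimum lies well below a 50/50 split; too few standards, however, lets the correction term dominate the error.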
Dynamic Method for Identifying Collected Sample Mass
NASA Technical Reports Server (NTRS)
Carson, John
2008-01-01
G-Sample is designed for sample collection missions to identify the presence and quantity of sample material gathered by spacecraft equipped with end effectors. The software method uses a maximum-likelihood estimator to identify the collected sample's mass based on onboard force-sensor measurements, thruster firings, and a dynamics model of the spacecraft. This makes sample mass identification a computation rather than a process requiring additional hardware. Simulation examples of G-Sample are provided for spacecraft model configurations with a sample collection device mounted on the end of an extended boom. In the absence of thrust knowledge errors, the results indicate that G-Sample can identify the amount of collected sample mass to within 10 grams (with 95-percent confidence) by using a force sensor with a noise and quantization floor of 50 micronewtons. These results hold even in the presence of realistic parametric uncertainty in actual spacecraft inertia, center-of-mass offset, and first flexibility modes. Thrust profile knowledge is shown to be a dominant sensitivity for G-Sample, entering in a nearly one-to-one relationship with the final mass estimation error. This means thrust profiles should be well characterized with onboard accelerometers prior to sample collection. An overall sample-mass estimation error budget has been developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
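A hedged one-degree-of-freedom toy of the core idea: with a known thrust profile and measured acceleration, the maximum-likelihood total mass under Gaussian noise is the least-squares slope of force against acceleration, and the collected sample mass is that estimate minus the known spacecraft mass. G-Sample itself uses a full spacecraft dynamics model; all numbers below are invented:

```python
# 1-DOF toy of mass estimation from thrust and acceleration data.
import numpy as np

rng = np.random.default_rng(3)
m_sc, m_sample = 500.0, 0.050        # spacecraft and sample mass (kg)
F = rng.uniform(5.0, 20.0, 200)      # commanded thrust samples (N)
a = F / (m_sc + m_sample) + rng.normal(0, 1e-4, F.size)  # accel + noise

# Least-squares slope of F against a = ML estimate of total mass
# under i.i.d. Gaussian measurement noise.
m_hat = np.sum(F * a) / np.sum(a * a)
print(f"estimated sample mass: {(m_hat - m_sc) * 1e3:.1f} g")
```

The toy also makes the abstract's sensitivity claim visible: any systematic error in the assumed thrust F maps almost one-to-one into the mass estimate.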
Uniform Laser Excitation And Detection In Capillary Array Electrophoresis System And Method.
Li, Qingbo; Zhou, Songsan; Liu, Changsheng
2003-10-07
A capillary electrophoresis system comprises capillaries positioned in parallel to each other, forming a plane. The capillaries are configured to allow samples to migrate. A light source is configured to illuminate the capillaries and the samples therein. This causes the samples to emit light. A lens is configured to receive the light emitted by the samples and is positioned directly over a first group of the capillaries and obliquely over a second group of the capillaries. The light source is further configured to illuminate the second group of capillaries more than the first group, such that the amount of light received by the lens from the first group of capillaries is substantially identical to the amount of light received from the second group when an identical amount of the samples is migrating through the first and second groups of capillaries.
Benthic Foraminifera Clumped Isotope Calibration
NASA Astrophysics Data System (ADS)
Piasecki, A.; Marchitto, T. M., Jr.; Bernasconi, S. M.; Grauel, A. L.; Tisserand, A. A.; Meckler, N.
2017-12-01
Due to their widespread spatial and temporal distribution within ocean sediments, benthic foraminifera are commonly used for reconstructing past ocean temperatures and environmental conditions. Many foraminifera-based proxies, however, require calibration schemes that are species specific, which becomes complicated in deep time due to extinct species. Furthermore, calibrations often depend on seawater chemistry being stable and/or constrained, which is not always the case over significant climate state changes like the Eocene-Oligocene Transition. Here we study the effect of varying benthic foraminifera species on the clumped isotope proxy for temperature. The benefit of this proxy is that it is independent of seawater chemistry, whereas the downside is that it requires relatively large sample amounts. Due to recent advancements in sample processing that reduce the required sample weight by a factor of 10, clumped isotopes can now be applied to a range of paleoceanographic questions. First, however, we need to show that, unlike for other proxies, there are no interspecies differences with clumped isotopes, as is predicted by first-principles modeling. We used a range of surface sediment samples covering a temperature range of 1-20°C from the Pacific, Mediterranean, Bahamas, and the Atlantic, and measured the clumped isotope composition of 11 different species of benthic foraminifera. We find that there are indeed no discernible species-specific differences within the sample set. In addition, the samples have the same temperature response to the proxy as inorganic carbonate samples over the same temperature range. As a result, we can now apply this proxy to a wide range of samples and foraminifera species from different ocean basins with different ocean chemistry and be confident that observed signals reflect variations in temperature.
Impact of ICT on Performance of Construction Companies in Slovakia
NASA Astrophysics Data System (ADS)
Mesároš, Peter; Mandičák, Tomáš
2017-10-01
Information and communication technologies have become part of the management toolset of modern companies. The construction industry and its participants face a serious requirement to process huge amounts of information on construction projects, including design, construction, time and cost parameters, economic efficiency and sustainability. To fulfil this requirement, companies have to use appropriate ICT tools. The aim of this paper is to examine the impact of ICT exploitation on the performance of construction companies. The impact of BIM tools, ERP systems and controlling systems on cost and profit indicators will be measured on a sample of 85 companies from the construction industry in Slovakia. Enterprise size, enterprise ownership and role in the construction process will be set as independent variables for the statistical analysis. The results will be considered for different groups of companies.
Ondigo, Bartholomew N; Park, Gregory S; Gose, Severin O; Ho, Benjamin M; Ochola, Lyticia A; Ayodo, George O; Ofulla, Ayub V; John, Chandy C
2012-12-21
Multiplex cytometric bead assays (CBA) have a number of advantages over ELISA for antibody testing, but little information is available on the standardization and validation of antibody CBA against multiple Plasmodium falciparum antigens. The present study set out to determine optimal parameters for multiplex testing of antibodies to P. falciparum antigens, and to compare the results of multiplex CBA to ELISA. Antibodies to ten recombinant P. falciparum antigens were measured by CBA and ELISA in samples from 30 individuals from a malaria endemic area of Kenya and compared to known positive and negative control plasma samples. Optimal antigen amounts, monoplex vs. multiplex testing, plasma dilution, optimal buffer, and the number of beads required were assessed for CBA testing, and the results of CBA and ELISA testing were compared. Optimal amounts for CBA antibody testing differed according to antigen. Results for monoplex CBA testing correlated strongly with multiplex testing for all antigens (r = 0.88-0.99, P values from <0.0001 to 0.004), and antibodies to variants of the same antigen were accurately distinguished within a multiplex reaction. Plasma dilutions of 1:100 or 1:200 were optimal for all antigens for CBA testing. Plasma diluted in a buffer containing 0.05% sodium azide, 0.5% polyvinylalcohol, and 0.8% polyvinylpyrrolidone had the lowest background activity. CBA median fluorescence intensity (MFI) values with 1,000 antigen-conjugated beads/well did not differ significantly from MFI with 5,000 beads/well. CBA and ELISA results correlated well for all antigens except apical membrane antigen-1 (AMA-1). CBA testing produced a greater range of values in samples from malaria endemic areas and less background reactivity for blank samples than ELISA. With optimization, CBA may be the preferred method of testing for antibodies to P. falciparum antigens, as CBA can test for antibodies to multiple recombinant antigens in a single plasma sample and produces a greater range of values in positive samples and lower background readings for blank samples than ELISA.
Properties of Silurian shales from the Barrandian Basin, Czech Republic
NASA Astrophysics Data System (ADS)
Weishauptová, Zuzana; Přibyl, Oldřich; Sýkorová, Ivana
2017-04-01
Although shale gas-bearing deposits have a markedly lower gas content than coal deposits, great attention has recently been paid to shale gas as a new potential source of fossil energy. Shale gas extraction is considered to be quite economical, despite the lower sorption capacity of shales, which is only about 10% of coal sorption capacities. The selection of a suitable locality for extracting shale gas requires the sorption capacity of the shale to be determined. The sorption capacity is determined in the laboratory by measuring the amount of methane sorbed in a shale specimen at a pressure and a temperature corresponding to in situ conditions, using high pressure sorption. According to the principle of reversibility of adsorption/desorption, this amount should be roughly related to the amount of gas released by forced degassing. High pressure methane sorption isotherms were measured on seven representative samples of Silurian shales from the Barrandian Basin, Czech Republic. Excess sorption measurements were performed at a temperature of 45°C and at pressures up to 15 MPa on dry samples, using a manometric method. The experimental high-pressure methane isotherms were fitted to a modified Langmuir equation. The maximum measured excess sorption parameter and the Langmuir sorption capacity parameter were used to study the effect of TOC content, organic maturity, inorganic components and porosity on the methane sorption capacity. The studied shale samples, with graptolite random reflectance of 0.56 to 1.76%, had a very low TOC content and dominant mineral fractions. Illite was the prevailing clay mineral. The sample porosity ranged from 4.6 to 18.8%. In most samples, the micropore volumes were markedly lower than the meso- and macropore volumes. In the Silurian black shales, the occurrence of fractures parallel with the original sedimentary bedding was highly significant. A large proportion of the fragments of carbonaceous particles of graptolites and bitumens in the Barrandian Silurian shales had a smooth surface without pores. No relation has been proven between the TOC-normalized excess sorption capacities or the TOC-normalized Langmuir sorption capacities and the thermal maturation of the shales. The methane sorption capacities of the shale samples show a positive correlation with TOC and with clay content. The highest sorption capacity was observed in shale samples with the highest percentage of micropores, indicating that the micropore volume in the organic matter and clay minerals is a principal factor affecting the sorption capacity of the shale samples.
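A sketch of the isotherm-fitting step, assuming one common "modified Langmuir" parameterization for excess sorption in which a term linear in pressure absorbs the excess-versus-absolute correction; the paper's exact equation is not reproduced, and the data are synthetic:

```python
# Fit synthetic excess-sorption data to a modified Langmuir form:
#   n_ex(p) = n_L * p / (p + p_L) - c * p
import numpy as np
from scipy.optimize import curve_fit

def modified_langmuir(p, n_L, p_L, c):
    return n_L * p / (p + p_L) - c * p

p = np.linspace(0.5, 15.0, 15)                     # pressure (MPa)
n_obs = modified_langmuir(p, 0.08, 2.5, 0.001)     # mmol/g, synthetic
n_obs = n_obs + np.random.default_rng(4).normal(0, 0.002, p.size)

(n_L, p_L, c), _ = curve_fit(modified_langmuir, p, n_obs, p0=(0.1, 3.0, 0.001))
print(f"Langmuir capacity n_L = {n_L:.3f} mmol/g, p_L = {p_L:.2f} MPa")
```

The fitted Langmuir capacity n_L plays the role of the "Langmuir sorption capacity parameter" compared against TOC, maturity and porosity in the study.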
10 CFR 140.13b - Amount of liability insurance required for uranium enrichment facilities.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 2 2011-01-01 2011-01-01 false Amount of liability insurance required for uranium... required for uranium enrichment facilities. Each holder of a license issued under Parts 40 or 70 of this chapter for a uranium enrichment facility that involves the use of source material or special nuclear...
10 CFR 140.13b - Amount of liability insurance required for uranium enrichment facilities.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 2 2012-01-01 2012-01-01 false Amount of liability insurance required for uranium... required for uranium enrichment facilities. Each holder of a license issued under Parts 40 or 70 of this chapter for a uranium enrichment facility that involves the use of source material or special nuclear...
10 CFR 140.13b - Amount of liability insurance required for uranium enrichment facilities.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 2 2013-01-01 2013-01-01 false Amount of liability insurance required for uranium... required for uranium enrichment facilities. Each holder of a license issued under Parts 40 or 70 of this chapter for a uranium enrichment facility that involves the use of source material or special nuclear...
10 CFR 140.13b - Amount of liability insurance required for uranium enrichment facilities.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 2 2014-01-01 2014-01-01 false Amount of liability insurance required for uranium... required for uranium enrichment facilities. Each holder of a license issued under Parts 40 or 70 of this chapter for a uranium enrichment facility that involves the use of source material or special nuclear...
Villanelli, Fabio; Giocaliere, Elisa; Malvagia, Sabrina; Rosati, Anna; Forni, Giulia; Funghini, Silvia; Shokry, Engy; Ombrone, Daniela; Della Bona, Maria Luisa; Guerrini, Renzo; la Marca, Giancarlo
2015-02-02
Phenytoin (PHT) is one of the most commonly used anticonvulsant drugs for the treatment of epilepsy and bipolar disorders. The large amount of plasma required by conventional methods for drug quantification makes mass spectrometry combined with dried blood spot (DBS) sampling crucial for pediatric patients, where therapeutic drug monitoring or pharmacokinetic studies may be difficult to carry out. DBS represents a new, convenient sampling support requiring minimally invasive blood drawing and providing long-term stability of samples and less expensive shipment and storage. The aim of this study was to develop a LC-MS/MS method for the quantification of PHT on DBS. The analytical method was validated and gave good linearity (r² = 0.999) in the range of 0-100 mg/l. The LOQ and LOD were 1.0 mg/l and 0.3 mg/l, respectively. The drug extraction from paper was performed in a few minutes using a mixture containing 80% organic solvent. The recovery ranged from 85 to 90%; PHT in DBS was shown to be stable at different storage temperatures for one month. A good correlation was also obtained between PHT plasma and DBS concentrations. This method is both precise and accurate and appears to be particularly suitable for monitoring treatment with a simple and convenient sample collection procedure. Copyright © 2014 Elsevier B.V. All rights reserved.
Assessment of two thermally treated drill mud wastes for landfill containment applications.
Carignan, Marie-Pierre; Lake, Craig B; Menzies, Todd
2007-10-01
Offshore oil and gas drilling operations generate significant amounts of drill mud waste, some of which is transported onshore for subsequent thermal treatment (i.e. thermal remediation). This treatment process results in a mineral waste by-product (referred to as thermally treated drill mud waste; TTDMW). Bentonites are originally present in many drill mud products and it is hypothesized that TTDMW can be utilized in landfill containment applications (i.e. cover or base liner). The objective of this paper is to examine the feasibility of this application by performing various physical and chemical tests on two TTDMW samples. It is shown that the two TTDMW samples contained relatively small amounts of clay-sized minerals, although hydraulic conductivity values were found to be less than 10⁻⁸ m/s. Organic carbon contents of the samples were approximately 2%. Mineralogy characterization of the samples confirmed varying amounts of smectite; however, the peak friction angle for a TTDMW sample was greater than 36 degrees. Chemical characterization of the TTDMW samples shows potential leaching of barium and small amounts of other heavy metals. Discussion is provided in the paper of suggestions to assist in overcoming regulatory issues associated with the utilization of TTDMW in landfill containment applications.
Della Pelle, Flavio; González, María Cristina; Sergi, Manuel; Del Carlo, Michele; Compagnone, Dario; Escarpa, Alberto
2015-07-07
In this work, a rapid and simple gold nanoparticle (AuNP)-based colorimetric assay is combined with a new type of AuNP synthesis in organic medium that requires no sample extraction. The extraction-free AuNP synthesis approach strategically involves the use of dimethyl sulfoxide (DMSO) acting as an organic solvent for simultaneous sample analyte solubilization and AuNP stabilization. Moreover, DMSO works as a cryogenic protector, avoiding solidification at the temperatures used to stop the synthesis. In addition, the chemical function of the sample's endogenous fatty acids as AuNP stabilizers is also exploited, avoiding the use of common surfactant AuNP stabilizers, which, in an organic/aqueous medium, give rise to the formation of undesirable emulsions. This is controlled by adding an analyte-free fat sample (sample blank). The method was exhaustively applied to the determination of total polyphenols in two selected kinds of fat-rich liquid and solid samples with high antioxidant activity and economic impact: olive oil (n = 28) and chocolate (n = 16) samples. Fatty sample absorbance is easily followed via the localized surface plasmon resonance (LSPR) absorption band at 540 nm, and quantitation is referred to gallic acid equivalents. A rigorous evaluation of the method was performed by comparison with the well and traditionally established Folin-Ciocalteu (FC) method, obtaining an excellent correlation for olive oil samples (R = 0.990, n = 28) and for chocolate samples (R = 0.905, n = 16). Additionally, the proposed approach was found to be selective (vs. other endogenous sample tocopherols and pigments), fast (15-20 min), cheap and simple (it does not require expensive/complex equipment), with a very limited amount of sample (30 μL) needed and significantly lower solvent consumption (250 μL in a 500 μL total reaction volume) compared to classical methods.
Ehama, Makoto; Hashihama, Fuminori; Kinouchi, Shinko; Kanda, Jota; Saito, Hiroaki
2016-06-01
Determining the total particulate phosphorus (TPP) and particulate inorganic phosphorus (PIP) in oligotrophic oceanic water generally requires the filtration of a large amount of water sample. This paper describes methods that require small filtration volumes for determining the TPP and PIP concentrations. The methods were devised by validating or improving conventional sample processing and by applying highly sensitive liquid waveguide spectrophotometry to the measurement of oxidized or acid-extracted phosphate from TPP and PIP, respectively. The oxidation of TPP was performed by a chemical wet oxidation method using 3% potassium persulfate. The acid extraction of PIP was initially carried out based on the conventional extraction methodology, which requires 1 M HCl, followed by a procedure for decreasing acidity. While the conventional procedure for acid removal requires a ten-fold dilution of the 1 M HCl extract with purified water, the improved procedure proposed in this study uses 8 M NaOH solution to neutralize the 1 M HCl extract in order to reduce the dilution effect. An experiment comparing the absorbances of the phosphate standard dissolved in 0.1 M HCl and of that dissolved in a neutralized solution [1 M HCl : 8 M NaOH = 8:1 (v:v)] exhibited a higher absorbance in the neutralized solution. This indicated that the improved procedure completely removed the acid effect, which reduces the sensitivity of the phosphate measurement. Application to an ultraoligotrophic water sample showed that the TPP concentration in a 1075 mL filtered sample was 8.4 nM with a coefficient of variation (CV) of 4.3%, and the PIP concentration in a 2300 mL filtered sample was 1.3 nM with a CV of 6.1%. Based on the detection limit (3 nM) of the sensitive phosphate measurement and the ambient TPP and PIP concentrations of the ultraoligotrophic water, the minimum filtration volumes required for the detection of TPP and PIP were estimated to be 15 and 52 mL, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
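The arithmetic behind the 8:1 (v:v) neutralization step, written out (standard molarity bookkeeping, not taken verbatim from the paper):

```latex
% Mixing 8 volumes of 1 M HCl extract with 1 volume of 8 M NaOH:
n_{\mathrm{HCl}} = 8V \times 1\,\mathrm{M} = 8V \ \text{mol}, \qquad
n_{\mathrm{NaOH}} = 1V \times 8\,\mathrm{M} = 8V \ \text{mol}
% i.e. the acid is exactly neutralized, with a dilution factor of only
\frac{8V + 1V}{8V} = 1.125
% compared with the 10-fold dilution of the conventional water-dilution
% procedure, preserving most of the phosphate signal.
```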
An X-ray transparent microfluidic platform for screening of the phase behavior of lipidic mesophases
Khvostichenko, Daria S.; Kondrashkina, Elena; Perry, Sarah L.; Pawate, Ashtamurthy S.; Brister, Keith
2013-01-01
Lipidic mesophases are a class of highly ordered soft materials that form when certain lipids are mixed with water. Understanding the relationship between the composition and the microstructure of mesophases is necessary for fundamental studies of self-assembly in amphiphilic systems and for applications, such as crystallization of membrane proteins. However, the laborious formulation protocol for highly viscous mesophases and the large amounts of material required for sample formulation are significant obstacles in such studies. Here we report a microfluidic platform that facilitates investigations of the phase behavior of mesophases by reducing sample consumption, and automating and parallelizing sample formulation. The mesophases were formulated on-chip using less than 40 nL of material per sample and their microstructure was analyzed in situ using small-angle X-ray scattering (SAXS). The 220 μm-thick X-ray compatible platform comprised thin polydimethylsiloxane (PDMS) layers sandwiched between cyclic olefin copolymer (COC) sheets. Uniform mesophases were prepared using an active on-chip mixing strategy coupled with periodic cooling of the sample to reduce the viscosity. We validated the platform by preparing and analyzing mesophases of lipid monoolein (MO) mixed with aqueous solutions of different concentrations of β-octylglucoside (βOG), a detergent frequently used in membrane protein crystallization. Four samples were prepared in parallel on chip, by first metering and automatically diluting βOG to obtain detergent solutions of different concentration, then metering MO, and finally mixing by actuation of pneumatic valves. Integration of detergent dilution and subsequent mixing significantly reduced the number of manual steps needed for sample preparation. Three different types of mesophases typical for monoolein were successfully identified in SAXS data from on-chip samples. Microstructural parameters of identical samples formulated in different chips showed excellent agreement. Phase behavior observed on-chip corresponded well with that of samples prepared via the traditional coupled-syringe method ("off-chip") using a 300-fold larger amount of material, further validating the utility of the microfluidic platform for on-chip characterization of mesophase behavior. PMID:23882463
NASA Astrophysics Data System (ADS)
Parry, James Roswell
Fission track analysis (FTA) has many uses in the scientific community, including but not limited to geological dating, neutron flux mapping, and dose reconstruction. The common method of inducing fission for FTA is through neutrons from a nuclear reactor. This dissertation investigates the use of bremsstrahlung radiation produced by an electron linear accelerator to induce fission in FTA samples. This provides a means of simultaneously measuring the amounts of Pu-239, U-nat, and Th-232 in a single sample. The benefit of measuring the three isotopes simultaneously is the possible elimination of costly and time-consuming chemical processing for dose reconstruction samples. Samples containing the three isotopes were irradiated in two different bremsstrahlung spectra and a neutron spectrum to determine the amounts of Pu-239, U-nat, and Th-232 in the samples. The reaction rates from the calibration samples and the counted fission tracks on the samples were used to determine the concentration of each isotope. The results were accurate to within a factor of two or three, showing that the method can work to predict the concentrations of multiple isotopes in a sample. The limitations of current accelerators and detectors restrict the application of this specific procedure to higher concentrations of isotopes. The method detection limits for Pu-239, U-nat, and Th-232 are 20 pCi, 1 fCi, and 0.4 fCi, respectively. Analysis of extremely low concentrations of isotopes would require the use of different detectors such as quartz, due to the embrittlement encountered in the Lexan at high exposures. Cracking of the Lexan detectors started to appear at a fluence of about 2 × 10^18 electrons from the accelerator. This may be partly due to the beam stop not being of adequate thickness. The procedure is likely limited to specialty applications for the near term. However, with worldwide concerns about exposure to depleted uranium, this procedure may find applications in this area, since it would be simple to adapt it to depleted uranium detection.
13 CFR 120.847 - Requirements for the Loan Loss Reserve Fund (LLRF).
Code of Federal Regulations, 2011 CFR
2011-01-01
... by all parties in a timely fashion, and that all required deposits are made. (b) PCLP CDC Exposure... establish and maintain an LLRF equal to one percent of the original principal amount (the face amount) of...
13 CFR 120.847 - Requirements for the Loan Loss Reserve Fund (LLRF).
Code of Federal Regulations, 2010 CFR
2010-01-01
... by all parties in a timely fashion, and that all required deposits are made. (b) PCLP CDC Exposure... establish and maintain an LLRF equal to one percent of the original principal amount (the face amount) of...
Cover crops in vegetable production systems
USDA-ARS?s Scientific Manuscript database
Current vegetable production systems require an intensive amount of work and inputs, and if not properly managed could have detrimental effects on soil and the environment. Practices such as intensive tillage, increased herbicide use, ...
33 CFR 135.203 - Amount required.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Amount required. 135.203 Section 135.203 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OFFSHORE OIL POLLUTION COMPENSATION FUND...
33 CFR 135.203 - Amount required.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Amount required. 135.203 Section 135.203 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OFFSHORE OIL POLLUTION COMPENSATION FUND...
33 CFR 135.203 - Amount required.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Amount required. 135.203 Section 135.203 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OFFSHORE OIL POLLUTION COMPENSATION FUND...
33 CFR 135.203 - Amount required.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Amount required. 135.203 Section 135.203 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OFFSHORE OIL POLLUTION COMPENSATION FUND...
33 CFR 135.203 - Amount required.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Amount required. 135.203 Section 135.203 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OFFSHORE OIL POLLUTION COMPENSATION FUND...
Reiser, Vladimír; Smith, Ryan C; Xue, Jiyan; Kurtz, Marc M; Liu, Rong; Legrand, Cheryl; He, Xuanmin; Yu, Xiang; Wong, Peggy; Hinchcliffe, John S; Tanen, Michael R; Lazar, Gloria; Zieba, Renata; Ichetovkin, Marina; Chen, Zhu; O'Neill, Edward A; Tanaka, Wesley K; Marton, Matthew J; Liao, Jason; Morris, Mark; Hailman, Eric; Tokiwa, George Y; Plump, Andrew S
2011-11-01
With expanding biomarker discovery efforts and increasing costs of drug development, it is critical to maximize the value of mass-limited clinical samples. The main limitation of available methods is the inability to isolate and analyze, from a single sample, molecules requiring incompatible extraction methods. Thus, we developed a novel semiautomated method for tissue processing and tissue milling and division (TMAD). We used a SilverHawk atherectomy catheter to collect atherosclerotic plaques from patients requiring peripheral atherectomy. Tissue preservation by flash freezing was compared with immersion in RNAlater®, and tissue grinding by traditional mortar and pestle was compared with TMAD. Comparators were protein, RNA, and lipid yield and quality. Reproducibility of analyte yield from aliquots of the same tissue sample processed by TMAD was also measured. The quantity and quality of biomarkers extracted from tissue prepared by TMAD were at least as good as those extracted from tissue stored and prepared by traditional means. TMAD enabled parallel analysis of gene expression (quantitative reverse-transcription PCR, microarray), protein composition (ELISA), and lipid content (biochemical assay) from as little as 20 mg of tissue. The mean correlation was r = 0.97 in molecular composition (RNA, protein, or lipid) between aliquots of individual samples generated by TMAD. We also demonstrated that it is feasible to use TMAD in a large-scale clinical study setting. The TMAD methodology described here enables semiautomated, high-throughput sampling of small amounts of heterogeneous tissue specimens by multiple analytical techniques, with generally improved quality of recovered biomolecules.
Yang, Qi; Franco, Christopher M M; Zhang, Wei
2015-10-01
Experiments were designed to validate the two common DNA extraction protocols (CTAB-based method and DNeasy Blood & Tissue Kit) used to effectively recover actinobacterial DNA from sponge samples in order to study the sponge-associated actinobacterial diversity. This was done by artificially spiking sponge samples with actinobacteria (spores, mycelia and a combination of the two). Our results demonstrated that both DNA extraction methods were effective in obtaining DNA from the sponge samples as well as the sponge samples spiked with different amounts of actinobacteria. However, it was noted that in the presence of the sponge, the bacterial 16S rRNA gene could not be amplified unless the combined DNA template was diluted. To test the hypothesis that the extracted sponge DNA contained inhibitors, dilutions of the DNA extracts were tested for six sponge species representing five orders. The results suggested that the inhibitors were co-extracted with the sponge DNA, and a high dilution of this DNA was required for the successful PCR amplification for most of the samples. The optimized PCR conditions, including primer selection, PCR reaction system and program optimization, further improved the PCR performance. However, no single PCR condition was found to be suitable for the diverse sponge samples using various primer sets. These results highlight for the first time that the DNA extraction methods used are effective in obtaining actinobacterial DNA and that the presence of inhibitors in the sponge DNA requires high dilution coupled with fine tuning of the PCR conditions to achieve success in the study of sponge-associated actinobacterial diversity.
NASA Astrophysics Data System (ADS)
Hritz, Andrew D.; Raymond, Timothy M.; Dutcher, Dabrina D.
2016-08-01
Accurate estimates of particle surface tension are required for models concerning atmospheric aerosol nucleation and activation. However, it is difficult to collect the volumes of atmospheric aerosol required by typical instruments that measure surface tension, such as goniometers or Wilhelmy plates. In this work, a method that measures, ex situ, the surface tension of collected liquid nanoparticles using atomic force microscopy is presented. A film of particles is collected via impaction and is probed using nanoneedle tips with the atomic force microscope. This micro-Wilhelmy method allows for direct measurements of the surface tension of small amounts of sample. This method was verified using liquids whose surface tensions were known. Particles of ozone-oxidized α-pinene, a well-characterized system, were then produced, collected, and analyzed using this method to demonstrate its applicability for liquid aerosol samples. It was determined that oxidized α-pinene particles formed in dry conditions have a surface tension similar to that of pure α-pinene, and oxidized α-pinene particles formed in more humid conditions have a surface tension that is significantly higher.
Costs of measuring leaf area index of corn
NASA Technical Reports Server (NTRS)
Daughtry, C. S. T.; Hollinger, S. E.
1984-01-01
The magnitude of plant-to-plant variability of leaf area of corn plants selected from uniform plots was examined and four representative methods for measuring leaf area index (LAI) were evaluated. The number of plants required and the relative costs for each sampling method were calculated to detect 10, 20, and 50% differences in LAI using 0.05 and 0.01 tests of significance and a 90% probability of success (beta = 0.1). The natural variability of leaf area per corn plant was nearly 10%. Additional variability or experimental error may be introduced by the measurement technique employed and by nonuniformity within the plot. Direct measurement of leaf area with an electronic area meter had the lowest CV, required that the fewest plants be sampled, but required approximately the same amount of time as the leaf area/weight ratio method to detect comparable differences. Indirect methods based on measurements of length and width of leaves required more plants but less total time than the direct method. Unless the coefficients for converting length and width to area are verified frequently, the indirect methods may be biased. When true differences in LAI among treatments exceed 50% of mean, all four methods are equal. The method of choice depends on the resources available, the differences to be detected, and what additional information, such as leaf weight or stalk weight, is also desired.
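The plant counts such a study reports follow from a standard two-sample power calculation. A hedged sketch under a normal approximation, treating the ~10% plant-to-plant variability as the coefficient of variation, with the abstract's alpha = 0.05 and 90% power (this is our reconstruction, not the authors' exact procedure):

```python
# Sample size per treatment to detect a relative difference d in mean LAI,
# given a coefficient of variation cv, via the normal-approximation formula.
from scipy.stats import norm

def n_per_treatment(cv, d, alpha=0.05, power=0.90):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * cv / d) ** 2

# ~10% plant-to-plant variability, detecting a 10% difference in LAI:
print(round(n_per_treatment(cv=0.10, d=0.10)))  # ~21 plants per treatment
```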
Stankiewicz, B.A.; Kruge, M.A.; Crelling, J.C.; Salmon, G.L.
1994-01-01
Samples of organic matter from nine well-known geological units (Green River Fm., Tasmanian Tasmanite, Lower Toarcian Sh. of the Paris Basin, Duwi Fm., New Albany Sh., Monterey Fm., Herrin No. 6 coal, Eocene coal, and Miocene lignite from Kalimantan) were processed by density gradient centrifugation (DGC) to isolate the constituent macerals. Optimal separation, as well as the liberation of microcrystalline pyrite from the organic matter, was obtained by particle size minimization prior to DGC by treatment with liquid N2 and micronization in a fluid energy mill. The resulting small particle size limits the use of optical microscopy, thus microfluorimetry and analytical pyrolysis were also employed to assess the quality and purity of the fractions. Each of the samples exhibits one dominant DGC peak (corresponding to alginite in the Green River Fm., amorphinite in the Lower Toarcian Sh., vitrinite in the Herrin No. 6, etc.) which shifts from 1.05 g mL⁻¹ for the Type I kerogens to between 1.18 and 1.23 g mL⁻¹ for Type II and II-S. The characteristic densities for Type III organic matter are greater still, being 1.27 g mL⁻¹ for the hydrogen-rich Eocene coal, 1.29 g mL⁻¹ for the Carboniferous coal and 1.43 g mL⁻¹ for the oxygen-rich Miocene lignite. Among Type II kerogens, the DGC profile represents a compositional continuum from undegraded alginite through (bacterial) degraded amorphinite; therefore chemical and optical properties change gradually with increasing density. The separation of useful quantities of macerals that occur in only minor amounts is difficult. Such separations require large amounts of starting material and require multiple processing steps. Complete maceral separation for some samples using present methods seems remote. Samples containing macerals with significant density differences due to heteroatom diversity (e.g., preferential sulfur or oxygen concentration in the one maceral), on the other hand, may be successfully separated (e.g., coals and Monterey kerogen). © 1994 American Chemical Society.
SoilJ - An ImageJ plugin for semi-automatized image-processing of 3-D X-ray images of soil columns
NASA Astrophysics Data System (ADS)
Koestel, John
2016-04-01
3-D X-ray imaging is a formidable tool for quantifying soil structural properties, which are known to be extremely diverse. This diversity necessitates the collection of large sample sizes to adequately represent the spatial variability of soil structure at a specific sampling site. One important bottleneck of using X-ray imaging, however, is the large amount of time required by a trained specialist to process the image data, which makes it difficult to process larger numbers of samples. The software SoilJ aims at removing this bottleneck by automating most of the image processing steps needed to analyze image data of cylindrical soil columns. SoilJ is a plugin for the free Java-based image-processing software ImageJ. The plugin is designed to automatically process all images located within a designated folder. In a first step, SoilJ recognizes the outlines of the soil column, after which the column is rotated to an upright position and placed in the center of the canvas. Excess canvas is removed from the images. Then, SoilJ samples the grey values of the column material as well as the surrounding air in the Z-direction. Assuming that the column material (mostly PVC or aluminium) exhibits a spatially constant density, these grey values serve as a proxy for the image illumination at a specific Z-coordinate. Together with the grey values of the air, they are used to correct the image illumination fluctuations that often occur along the axis of rotation during image acquisition. SoilJ also includes an algorithm for beam-hardening artefact removal and extended image segmentation options. Finally, SoilJ integrates the morphology analysis plugins of BoneJ (Doube et al., 2010. BoneJ: Free and extensible bone image analysis in ImageJ. Bone 47: 1076-1079) and provides an ASCII file summarizing these measures for each investigated soil column. In the future it is planned to integrate SoilJ into FIJI, the maintained and updated edition of ImageJ with selected plugins.
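The illumination correction step can be pictured as a per-slice linear rescaling anchored on the two internal references SoilJ samples, the column wall (constant density) and the surrounding air. The sketch below is our own reconstruction of that logic in Python, not SoilJ's Java code; all names are hypothetical:

```python
# Per-slice illumination correction using two reference greys per slice,
# assuming the column wall and the air should look the same in every slice.
import numpy as np

def correct_illumination(stack, wall_grey, air_grey,
                         wall_target=None, air_target=None):
    """Linearly rescale each Z-slice so wall/air greys hit common targets."""
    wall_target = np.median(wall_grey) if wall_target is None else wall_target
    air_target = np.median(air_grey) if air_target is None else air_target
    out = np.empty_like(stack, dtype=np.float32)
    for z in range(stack.shape[0]):
        gain = (wall_target - air_target) / (wall_grey[z] - air_grey[z])
        out[z] = (stack[z] - air_grey[z]) * gain + air_target
    return out

# Demo: a 3-slice stack whose brightness drifts linearly along Z.
stack = np.ones((3, 4, 4)) * np.array([100.0, 110.0, 120.0])[:, None, None]
fixed = correct_illumination(stack,
                             wall_grey=np.array([100.0, 110.0, 120.0]),
                             air_grey=np.array([10.0, 11.0, 12.0]))
print(fixed[:, 0, 0])  # all slices rescaled to the common wall grey, 110
```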
Karimi, H; Ghaedi, M; Shokrollahi, A; Rajabi, H R; Soylak, M; Karami, B
2008-02-28
A simple, selective and rapid flotation method for the separation-preconcentration of trace amounts of cobalt, nickel, iron and copper ions using phenyl 2-pyridyl ketone oxime (PPKO) has been developed prior to their flame atomic absorption spectrometric determination. The influence of pH, the amount of PPKO as collector, the type and amount of eluting agent, the type and amount of surfactant as floating agent, and the ionic strength on the recoveries of the analytes was evaluated. The influences of concomitant ions on the recoveries of the analyte ions were also examined. The enrichment factor was 93. The detection limits based on 3 sigma for Cu, Ni, Co and Fe were 0.7, 0.7, 0.8, and 0.7 ng mL⁻¹, respectively. The method has been successfully applied to the determination of trace amounts of these ions in various real samples.
Device for quickly sensing the amount of O2 in a combustion product gas
NASA Technical Reports Server (NTRS)
Singh, Jag J. (Inventor); Davis, William T. (Inventor); Puster, Richard L. (Inventor)
1990-01-01
A sensing device comprising an O2 sensor, a pump, a compressor, and a heater is provided to quickly sense the amount of O2 in a combustion product gas. A sample of the combustion product gas is compressed to a pressure slightly above one atmosphere by the compressor. The heater then heats the sample to between 800 °C and 900 °C. The pump then flushes the sample against the electrode located in the O2 sensor 6,000 to 10,000 times per second. Reference air at approximately one atmosphere is provided to the electrode of the O2 sensor. Accordingly, the O2 sensor produces a voltage which is proportional to the amount of oxygen in the combustion product gas. This voltage may be used to control the amount of O2 entering the combustion chamber which produces the combustion product gas.
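For orientation only: combustion O2 sensors of this reference-air type are commonly idealized as Nernst concentration cells, whose open-circuit voltage is logarithmic in the oxygen partial-pressure ratio. The abstract itself describes the output as proportional, so the following worked example is an assumption-labeled illustration, not the patent's calibration:

```python
# Minimal sketch assuming a Nernst-type (zirconia) cell:
# E = (R*T)/(4*F) * ln(pO2_ref / pO2_sample). All values illustrative.
import math

R, F = 8.314, 96485.0  # gas constant J/(mol K), Faraday constant C/mol

def nernst_voltage(T_kelvin, p_ref=0.209, p_sample=0.02):
    return R * T_kelvin / (4 * F) * math.log(p_ref / p_sample)

print(f"{nernst_voltage(1123.0):.4f} V")  # ~850 C, 2% O2 -> ~0.057 V
```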
Goodman, Michael; Dana Flanders, W
2007-04-01
We compare methodological approaches for evaluating gene-environment interaction using a planned study of pediatric leukemia as a practical example. We considered three design options: a full case-control study (Option I), a case-only study (Option II), and a partial case-control study (Option III), in which information on controls is limited to environmental exposure only. For each design option we determined its ability to measure the main effects of environmental factor E and genetic factor G, and the interaction between E and G. Using the leukemia study example, we calculated the sample sizes required to detect an odds ratio (OR) of 2.0 for E alone, an OR of 10 for G alone, and a G × E interaction of 3. Option I allows measuring both main effects and the interaction, but requires a total sample size of 1,500 cases and 1,500 controls. Option II allows measuring only the interaction, but requires just 121 cases. Option III allows calculating the main effect of E and the interaction, but not the main effect of G, and requires a total of 156 cases and 133 controls. In this case, the partial case-control study (Option III) appears to be more efficient with respect to its ability to answer the research questions for the amount of resources required. The design options considered in this example are not limited to observational epidemiology and may be applicable in studies of pharmacogenomics, survivorship, and other areas of pediatric ALL research.
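For context, Option I-style sample sizes come from the usual two-group comparison of exposure proportions. A hedged sketch with an assumed control exposure prevalence (the abstract does not state the prevalence or power the authors used, so the numbers differ from theirs):

```python
# Cases/controls per group needed to detect an odds ratio for exposure E,
# given an assumed exposure prevalence p0 among controls (illustrative).
from scipy.stats import norm

def n_per_group(odds_ratio, p0, alpha=0.05, power=0.80):
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))  # prevalence in cases
    pbar = (p0 + p1) / 2
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    num = (za * (2 * pbar * (1 - pbar)) ** 0.5
           + zb * (p1 * (1 - p1) + p0 * (1 - p0)) ** 0.5) ** 2
    return num / (p1 - p0) ** 2

print(round(n_per_group(odds_ratio=2.0, p0=0.1)))  # ~283 per group
```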
Multiresponse imaging system design for improved resolution
NASA Technical Reports Server (NTRS)
Alter-Gartenberg, Rachel; Fales, Carl L.; Huck, Friedrich O.; Rahman, Zia-Ur; Reichenbach, Stephen E.
1991-01-01
Multiresponse imaging is a process that acquires A images, each with a different optical response, and reassembles them into a single image with an improved resolution that can approach 1/√A times the photodetector-array sampling lattice. Our goals are to optimize the performance of this process in terms of the resolution and fidelity of the restored image and to assess the amount of information required to do so. The theoretical approach is based on the extension of both image restoration and rate-distortion theories from their traditional realm of signal processing to image processing which includes image gathering and display.
Occupational aspirations of black South African adolescents.
Watson, M B; Foxcroft, C D; Horn, M A; Stead, G B
1997-04-01
The present study provides a description of the occupational aspirations of 216 black high school students in a special program by the amount of training required (status) and Holland's 1973 typology as well as by gender, age, socioeconomic status, knowledge of self, and occupational knowledge. Analysis indicates that most adolescents aspire to Social and Investigative occupations, and occupations with a high status. Most of this select sample displayed low self- and occupational knowledge. Aspirations appear unrealistic in terms of trends within the labor market, but might be more realistic with effective and relevant guidance programs in schools.
Piper, D.Z.; Isaacs, C.M.
1995-01-01
Approximately 24 samples of the Monterey Formation, Southern California, have been analyzed for their major-element oxide and minor-element content. These analyses allow identification of a detrital fraction, composed of terrigenous quartz, clay minerals, and other Al silicate minerals, and a marine fraction, composed of biogenic silica, calcite, dolomite, organic matter, apatite, and minor amounts of pyrite. The minor-element contents in the marine fraction alone are interpreted to have required, at the time of deposition, a high level of primary productivity in the photic zone and denitrifying bacterial respiration in the bottom water.
Optical free piston cell with constant diameter for use under high pressure
NASA Astrophysics Data System (ADS)
Ishihara, Koji; Takagi, Masahiro
1994-02-01
An optical free piston cell (a modified le Noble and Schlott type optical cell) is described for use in spectrophotometric studies under high pressure. The cell consists of a disk, a cylinder, and a free piston, which are made of quartz and are mounted within a stainless-steel holder. Only a small amount of sample solution (~0.6 cm³), which contacts only quartz, is required for measurements. The path length is fixed (1.2 cm) at ambient pressure, but is self-adjusting at elevated pressure so that no compressibility corrections are necessary.
NASA Technical Reports Server (NTRS)
Hejduk, M. D.; Cowardin, H. M.; Stansbery, Eugene G.
2012-01-01
In performing debris surveys of deep-space orbital regions, the considerable volume of the area to be surveyed and the increased orbital altitude suggest optical telescopes as the most efficient survey instruments; but to proceed this way, methodologies for debris object size estimation using only optical tracking and photometric information are needed. Basic photometry theory indicates that size estimation should be possible if satellite albedo and shape are known. One method for estimating albedo is to try to determine the object's material type photometrically, as one can determine the albedos of common satellite materials in the laboratory. Examination of laboratory filter photometry (using Johnson BVRI filters) on a set of satellite material samples indicates that most material types can be separated at the 1-sigma level via B-R versus R-I color differences with a relatively small amount of required resampling, and objects that remain ambiguous can be resolved by B-R versus B-V color differences and solar radiation pressure differences. To estimate shape, a technique advanced by Hall et al. [1], based on phase-brightness density curves and not requiring any a priori knowledge of attitude, has been modified slightly to try to make it more resistant to the specular characteristics of different materials and to reduce the number of samples necessary to make robust shape determinations. Working from a gallery of idealized debris shapes, the modified technique identifies most shapes within this gallery correctly, also with a relatively small amount of resampling. These results are, of course, based on relatively small laboratory investigations and simulated data, and expanded laboratory experimentation and further investigation with in situ survey measurements will be required in order to assess their actual efficacy under survey conditions; but these techniques show sufficient promise to justify this next level of analysis.
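The "basic photometry theory" argument can be made concrete: for an assumed albedo and phase function, apparent magnitude plus range fix the diameter. The sketch below uses the common diffuse-sphere phase function and an illustrative albedo; it is our formulation of the textbook relation, not the authors' code or calibration:

```python
# Diameter estimate from apparent magnitude, range, and assumed albedo,
# for a Lambertian (diffuse) sphere. All parameter values illustrative.
import math

SUN_MAG = -26.74  # apparent visual magnitude of the Sun

def diffuse_sphere_phase(phase_rad):
    # Normalized diffuse-sphere phase function; equals 2/3 at zero phase.
    return (2 / (3 * math.pi)) * (math.sin(phase_rad)
                                  + (math.pi - phase_rad) * math.cos(phase_rad))

def diameter_m(mag, range_m, albedo=0.175, phase_rad=0.0):
    flux_ratio = 10 ** (-0.4 * (mag - SUN_MAG))  # object flux / solar flux
    return math.sqrt(4 * flux_ratio * range_m ** 2
                     / (albedo * diffuse_sphere_phase(phase_rad)))

print(f"{diameter_m(20.0, 3.6e7):.3f} m")  # mag-20 object at GEO: ~0.1 m
```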
Resolving the Aerosol Piece of the Global Climate Picture
NASA Astrophysics Data System (ADS)
Kahn, R. A.
2017-12-01
Factors affecting our ability to calculate climate forcing and estimate model predictive skill include the direct radiative effects of aerosols and their indirect effects on clouds. Several decades of Earth-observing satellite observations have produced a global aerosol column-amount (AOD) record, but the aerosol microphysical property record required for climate and many air quality applications is lacking. Surface-based photometers offer qualitative aerosol-type classification, and several space-based instruments map aerosol air-mass types under favorable conditions. However, aerosol hygroscopicity, mass extinction efficiency (MEE), and quantitative light absorption must be obtained from in situ measurements. Completing the aerosol piece of the climate picture requires three elements: (1) continuing global AOD and qualitative type mapping from space-based, multi-angle imagers, and aerosol vertical distribution from near-source stereo imaging and downwind lidar; (2) systematic, quantitative in situ observations of particle properties unobtainable from space; and (3) continuing transport modeling to connect observations to sources and to extrapolate limited sampling in space and time. At present, the biggest challenges to producing the needed aerosol data record are filling gaps in particle property observations, maintaining global observing capabilities, and putting the pieces together. Obtaining the PDFs of key particle properties, adequately sampled, is now the leading observational deficiency. One simplifying factor is that, for a given aerosol source and season, aerosol amounts often vary, but particle properties tend to be repeatable. SAM-CAAM (Systematic Aircraft Measurements to Characterize Aerosol Air Masses), a modest aircraft payload deployed frequently, could fill this gap, adding value to the entire satellite data record, improving aerosol property assumptions in retrieval algorithms, and providing MEEs to translate between remote-sensing optical constraints and the aerosol mass book-kept in climate models [Kahn et al., BAMS 2017]. This would also improve connections between remote-sensing particle types and those defined in models. The remaining challenge, maintaining global observing capabilities, requires continued community effort and good budgetary fortune.
Forgery at the Snite Museum of Art? Improving AMS Radiocarbon Dating at the University of Notre Dame
NASA Astrophysics Data System (ADS)
Troyer, Laura; Bagwell, Connor; Anderson, Tyler; Clark, Adam; Nelson, Austin; Skulski, Michael; Collon, Philippe
2017-09-01
The Snite Museum of Art recently obtained several donations of artifacts. Five of the pieces lack sufficient background information to prove authenticity and require further analysis to positively determine the artwork's age. One method to determine the age is radiocarbon dating via Accelerator Mass Spectrometry (AMS), performed at the University of Notre Dame's Nuclear Science Laboratory. Samples are prepared by combustion of a small amount of material and subsequent reduction to carbon in an iron powder matrix (graphitization). The graphitization procedure affects the maximum measurement rate, and a poor graphitization can be detrimental to the AMS measurement of the sample. Previous graphitization procedures resulted in a particle current too low or too inconsistent to optimize AMS measurements. Thus, there was a need to redesign and refine the graphitization system. The finalized process yielded physically darker samples and increased sample currents by two orders of magnitude. Additionally, the first testing of the samples was successful, although analysis of the dates proved inconclusive. AMS measurements will be performed again to obtain better sampling statistics in the hope of narrowing the reported date ranges. Supported by NSF and JINA-CEE.
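Once clean graphite yields stable beam currents, the dating itself is simple arithmetic. A worked example using the standard conventional-age formula with the Libby mean life (this is generic radiocarbon practice, not specific to the Notre Dame setup):

```python
# Conventional radiocarbon age from the fraction modern (F14C) measured
# by AMS: age = -8033 * ln(F14C), with 8033 yr the Libby mean life.
import math

def radiocarbon_age(f14c):
    return -8033.0 * math.log(f14c)

print(round(radiocarbon_age(0.70)))  # F14C = 0.70 -> ~2865 14C yr BP
```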
Tang, Gang; Hou, Wei; Wang, Huaqing; Luo, Ganggang; Ma, Jianwei
2015-01-01
The Shannon sampling principle requires substantial amounts of data to ensure the accuracy of on-line monitoring of roller bearing fault signals, and the resulting data volumes make monitoring cumbersome; thus, a novel method focused on compressed vibration signals for detecting roller bearing faults is developed in this study. Considering that harmonics often represent the fault characteristic frequencies in vibration signals, a compressive sensing frame of characteristic harmonics is proposed to detect bearing faults. A compressed vibration signal is first acquired from a sensing matrix with information preserved through a well-designed sampling strategy. A reconstruction process of the under-sampled vibration signal is then pursued as attempts are made to detect the characteristic harmonics from sparse measurements through a compressive matching pursuit strategy. In the proposed method, bearing fault features are detected directly from the compressed data, typically well before reconstruction is complete, because they depend only on the existence of the characteristic harmonics. Sampling and detection may then be performed simultaneously without complete recovery of the under-sampled signals. The effectiveness of the proposed method is validated by simulations and experiments. PMID:26473858
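The core idea, detecting characteristic harmonics directly from sub-Nyquist samples, can be illustrated with a toy matched-filter search over candidate fault frequencies. This is a simplification of the authors' compressive matching pursuit, with made-up signal parameters:

```python
# Toy sketch: randomly keep 12.5% of a vibration signal, then score
# candidate fault frequencies by matched filtering against sub-sampled
# sinusoid atoms at the fundamental and its harmonics.
import numpy as np

rng = np.random.default_rng(0)
fs, n = 2048, 2048
t = np.arange(n) / fs
signal = (np.sin(2 * np.pi * 157 * t)          # fault fundamental (157 Hz)
          + 0.5 * np.sin(2 * np.pi * 314 * t)  # 2nd harmonic
          + 0.3 * rng.standard_normal(n))      # broadband noise

keep = np.sort(rng.choice(n, size=n // 8, replace=False))
y = signal[keep]                               # compressed measurements

def harmonic_score(f0, n_harm=3):
    score = 0.0
    for k in range(1, n_harm + 1):             # quadrature match per harmonic
        atom_c = np.cos(2 * np.pi * k * f0 * t[keep])
        atom_s = np.sin(2 * np.pi * k * f0 * t[keep])
        score += (y @ atom_c) ** 2 + (y @ atom_s) ** 2
    return score

candidates = np.arange(50, 400, 1.0)
print(candidates[np.argmax([harmonic_score(f) for f in candidates])])  # 157.0
```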
Feist, Peter; Hummon, Amanda B.
2015-01-01
Proteins regulate many cellular functions, and analyzing the presence and abundance of proteins in biological samples is a central focus of proteomics. The discovery and validation of biomarkers, pathways, and drug targets for various diseases can be accomplished using mass spectrometry-based proteomics. However, with mass-limited samples like tumor biopsies, it can be challenging to obtain sufficient amounts of protein to generate high-quality mass spectrometric data. Techniques developed for macroscale quantities recover sufficient amounts of protein from milligram quantities of starting material, but sample losses become crippling with these techniques when only microgram amounts of material are available. To combat this challenge, proteomicists have developed micro-scale techniques that are compatible with decreased sample sizes (100 μg or lower) and still enable excellent proteome coverage. Extraction, contaminant removal, protein quantitation, and sample handling techniques for the microgram protein range are reviewed here, with an emphasis on liquid chromatography and bottom-up mass spectrometry-compatible techniques. Also, a range of biological specimens, including mammalian tissues and model cell culture systems, is discussed. PMID:25664860
Design of a practical model-observer-based image quality assessment method for CT imaging systems
NASA Astrophysics Data System (ADS)
Tseng, Hsin-Wu; Fan, Jiahua; Cao, Guangzhi; Kupinski, Matthew A.; Sainath, Paavana
2014-03-01
The channelized Hotelling observer (CHO) is a powerful method for quantitative image quality evaluation of CT systems and their image reconstruction algorithms. It has recently been used to validate the dose reduction capability of iterative image-reconstruction algorithms implemented on CT imaging systems. The use of the CHO for routine and frequent system evaluations is desirable both for quality assurance evaluations and for further system optimization. The use of channels substantially reduces the amount of data required to achieve accurate estimates of observer performance; however, the number of scans required is still large even with the use of channels. This work explores different data reduction schemes and designs a new approach that requires only a few CT scans of a phantom. For this work, the leave-one-out likelihood (LOOL) method developed by Hoffbeck and Landgrebe is studied as an efficient method of estimating the covariance matrices needed to compute CHO performance. Three different kinds of approaches are included in the study: a conventional CHO estimation technique with a large sample size, a conventional technique with fewer samples, and the new LOOL-based approach with fewer samples. The mean value and standard deviation of the area under the ROC curve (AUC) are estimated by a shuffle method. Both simulation and real-data results indicate that an 80% data reduction can be achieved without loss of accuracy. This data reduction makes the proposed approach a practical tool for routine CT system assessment.
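A minimal CHO sketch may help fix ideas: channelize signal-present and signal-absent images, estimate the pooled channel covariance, and form the Hotelling template. This generic implementation uses random orthogonal channels standing in for the usual Gabor or Laguerre-Gauss channels, and plain sample covariance rather than the authors' LOOL estimator:

```python
# Generic channelized Hotelling observer on synthetic images.
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_ch, n_img = 64 * 64, 10, 200

# Channel matrix U (n_pix x n_ch); in practice these are e.g. Gabor channels.
U, _ = np.linalg.qr(rng.standard_normal((n_pix, n_ch)))

lesion = np.zeros(n_pix); lesion[:50] = 0.5          # toy signal profile
bg = rng.standard_normal((n_img, n_pix))             # signal-absent images
sp = bg + lesion                                     # signal-present images

v_bg, v_sp = bg @ U, sp @ U                          # channelized data
S = 0.5 * (np.cov(v_bg.T) + np.cov(v_sp.T))          # pooled channel covariance
w = np.linalg.solve(S, v_sp.mean(0) - v_bg.mean(0))  # Hotelling template

t_bg, t_sp = v_bg @ w, v_sp @ w                      # decision variables
d2 = (t_sp.mean() - t_bg.mean()) ** 2 / (0.5 * (t_sp.var() + t_bg.var()))
print(f"detectability d' = {d2 ** 0.5:.2f}")
```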
Code of Federal Regulations, 2011 CFR
2011-10-01
...-Federal cost-share required for a feasibility study? 404.34 Section 404.34 Public Lands: Interior... for a feasibility study? Yes. Reclamation may reduce the non-Federal cost-share required for a feasibility study to an amount less than 50 percent of the study costs if: (a) Reclamation determines that...
Effect of reproductive rate on minimum habitat requirements of forest-breeding birds
Melissa D. Vance; Lenore Fahrig; Curtis H. Flather
2003-01-01
A major challenge facing conservation biologists and wildlife managers is to predict how fauna will respond to habitat loss. Different species require different amounts of habitat for population persistence, and species' reproductive rates have been identified as one of the major factors affecting these habitat-amount requirements. The purpose of this study was to test...
Library Construction from Subnanogram DNA for Pelagic Sea Water and Deep-Sea Sediments
Hirai, Miho; Nishi, Shinro; Tsuda, Miwako; Sunamura, Michinari; Takaki, Yoshihiro; Nunoura, Takuro
2017-01-01
Shotgun metagenomics is a low-bias technology for assessing environmental microbial diversity and function. However, the requirement for a sufficient amount of DNA and the contamination of inhibitors in environmental DNA lead to difficulties in constructing a shotgun metagenomic library. We herein examined metagenomic library construction from subnanogram amounts of input environmental DNA from subarctic surface water and deep-sea sediments using two library construction kits: the KAPA Hyper Prep Kit and the Nextera XT DNA Library Preparation Kit, with several modifications. The influence of chemical contaminants associated with these environmental DNA samples on library construction was also investigated. Overall, shotgun metagenomic libraries were constructed from 1 pg to 1 ng of input DNA using both kits without severe microbial contamination of the libraries. However, the libraries constructed from 1 pg of input DNA exhibited larger biases in GC content, k-mers, or small subunit (SSU) rRNA gene compositions than those constructed from 10 pg to 1 ng of DNA. The lower limit of input DNA for low-bias library construction in this study was 10 pg. Moreover, we revealed that technology-dependent biases (physical fragmentation and linker ligation vs. tagmentation) were larger than those due to the amount of input DNA. PMID:29187708
Anthropometry as a predictor of vertical jump heights derived from an instrumented platform.
Caruso, John F; Daily, Jeremy S; Mason, Melissa L; Shepherd, Catherine M; McLagan, Jessica R; Marshall, Mallory R; Walker, Ron H; West, Jason O
2012-01-01
The current study examined the relationship between vertical jump height and anthropometry using jump data obtained from an instrumented platform. Our methods required college-aged (n = 177) subjects to make 3 visits to our laboratory to measure the following anthropometric variables: height, body mass, upper arm length (UAL), lower arm length, upper leg length, and lower leg length. Per jump, maximum height was measured in 3 ways: from the subjects' takeoff, from hang time, and as they landed on the platform. Standard multivariate regression assessed how well anthropometry predicted the criterion variance per gender (men, women, pooled) and jump height method (takeoff, hang time, landing) combination. Z-scores indicated that small amounts of the total data were outliers. The results showed that the majority of outliers were from jump heights calculated as women landed on the platform. With the genders pooled, anthropometry predicted a significant (p < 0.05) amount of variance from jump heights calculated from both takeoff and hang time. The anthropometry-vertical jump relationship was not significant for heights calculated as subjects landed on the platform, likely due to the female outliers. Yet anthropometric data of men did predict a significant amount of variance from heights calculated when they landed on the platform; univariate correlations of men's data revealed that UAL was the best predictor. It was concluded that the large sample of men's data led to greater data heterogeneity and a higher univariate correlation. Because of our sample size and data heterogeneity, practical applications suggest that coaches may find our results best predict performance for a variety of college-aged athletes and vertical jump enthusiasts.
Vasylechko, Volodymyr O; Gryshchouk, Galyna V; Zakordonskiy, Victor P; Vyviurska, Olga; Pashuk, Andriy V
2015-01-01
In spite of the fact that terbium is one of the rarest elements in the Earth's crust, it is frequently used in the production of high-technology materials. As a result, an effective combination of a sample preparation procedure and a detection method for terbium ions in different matrices is much needed. A solid-phase extraction procedure with natural Transcarpathian clinoptilolite, thermally activated at 350 °C, was used to preconcentrate trace amounts of terbium ions in aqueous solutions for final spectrophotometric determination with arsenazo III. Thermogravimetric investigations confirmed the existence of relations between the changes that appear during dehydration of the calcined zeolite and its sorption affinity. Since the maximum sorption capacity towards terbium was observed at pH 8.25, a borate buffer medium (2.5 × 10⁻⁴ M) was used to maintain the ionic strength and solution acidity. Terbium was quantitatively removed from the solid-phase extraction column with a 1.0 M solution of sodium chloride (pH 2.5). The linearity of the proposed method was evaluated in the range of 2.5-200 ng mL⁻¹ with a detection limit of 0.75 ng mL⁻¹. Given the acceptable recoveries (93.3-102.0%) and RSD values (6-7.1%) from spiked tap water, the developed method can be successfully applied to the determination of trace amounts of terbium ions in the presence of the major components of water. Graphical abstract: Sorption of terbium(III) ions on clinoptilolite.
Milbury, Coren A.; Chen, Clark C.; Mamon, Harvey; Liu, Pingfang; Santagata, Sandro; Makrigiorgos, G. Mike
2011-01-01
Thorough screening of cancer-specific biomarkers, such as DNA mutations, can require large amounts of genomic material; however, the amount of genomic material obtained from some specimens (such as biopsies, fine-needle aspirations, circulating DNA or tumor cells, and histological slides) may limit the analyses that can be performed. Furthermore, mutant alleles may be at low abundance relative to wild-type DNA, reducing detection ability. We present a multiplex-PCR approach tailored to amplify targets of interest from small amounts of precious specimens, for extensive downstream detection of low-abundance alleles. Using 3 ng of DNA (1000 genome-equivalents), we amplified the 10 coding exons (2-11) of TP53 via multiplex-PCR. Following multiplex-PCR, we performed COLD-PCR (co-amplification of major and minor alleles at lower denaturation temperature) to enrich low-abundance variants and high resolution melting (HRM) to screen for aberrant melting profiles. Mutation-positive samples were sequenced. Evaluation of mutation-containing dilutions revealed improved sensitivities after COLD-PCR over conventional PCR. COLD-PCR improved HRM sensitivity by approximately threefold to sixfold. Similarly, COLD-PCR improved mutation identification in sequence chromatograms over conventional PCR. In clinical specimens, eight mutations were detected via conventional-PCR-HRM, whereas 12 were detected by COLD-PCR-HRM, yielding a 33% improvement in mutation detection. In summary, we demonstrate an efficient approach to increase screening capabilities from limited DNA material via multiplex-PCR and to improve mutation detection sensitivity via COLD-PCR amplification. PMID:21354058
Code of Federal Regulations, 2013 CFR
2013-07-01
... and individual forms of bonds so long as the amount of the bond penalty is sufficient to meet the... bond must allow for recovery by each plan in an amount at least equal to that which would be required... by the terms of the bond or rider to the bond or by separate agreement among the parties concerned...
Code of Federal Regulations, 2014 CFR
2014-07-01
... and individual forms of bonds so long as the amount of the bond penalty is sufficient to meet the... bond must allow for recovery by each plan in an amount at least equal to that which would be required... by the terms of the bond or rider to the bond or by separate agreement among the parties concerned...
Code of Federal Regulations, 2010 CFR
2010-07-01
... and individual forms of bonds so long as the amount of the bond penalty is sufficient to meet the... bond must allow for recovery by each plan in an amount at least equal to that which would be required... by the terms of the bond or rider to the bond or by separate agreement among the parties concerned...
Code of Federal Regulations, 2011 CFR
2011-07-01
... and individual forms of bonds so long as the amount of the bond penalty is sufficient to meet the... bond must allow for recovery by each plan in an amount at least equal to that which would be required... by the terms of the bond or rider to the bond or by separate agreement among the parties concerned...
Code of Federal Regulations, 2012 CFR
2012-07-01
... and individual forms of bonds so long as the amount of the bond penalty is sufficient to meet the... bond must allow for recovery by each plan in an amount at least equal to that which would be required... by the terms of the bond or rider to the bond or by separate agreement among the parties concerned...
Determination of a Screening Metric for High Diversity DNA Libraries.
Guido, Nicholas J; Handerson, Steven; Joseph, Elaine M; Leake, Devin; Kung, Li A
2016-01-01
The fields of antibody engineering, enzyme optimization and pathway construction rely increasingly on screening complex variant DNA libraries. These highly diverse libraries allow researchers to sample a maximized sequence space; and therefore, more rapidly identify proteins with significantly improved activity. The current state of the art in synthetic biology allows for libraries with billions of variants, pushing the limits of researchers' ability to qualify libraries for screening by measuring the traditional quality metrics of fidelity and diversity of variants. Instead, when screening variant libraries, researchers typically use a generic, and often insufficient, oversampling rate based on a common rule-of-thumb. We have developed methods to calculate a library-specific oversampling metric, based on fidelity, diversity, and representation of variants, which informs researchers, prior to screening the library, of the amount of oversampling required to ensure that the desired fraction of variant molecules will be sampled. To derive this oversampling metric, we developed a novel alignment tool to efficiently measure frequency counts of individual nucleotide variant positions using next-generation sequencing data. Next, we apply a method based on the "coupon collector" probability theory to construct a curve of upper bound estimates of the sampling size required for any desired variant coverage. The calculated oversampling metric will guide researchers to maximize their efficiency in using highly variant libraries.
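The coupon-collector bound the authors invoke is easy to state: if m clones are drawn uniformly from n equally likely variants, the expected fraction seen at least once is 1 - (1 - 1/n)^m, so covering a fraction f requires roughly m = n·ln(1/(1 - f)) clones. The sketch below shows only this idealized uniform case; the authors' full metric additionally folds in measured fidelity and unequal variant representation:

```python
# Oversampling needed so the expected coverage of an n-variant library
# reaches fraction f, under the idealized uniform-sampling assumption.
import math

def clones_needed(n_variants, coverage):
    return math.ceil(-n_variants * math.log(1 - coverage))

for f in (0.90, 0.95, 0.99):
    print(f, clones_needed(10_000, f))
# 0.90 -> 23026, 0.95 -> 29958, 0.99 -> 46052 clones for 10,000 variants
```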
Nickel and chromium levels in the saliva of patients with fixed orthodontic appliances.
Yassaei, Soghra; Dadfarnia, Shayesta; Ahadian, Hakima; Moradi, Farshad
2013-01-01
The purpose of this study was to investigate the salivary concentration of nickel and chromium of patients undergoing orthodontic treatment. In this study 32 patients who presented to the orthodontic clinic were selected. The salivary samples were taken from the patients in four stages: before appliance placement and 20 days, 3 months, and 6 months following appliance placement. The salivary samples were collected in a plastic tube and were stored in the freezer before analysis. The samples were then transferred to the laboratory, and the amounts of metals were determined by graphite furnace atomic absorption spectrometry with an autosampler. Each sample was analyzed three times, and the average was reported. It was found that the average amount of nickel in the saliva 20 days after appliance placement was 0.8 μg/L more than before placement. Also, the amount of salivary nickel 20 days after the appliance placement was more than at the other stages, but the differences were not significant. The average amount of chromium in the saliva was found to be between 2.6 and 3.6 μg/L. The amount of chromium at all stages after appliance placement was more than before, but the differences between the chromium levels of saliva at all stages were not significant. There was no significant difference in the average amount of salivary nickel and chromium of patients at various stages of orthodontic appliance placement.
Organophosphorous residue in Liza aurata and Cyprinus carpio
Shayeghi, Mansoreh; Khoobdel, Mehdi; Bagheri, Fatemeh; Abtahi, Mohammad; Zeraati, Hojjatollah
2012-01-01
Objective: To determine the amount of azinphos methyl and diazinon residues in two river fishes, Liza aurata and Cyprinus carpio, in the north of Iran. Methods: This study was done during 2006-2007. In this survey, 152 water and fish samples from the Gorgan and Qarasu rivers, north of Iran, were investigated. Sampling was done in three predetermined stations along each river. Organophosphorus compounds (OPs) were extracted from the fishes and the water of the rivers. After extraction, purification and concentration processes, the amount and type of insecticides in water and fish samples were determined by high performance thin layer chromatography (HPTLC). Results: There was a significant difference in the residue of the insecticides in the water and fish samples between summer and the other seasons in the two rivers. The highest amount of insecticide residue was seen during summer. In both rivers, the amount of diazinon and azinphos methyl residues in the two fishes was more than 2 000 mg/L in summer. There was no significant difference in insecticide residue between the fishes in the two rivers. The diazinon residue was higher than the standard limits in both rivers during the spring and the summer, but the residual amount of azinphos methyl was higher than the standard limits only during the summer and only in the Qarasu River. Conclusions: It can be concluded that the amount of OPs in the water and the two fishes, Liza aurata and Cyprinus carpio, is higher than the permitted levels. PMID:23569972
Issues in the analysis of low-content gold mining samples by the fire assay technique
NASA Astrophysics Data System (ADS)
Cetean, Valentina
2016-04-01
The classic technique for analyzing samples with low gold content - below 0.1 g/t (100 ppb, parts per billion) - whether ore or auriferous sediments, involves preparing the sample by fire assay extraction, followed by chemical attack with aqua regia (hydrochloric and nitric acids) and measurement of the gold content by atomic absorption spectrometry or inductively coupled plasma mass spectrometry. The issues raised by this analysis are well known to laboratories worldwide, commercial and research alike. The author's experience with this method, accumulated in a Romanian laboratory with more than 40 years of practice (no longer operating since 2014), confirms that obtaining reliable results requires considerable attention, a large amount of work, and the involvement of an experienced fire assayer. The conclusion for a research laboratory is that the most reliable and statistically valid results are reached for samples containing more than 100 ppb gold; below this value the degree of confidence is lower than 90%. For samples below 50 ppb, it usually does not exceed 50-70% unless each stage is very strictly controlled, which requires additional hours for successive extraction tests and a more precise knowledge of the other constituents of the sample (Cu, Sb, As, sulfur/sulfides, Te, organic matter, etc.) or impurities. The most important operation is the preparation, namely: - grinding and splitting of the sample (which can cause uneven distribution of gold flakes between duplicate samples); - pyrometallurgical recovery of gold in the fire assay stage, requiring precise temperature control in the furnace during all stages (fusion and cupellation) and adjustment of the flux components to produce a successful fusion for the given sample matrix and content; - reducing the sample weight to decrease the amount of impurities that can concentrate in the lead button during the oxidation stage. Better metal recovery and fewer errors for low-gold-content samples are achieved in this case by: - managing the quantity of one or more flux components according to the chemical composition of the sample (sometimes simply by observing the behavior and visual characteristics of the Au + Ag lead button/bead and the resulting slag); - adding gold-free silver, which is removed by chemical attack with aqua regia after the fire assay stage. In the instrumental stage, samples with less than 100 ppb gold gave similar values by both techniques, atomic absorption spectrometry and inductively coupled plasma mass spectrometry, taking into account their different detection limits. Quality control with a certified reference material of known content is mandatory, both in the fire assay stage and in the instrumental reading stage. This abstract was written in the frame of the SUSMIN project: "Tools for sustainable gold mining in EU".
Huang, Ke-Jing; Li, Jing; Liu, Yan-Ming; Wang, Lan
2013-02-01
The graphene functionalized with (3-aminopropyl) triethoxysilane was synthesized by a simple hydrothermal reaction and applied as SPE sorbents to extract trace polycyclic aromatic hydrocarbons (PAHs) from environmental water samples. These sorbents possess high adsorption capacity and extraction efficiency due to strong adsorption ability of carbon materials and large specific surface area of nanoparticles, and only 10 mg of sorbents are required to extract PAHs from 100 mL water samples. Several condition parameters, such as eluent and its volume, adsorbent amount, sample volume, sample pH, and sample flow rate, were optimized to achieve good sensitivity and precision. Under the optimized extraction conditions, the method showed good linearity in the range of 1-100 μg/L, repeatability of the extraction (the RSDs were between 1.8 and 2.9%, n = 6), and satisfactory detection limits of 0.029-0.1 μg/L. The recoveries of PAHs spiked in environmental water samples ranged from 84.6 to 109.5%. All these results demonstrated that this new SPE technique was a viable alternative to conventional enrichment techniques for the extraction and analysis of PAHs in complex samples. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Yu, Shaohui; Xiao, Xue; Ding, Hong; Xu, Ge; Li, Haixia; Liu, Jing
2017-08-05
Quantitative analysis is very difficult for the excitation-emission fluorescence spectra of multi-component mixtures whose fluorescence peaks overlap severely. Partial least squares (PLS), an effective method for quantitative analysis, can extract latent variables from both the independent variables and the dependent variables, so it can model multiple correlations between variables. However, several factors usually affect the prediction results of partial least squares, such as noise and the distribution and number of samples in the calibration set. This work focuses on these calibration-set problems. Firstly, the outliers in the calibration set are removed by leave-one-out cross-validation. Then, according to two different prediction requirements, the EWPLS method and the VWPLS method are proposed. The independent and dependent variables are weighted in the EWPLS method by the maximum error of the recovery rate, and in the VWPLS method by the maximum variance of the recovery rate. Three organic compounds with severely overlapping excitation-emission fluorescence spectra were selected for the experiments. The step adjustment parameter, the number of iterations, and the number of samples in the calibration set are discussed. The results show that the EWPLS and VWPLS methods are superior to the PLS method, especially for small calibration sets. Copyright © 2017 Elsevier B.V. All rights reserved.
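As a baseline for what the paper improves on, an ordinary PLS calibration with leave-one-out screening of the calibration set can be sketched as follows. The spectra, concentrations, and the 3-sigma outlier rule are all illustrative, and this is plain PLS, not the EWPLS/VWPLS weighting itself:

```python
# Ordinary PLS calibration of overlapped (synthetic) fluorescence spectra,
# with leave-one-out residuals used to flag calibration-set outliers.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(2)
wl = np.linspace(0, 1, 120)
pure = np.vstack([np.exp(-((wl - c) / 0.12) ** 2) for c in (0.40, 0.50, 0.60)])
C = rng.uniform(0.1, 1.0, (30, 3))                   # concentrations
X = C @ pure + 0.01 * rng.standard_normal((30, 120)) # mixture spectra + noise

loo_err = np.empty(30)
for i, (tr, te) in enumerate(LeaveOneOut().split(X)):
    m = PLSRegression(n_components=3).fit(X[tr], C[tr])
    loo_err[i] = np.abs(m.predict(X[te]) - C[te]).max()

# 3-sigma rule flags gross outliers; on clean synthetic data it keeps all.
keep = loo_err < loo_err.mean() + 3 * loo_err.std()
model = PLSRegression(n_components=3).fit(X[keep], C[keep])
print(f"kept {keep.sum()} of 30 calibration samples")
```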
Loroch, Stefan; Schommartz, Tim; Brune, Wolfram; Zahedi, René Peiman; Sickmann, Albert
2015-05-01
Quantitative proteomics and phosphoproteomics have become key disciplines in understanding cellular processes. Fundamental research can be done using cell culture, providing researchers with virtually infinite sample amounts. In contrast, clinical, pre-clinical and biomedical research is often restricted to minute sample amounts and requires an efficient analysis with only micrograms of protein. To address this issue, we generated a highly sensitive workflow for combined LC-MS-based quantitative proteomics and phosphoproteomics by refining an ERLIC-based 2D phosphoproteomics workflow into an ERLIC-based 3D workflow covering the global proteome as well. The resulting 3D strategy was successfully used for an in-depth quantitative analysis of both the proteome and the phosphoproteome of murine cytomegalovirus-infected mouse fibroblasts, a model system for host cell manipulation by a virus. In a 2-plex SILAC experiment with 150 μg of a tryptic digest per condition, the 3D strategy enabled the quantification of ~75% more proteins and even ~134% more peptides compared to the 2D strategy. Additionally, we could quantify ~50% more phosphoproteins by non-phosphorylated peptides, concurrently yielding insights into changes at the levels of protein expression and phosphorylation. Besides its sensitivity, our novel three-dimensional ERLIC strategy has the potential for semi-automated sample processing, making it a suitable approach for future clinical, pre-clinical and biomedical research. Copyright © 2015. Published by Elsevier B.V.
Analysis of ²¹⁰Pb in water samples with plastic scintillation resins.
Lluch, E; Barrera, J; Tarancón, A; Bagán, H; García, J F
2016-10-12
²¹⁰Pb is a radioactive lead isotope present in the environment as a member of the ²³⁸U decay chain. Since it is a relatively long-lived radionuclide (T½ = 22.2 years), its analysis is of interest in radiation protection and the geochronology of sediments and artwork. Here, we present a method for analysing ²¹⁰Pb using plastic scintillation resins (PS resins) packaged in solid-phase extraction columns (SPE cartridges). The advantages of this method are its selectivity, the low limit of detection, as well as reductions in the amount of time and reagents required for analysis and the quantity of waste generated. The PS resins used in this study were composed of a selective extractant (4',4″(5″)-di-tert-butyldicyclohexano-18-crown-6 in 1-octanol) covering the surface of plastic scintillation microspheres. Once the amount of extractant (1:1/4) and the medium of separation (2 M HNO3) were optimised, PS resins in SPE cartridges were calibrated with a standard solution of ²¹⁰Pb. ²¹⁰Pb could be fully separated from its daughters, ²¹⁰Bi and ²¹⁰Po, with a recovery value of 91(3)% and a detection efficiency of 44(3)%. Three spiked water samples (one underground and two river water samples) were analysed in triplicate with deviations lower than 10%, demonstrating the validity of the PS resin method for ²¹⁰Pb analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
The analysis of forms of sulfur in ancient sediments and sedimentary rocks: comments and cautions
Rice, C.A.; Tuttle, M.L.; Reynolds, R.L.
1993-01-01
Assumptions commonly made during analysis of the amount of monosulfides [acid-volatile sulfides (AVS)] and disulfides in modern sediments may not be valid for ancient sedimentary rocks. It is known that ferric iron can oxidize H2S during AVS analysis unless a reducing agent such as stannous chloride is added to the treatment. In addition, some monosulfides such as greigite and pyrrhotite require heat during the AVS analysis in order to dissolve completely. However, the use of heat and/or stannous chloride in the AVS treatment may partially dissolve disulfides, and it is generally recommended that stannous chloride not be used in the AVS treatment for modern sediments. Most of the monosulfides are assumed to be recovered as AVS without the addition of stannous chloride. This study investigates the recovery of monosulfides during sulfur speciation analysis with application to ancient sedimentary rocks. Sulfur in samples containing naturally occurring greigite and mackinawite or pyrite was measured using variations of a common sulfur-speciation scheme. The sulfur-speciation scheme analyzes for monosulfide sulfur, disulfide sulfur, elemental sulfur, inorganic sulfate and organically bound sulfur. The effects of heat, stannous chloride and ferric iron on the amounts of acid-volatile sulfide and disulfide recovered during treatment for AVS were investigated. Isotopic compositions of the recovered sulfur species along with yields from an extended sulfur-speciation scheme were used to quantify the effects. Hot 6 N HCl AVS treatment recovers > 60% of the monosulfides as AVS in samples containing pure greigite and mackinawite. The remaining monosulfide sulfur is recovered in a subsequent elemental sulfur extraction. Hot 6 N HCl plus stannous chloride recovers 100% of the monosulfides as AVS. The addition of ferric iron to pure greigite and mackinawite samples during AVS treatment without stannous chloride decreased the amount of monosulfides recovered as AVS and, if present in great enough concentration, oxidized some of the AVS to a form not recovered in later treatments. The hot stannous chloride AVS treatments dissolve <5% of well-crystallized pyrite in this study. The amount of pyrite dissolved depends on grain size and crystallinity. Greigite in ancient sedimentary rocks was quantitatively recovered as AVS only with hot 6 N HCl plus stannous chloride. Hot 6 N HCl AVS treatment of these rocks did not detect any monosulfides in most samples. A subsequent elemental sulfur extraction did not completely recover the oxidized monosulfides. Therefore, the use of stannous chloride plus heat is recommended in the AVS treatment of ancient sedimentary rocks if monosulfides are present and of interest. All assumptions about the amount of monosulfides and disulfides recovered with the sulfur-speciation scheme used should be verified by extended sulfur-speciation and/or isotopic analysis of the species recovered. © 1993.
Statistical Analysis Techniques for Small Sample Sizes
NASA Technical Reports Server (NTRS)
Navard, S. E.
1984-01-01
The problem of small sample sizes encountered in the analysis of space-flight data is examined. Because only a small amount of data is available, careful analyses are essential to extract the maximum amount of information with acceptable accuracy. Statistical analysis of small samples is described. The background material necessary for understanding statistical hypothesis testing is outlined, and the various tests which can be done on small samples are explained. Emphasis is on the underlying assumptions of each test and on the considerations needed to choose the most appropriate test for a given type of analysis.
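As a concrete instance of the kind of test discussed, here is a two-sample comparison on five observations per group. The numbers are hypothetical, and, as the report emphasizes, the validity of the result rests on the small-sample normality assumption:

```python
# Welch's two-sample t-test on two small (n = 5) groups of measurements.
from scipy import stats

flight = [4.1, 3.8, 4.4, 4.0, 3.9]   # hypothetical flight measurements
ground = [4.6, 4.5, 4.9, 4.4, 4.7]   # hypothetical ground controls

t, p = stats.ttest_ind(flight, ground, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")   # significance hinges on normality here
```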
Seifertová, Marta; Čechová, Eliška; Llansola, Marta; Felipo, Vicente; Vykoukalová, Martina; Kočan, Anton
2017-10-01
We developed a simple analytical method for the simultaneous determination of representatives of various groups of neurotoxic insecticides (carbaryl, chlorpyrifos, cypermethrin, and α-endosulfan and β-endosulfan and their metabolite endosulfan sulfate) in limited amounts of animal tissues containing different amounts of lipids. Selected tissues (rodent fat, liver, and brain) were extracted in a special in-house-designed mini-extractor constructed on the basis of the Soxhlet and Twisselmann extractors. A dried tissue sample placed in a small cartridge was extracted, while the nascent extract was simultaneously filtered through a layer of sodium sulfate. The extraction was followed by combined clean-up, including gel permeation chromatography (in case of high lipid content), ultrasonication, and solid-phase extraction chromatography using C18 on silica and aluminum oxide. Gas chromatography coupled with high-resolution mass spectrometry was used for analyte separation, detection, and quantification. Average recoveries for individual insecticides ranged from 82 to 111%. Expanded measurement uncertainties were generally lower than 35%. The developed method was successfully applied to rat tissue samples obtained from an animal model dealing with insecticide exposure during brain development. This method may also be applied to the analytical treatment of small amounts of various types of animal and human tissue samples. A significant advantage achieved using this method is high sample throughput due to the simultaneous treatment of many samples. Graphical abstract: Optimized workflow for the determination of selected insecticides in small amounts of animal tissue, including the newly developed mini-extractor.
Non-linear mixing effects on mass-47 CO2 clumped isotope thermometry: Patterns and implications.
Defliese, William F; Lohmann, Kyger C
2015-05-15
Mass-47 CO(2) clumped isotope thermometry requires relatively large (~20 mg) samples of carbonate minerals due to detection limits and shot noise in gas source isotope ratio mass spectrometry (IRMS). However, it is unreasonable to assume that natural geologic materials are homogeneous on the scale required for sampling. We show that sample heterogeneities can cause offsets from equilibrium Δ(47) values that are controlled solely by end member mixing and are independent of equilibrium temperatures. A numerical model was built to simulate and quantify the effects of end member mixing on Δ(47). The model was run in multiple possible configurations to produce a dataset of mixing effects. We verified that the model accurately simulated real phenomena by comparing two artificial laboratory mixtures measured using IRMS to model output. Mixing effects were found to be dependent on end member isotopic composition in δ(13)C and δ(18)O values, and independent of end member Δ(47) values. Both positive and negative offsets from equilibrium Δ(47) can occur, and the sign is dependent on the interaction between end member isotopic compositions. The overall magnitude of mixing offsets is controlled by the amount of variability within a sample; the larger the disparity between end member compositions, the larger the mixing offset. Samples varying by less than 2‰ in both δ(13)C and δ(18)O values have mixing offsets below current IRMS detection limits. We recommend the use of isotopic subsampling for δ(13)C and δ(18)O values to determine sample heterogeneity, and to evaluate any potential mixing effects in samples suspected of being heterogeneous. Copyright © 2015 John Wiley & Sons, Ltd.
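To make the nonlinearity concrete, the sketch below mixes two CO2 end members and recomputes Δ(47) of the mixture against the mixture's own stochastic reference. It is a minimal illustration of the mixing principle described above, not the authors' model: the 17O terms are dropped for brevity, and the reference ratios and end-member compositions are hypothetical.

```python
# Minimal sketch of nonlinear Delta47 mixing. Assumptions: 17O ignored,
# reference ratios approximate, end members hypothetical.
R13_REF = 0.011180    # 13C/12C reference ratio (approximate VPDB)
R18_REF = 0.0020672   # 18O/16O reference ratio (approximate)

def bulk_ratios(d13C, d18O):
    """Bulk isotope ratios from delta values in permil."""
    return R13_REF * (1 + d13C / 1000), R18_REF * (1 + d18O / 1000)

def ratio47(d13C, d18O, D47):
    """Mass-47/mass-44 ratio: stochastic (~2*R13*R18) scaled by Delta47."""
    r13, r18 = bulk_ratios(d13C, d18O)
    return 2 * r13 * r18 * (1 + D47 / 1000), r13, r18

def mixture_D47(f, em1, em2):
    """Delta47 of a molar mixture f*em1 + (1-f)*em2; em = (d13C, d18O, D47)."""
    r47_1, r13_1, r18_1 = ratio47(*em1)
    r47_2, r13_2, r18_2 = ratio47(*em2)
    # Isotopologue ratios mix linearly in mole fraction (to first order) ...
    r47_m = f * r47_1 + (1 - f) * r47_2
    r13_m = f * r13_1 + (1 - f) * r13_2
    r18_m = f * r18_1 + (1 - f) * r18_2
    # ... but the stochastic reference is rebuilt from the mixture's bulk
    # composition, which is what makes Delta47 mixing nonlinear.
    return 1000 * (r47_m / (2 * r13_m * r18_m) - 1)

em1 = (-10.0, -8.0, 0.65)   # hypothetical end member 1 (d13C, d18O, Delta47)
em2 = (2.0, 1.0, 0.65)      # hypothetical end member 2, same Delta47
for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f, round(mixture_D47(f, em1, em2), 4))
# Intermediate mixtures come out above 0.65 even though both end members are
# 0.65, and the offset grows with the spread between end-member compositions.
```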
Rapid Microbial Sample Preparation from Blood Using a Novel Concentration Device
Boardman, Anna K.; Campbell, Jennifer; Wirz, Holger; Sharon, Andre; Sauer-Budge, Alexis F.
2015-01-01
Appropriate care for bacteremic patients is dictated by the amount of time needed for an accurate diagnosis. However, the concentration of microbes in the blood is extremely low in these patients (1–100 CFU/mL), traditionally requiring growth (blood culture) or amplification (e.g., PCR) for detection. Current culture-based methods can take a minimum of two days, while faster methods like PCR require a sample free of inhibitors (i.e., blood components). Though commercial kits exist for the removal of blood from these samples, they typically capture only DNA, thereby necessitating the use of blood culture for antimicrobial testing. Here, we report a novel, scaled-up sample preparation protocol carried out in a new microbial concentration device. The process can efficiently lyse 10 mL of bacteremic blood while maintaining the microorganisms’ viability, giving a 30-μL final output volume. A suite of six microorganisms (Staphylococcus aureus, Streptococcus pneumoniae, Escherichia coli, Haemophilus influenzae, Pseudomonas aeruginosa, and Candida albicans) at a range of clinically relevant concentrations was tested. All of the microorganisms had recoveries greater than 55% at the highest tested concentration of 100 CFU/mL, with three of them having over 70% recovery. At the lowest tested concentration of 3 CFU/mL, two microorganisms had recoveries of ca. 40–50% while the other four gave recoveries greater than 70%. Using a Taqman assay for methicillin-sensitive S. aureus (MSSA) to prove the feasibility of downstream analysis, we show that our microbial pellets are clean enough for PCR amplification. PCR testing of 56 spiked-positive and negative samples gave a specificity of 0.97 and a sensitivity of 0.96, showing that our sample preparation protocol holds great promise for the rapid diagnosis of bacteremia directly from a primary sample. PMID:25675242
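As a quick gloss on the reported diagnostic figures, the snippet below shows how sensitivity and specificity follow from a confusion matrix. The per-class counts are one hypothetical split of the 56 samples that reproduces the quoted 0.96/0.97; the actual counts are not given in the abstract.

```python
# Hypothetical confusion-matrix counts consistent with the abstract's figures.
def sensitivity(tp, fn):
    """Fraction of true positives detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of true negatives correctly called negative."""
    return tn / (tn + fp)

tp, fn = 26, 1    # spiked-positive samples: detected / missed
tn, fp = 28, 1    # negative samples: correctly negative / false alarms

print(round(sensitivity(tp, fn), 2))   # 0.96
print(round(specificity(tn, fp), 2))   # 0.97
```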
Sample normalization methods in quantitative metabolomics.
Wu, Yiman; Li, Liang
2016-01-22
To reveal metabolomic changes caused by a biological event in quantitative metabolomics, it is critical to use an analytical tool that can perform accurate and precise quantification to examine the true concentration differences of individual metabolites found in different samples. A number of steps are involved in metabolomic analysis including pre-analytical work (e.g., sample collection and storage), analytical work (e.g., sample analysis) and data analysis (e.g., feature extraction and quantification). Each one of them can influence the quantitative results significantly and thus should be performed with great care. Among them, the total sample amount or concentration of metabolites can be significantly different from one sample to another. Thus, it is critical to reduce or eliminate the effect of total sample amount variation on quantification of individual metabolites. In this review, we describe the importance of sample normalization in the analytical workflow with a focus on mass spectrometry (MS)-based platforms, discuss a number of methods recently reported in the literature and comment on their applicability in real world metabolomics applications. Sample normalization has sometimes been ignored in metabolomics, partially due to the lack of a convenient means of performing it. We show that several methods are now available, and sample normalization should be performed in quantitative metabolomics where the analyzed samples have significant variations in total sample amounts. Copyright © 2015 Elsevier B.V. All rights reserved.
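As a concrete illustration of the review's topic, the sketch below applies two normalization approaches commonly described in the metabolomics literature: total-sum (constant-sum) scaling and probabilistic quotient normalization (PQN). The intensity matrix is synthetic and the code is a generic sketch, not a method taken from this review.

```python
import numpy as np

# rows = samples, columns = metabolite intensities (synthetic data)
X = np.array([[100., 50., 25., 10.],
              [102., 48., 26.,  9.],    # replicate of sample 1
              [100., 50., 25., 500.]])  # one metabolite genuinely elevated

# 1) Total-sum (constant-sum) normalization
X_sum = X / X.sum(axis=1, keepdims=True)

# 2) Probabilistic quotient normalization on top of the sum-normalized data
ref = np.median(X_sum, axis=0)              # reference spectrum
quotients = X_sum / ref                     # feature-wise ratios per sample
dilution = np.median(quotients, axis=1)     # robust per-sample factor
X_pqn = X_sum / dilution[:, None]

print(np.round(X_sum, 3))   # sample 3's unchanged features look suppressed
print(np.round(X_pqn, 3))   # PQN restores them; only feature 4 stands out
```

The design point: sum scaling is skewed by any single strongly changed metabolite, whereas the median quotient is driven by the unchanged majority of features.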
Depth assisted compression of full parallax light fields
NASA Astrophysics Data System (ADS)
Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.
2015-03-01
Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views, and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling, followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only achieves improved rate-distortion performance but also better preserves the structure of the perceived light fields.
NASA Astrophysics Data System (ADS)
Sun, Chengjun; Jiang, Fenghua; Gao, Wei; Li, Xiaoyun; Yu, Yanzhen; Yin, Xiaofei; Wang, Yong; Ding, Haibing
2017-01-01
Detection of sulfur-oxidizing bacteria has largely been dependent on targeted gene sequencing technology or traditional cell cultivation, which usually takes from days to months to carry out. This clearly does not meet the requirements of analysis for time-sensitive samples and/or complicated environmental samples. Since energy-dispersive X-ray spectrometry (EDS) can be used to simultaneously detect multiple elements in a sample, including sulfur, with minimal sample treatment, this technology was applied to detect sulfur-oxidizing bacteria using their high sulfur content within the cell. This article describes the application of scanning electron microscopy imaging coupled with EDS mapping for quick detection of sulfur oxidizers in contaminated environmental water samples, with minimal sample handling. Scanning electron microscopy imaging revealed the existence of dense granules within the bacterial cells, while EDS identified large amounts of sulfur within them. EDS mapping localized the sulfur to these granules. Subsequent 16S rRNA gene sequencing showed that the bacteria detected in our samples belonged to the genus Chromatium, which are sulfur oxidizers. Thus, EDS mapping made it possible to identify sulfur oxidizers in environmental samples based on localized sulfur within their cells, within a short time (within 24 h of sampling). This technique has wide-ranging applications for the detection of sulfur bacteria in environmental water samples.
20 CFR 416.538 - Amount of underpayment or overpayment.
Code of Federal Regulations, 2010 CFR
2010-04-01
... difference between the amount paid to a recipient and the amount of payment actually due such recipient for a... difference between the amount paid and the amount actually due for that month. The period ends with the month... require no change with respect to a prior determination of overpayment or to the period relating to such...
2014-01-01
Background: Previously, we evaluated a minimally invasive epidermal lipid sampling method called skin scrub, which achieved reproducible and comparable results to skin scraping. The present study aimed at investigating regional variations in canine epidermal lipid composition using the skin scrub technique and its suitability for collecting skin lipids in dogs suffering from certain skin diseases. Eight different body sites (5 highly and 3 minimally predisposed to atopic lesions) were sampled by skin scrub in 8 control dogs with normal skin. Additionally, lesional and non-lesional skin was sampled from 12 atopic dogs and 4 dogs with other skin diseases by skin scrub. Lipid fractions were separated by high performance thin layer chromatography and analysed densitometrically. Results: No significant differences in total lipid content were found among the body sites tested in the control dogs. However, the pinna, lip and caudal back contained significantly lower concentrations of ceramides, whereas the palmar metacarpus and the axillary region contained significantly higher amounts of ceramides and cholesterol than most other body sites. The amounts of total lipids and ceramides, including all ceramide classes, were significantly lower in both lesional and non-lesional skin of atopic dogs compared to normal skin, with the reduction being more pronounced in lesional skin. The sampling by skin scrub was relatively painless and caused only slight erythema at the sampled areas but no oedema. Histological examinations of skin biopsies at 2 skin-scrubbed areas revealed a potential lipid extraction from the transition zone between stratum corneum and granulosum. Conclusions: The present study revealed regional variations in the epidermal lipid and ceramide composition in dogs without skin abnormalities but no connection between lipid composition and predilection sites for canine atopic dermatitis lesions. The skin scrub technique proved to be a practicable sampling method for canine epidermal lipids, yielded satisfactory results regarding alterations of skin lipid composition in canine atopic dermatitis, and might be suitable for epidermal lipid investigations of further canine skin diseases. Although the ceramide composition should be unaffected by the deeper lipid sampling of skin scrub compared to other sampling methods, further studies are required to determine methodological differences. PMID:25012966
Code of Federal Regulations, 2010 CFR
2010-10-01
... Regulations Relating to Public Lands BUREAU OF RECLAMATION, DEPARTMENT OF THE INTERIOR RECLAMATION RURAL WATER SUPPLY PROGRAM Cost-Sharing § 404.34 Can Reclamation reduce the amount of non-Federal cost-share required...
Analysis of amino acids in nectar from pitchers of Sarracenia purpurea (Sarraceniaceae).
Dress, W; Newell, S; Nastase, A; Ford, J
1997-12-01
Sarracenia purpurea L. (northern pitcher plant) is an insectivorous plant with extrafloral nectar that attracts insects to a water-filled pitfall trap. We identified and quantified the amino acids in extrafloral nectar produced by pitchers of S. purpurea. Nectar samples were collected from 32 pitchers using a wick-sampling technique. Samples were analyzed for amino acids with reverse-phase high-performance liquid chromatography with phenylisothiocyanate derivatization. Detectable amounts of amino acids were found in each of the 32 nectar samples tested. Mean number of amino acids in a nectar sample was 9 (SD = 2.2). No amino acid was detected in all 32 samples. Mean amount of amino acids in a nectar sample (i.e., amount per wick) was 351.4 ng (SD = 113.2). Nine amino acids occurred in 20 of the 32 samples (aspartic acid, cysteine, glutamic acid, glycine, histidine, hydroxyproline, methionine, serine, valine) averaging 263.4 ng (SD = 94.9), and accounting for ~75% of the total amino acid content. Nectar production may constitute a significant cost of carnivory since the nectar contains amino acids. However, some insects prefer nectar with amino acids and presence of amino acids may increase visitation and capture of insect prey.
Duplex sampling apparatus and method
Brown, Paul E.; Lloyd, Robert
1992-01-01
An improved apparatus is provided for sampling a gaseous mixture and for measuring mixture components. The apparatus includes two sampling containers connected in series serving as a duplex sampling apparatus. The apparatus is adapted to independently determine the amounts of condensable and noncondensable gases in admixture from a single sample. More specifically, a first container includes a first port capable of selectively connecting to and disconnecting from a sample source and a second port capable of selectively connecting to and disconnecting from a second container. A second container also includes a first port capable of selectively connecting to and disconnecting from the second port of the first container and a second port capable of selectively connecting to and disconnecting from a differential pressure source. By cooling a mixture sample in the first container, the condensable vapors form a liquid, leaving noncondensable gases either as free gases or dissolved in the liquid. The condensed liquid is heated to drive out dissolved noncondensable gases, and all the noncondensable gases are transferred to the second container. Then the first and second containers are separated from one another in order to separately determine the amount of noncondensable gases and the amount of condensable gases in the sample.
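Since the two-container design ultimately reduces to measuring an amount of gas from its state variables, a one-line ideal-gas calculation illustrates the final step. The pressure, volume and temperature below are made-up values, not figures from the patent.

```python
# Hypothetical worked example: amount of noncondensable gas in the second
# container from its measured state variables, via the ideal gas law.
R = 8.314462  # J/(mol K)

def moles(p_pa, v_m3, t_k):
    """Ideal-gas amount n = pV/(RT)."""
    return p_pa * v_m3 / (R * t_k)

n_noncond = moles(p_pa=2.0e4, v_m3=150e-6, t_k=295.0)   # 20 kPa in 150 mL
print(f"{n_noncond * 1e3:.3f} mmol noncondensable gas")  # ~1.223 mmol
```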
Empirically Optimized Flow Cytometric Immunoassay Validates Ambient Analyte Theory
Parpia, Zaheer A.; Kelso, David M.
2010-01-01
Ekins’ ambient analyte theory predicts, counterintuitively, that an immunoassay’s limit of detection can be improved by reducing the amount of capture antibody. In addition, it anticipates that results should be insensitive to the volume of sample as well as to the amount of capture antibody added. The objective of this study is to empirically validate all of the performance characteristics predicted by Ekins’ theory. Flow cytometric analysis was used to detect binding between a fluorescent ligand and capture microparticles since it can directly measure fractional occupancy, the primary response variable in ambient analyte theory. After experimentally determining ambient analyte conditions, comparisons were carried out between ambient and non-ambient assays in terms of their signal strengths, limits of detection, and their sensitivity to variations in reaction volume and number of particles. The critical number of binding sites required for an assay to be in the ambient analyte region was estimated to be 0.1·V·Kd. As predicted, such assays exhibited superior signal/noise levels and limits of detection, and were not affected by variations in sample volume and number of binding sites. When the signal detected measures fractional occupancy, ambient analyte theory is an excellent guide to developing assays with superior performance characteristics. PMID:20152793
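A quick plug-in of the reported criterion shows the scale involved; the Kd and reaction volume below are illustrative choices, not values from the study.

```python
# Hedged numeric illustration of the ambient-analyte criterion quoted above
# (binding sites < 0.1*V*Kd). All parameter values are hypothetical.
N_AVOGADRO = 6.02214076e23

def max_binding_sites(volume_l, kd_molar):
    """Critical number of capture sites for ambient-analyte conditions."""
    return 0.1 * volume_l * kd_molar * N_AVOGADRO

# e.g. a 100 uL reaction with a 1e-10 M antibody-ligand Kd
n_max = max_binding_sites(100e-6, 1e-10)
print(f"{n_max:.2e} sites")   # ~6e8 sites, i.e. about 1 fmol of capture antibody
```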
Phototargeting oral black-pigmented bacteria.
Soukos, Nikolaos S; Som, Sovanda; Abernethy, Abraham D; Ruggiero, Karriann; Dunham, Joshua; Lee, Chul; Doukas, Apostolos G; Goodson, J Max
2005-04-01
We have found that broadband light (380 to 520 nm) rapidly and selectively kills oral black-pigmented bacteria (BPB) in pure cultures and in dental plaque samples obtained from human subjects with chronic periodontitis. We hypothesize that this killing effect is a result of light excitation of their endogenous porphyrins. Cultures of Prevotella intermedia and P. nigrescens were killed by 4.2 J/cm2, whereas P. melaninogenica required 21 J/cm2. Exposure to light with a fluence of 42 J/cm2 produced 99% killing of P. gingivalis. High-performance liquid chromatography demonstrated the presence of various amounts of different porphyrin molecules in BPB. The amounts of endogenous porphyrin in BPB were 267 (P. intermedia), 47 (P. nigrescens), 41 (P. melaninogenica), and 2.2 (P. gingivalis) ng/mg. Analysis of bacteria in dental plaque samples by DNA-DNA hybridization for 40 taxa before and after phototherapy showed that the growth of the four BPB was reduced 2- and 3-fold after irradiation at energy fluences of 4.2 and 21 J/cm2, respectively, whereas the growth of the remaining 36 microorganisms was reduced 1.5-fold at both energy fluences. The present study suggests that intraoral light exposure may be used to control BPB growth and possibly benefit patients with periodontal disease.
Zarejousheghani, Mashaalah; Fiedler, Petra; Möder, Monika; Borsdorf, Helko
2014-11-01
A novel approach for the selective extraction of organic target compounds from water samples has been developed using a mixed-bed solid phase extraction (mixed-bed SPE) technique. The molecularly imprinted polymer (MIP) particles are embedded in a network of silica gel to form a stable uniform porous bed. The capabilities of this method are demonstrated using atrazine as a model compound. In comparison to conventional molecularly imprinted solid phase extraction (MISPE), the proposed mixed-bed MISPE method in combination with gas chromatography-mass spectrometry (GC-MS) analysis enables more reproducible and efficient extraction performance. After optimization of operational parameters (polymerization conditions, bed matrix ingredients, polymer to silica gel ratio, pH of the sample solution, breakthrough volume plus washing and elution conditions), improved LODs (1.34 µg L(-1) in comparison to 2.25 µg L(-1) obtained using MISPE) and limits of quantification (4.5 µg L(-1) for mixed-bed MISPE and 7.5 µg L(-1) for MISPE) were observed for the analysis of atrazine. Furthermore, the relative standard deviations (RSDs) for atrazine at concentrations between 5 and 200 µg L(-1) ranged between 1.8% and 6.3% compared to MISPE (3.5-12.1%). Additionally, the column-to-column reproducibility for the mixed-bed MISPE was significantly improved to 16.1%, compared with the 53% observed for MISPE. Due to the reduced bed-mass sorbent and at optimized conditions, the total amount of organic solvents required for conditioning, washing and elution steps was reduced from more than 25 mL for conventional MISPE to less than 2 mL for mixed-bed MISPE. Besides reduced organic solvent consumption, the total sample preparation time of the mixed-bed MISPE method relative to the conventional MISPE was reduced from more than 20 min to less than 10 min. The amount of organic solvent required for complete elution diminished from 3 mL (conventional MISPE) to less than 0.4 mL with the mixed-bed technique, which shows its inherent potential for online operation with an analytical instrument. In order to evaluate the selectivity and matrix effects of the developed mixed-bed MISPE method, it was applied as an extraction technique for atrazine from environmental wastewater and river water samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Continuous Representation Learning via User Feedback
DOE Office of Scientific and Technical Information (OSTI.GOV)
Representation learning is a deep-learning-based technique for extracting features from data for the purpose of machine learning. This normally requires a large amount of data, on the order of tens of thousands to millions of samples, to properly train the deep neural network. This invention is a system for continuous representation learning, where the system may be improved with a small number of additional samples (order 10-100). The unique characteristics of this invention include a human-computer feedback component, where a human assesses the quality of the current representation and then provides a better representation to the system. The system then mixes the new data with old training examples to avoid overfitting and improve overall performance of the system. The model can be exported and shared with other users, and it may be applied to additional images the system hasn't seen before.
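A minimal sketch of the mixing step described above, assuming PyTorch and a small classifier standing in for the representation network; the dataset sizes, architecture, and hyperparameters are all placeholders, not details of the invention.

```python
# Sketch: fine-tune on a few feedback samples mixed with replayed old data,
# so a small update (order 10-100 samples) does not cause overfitting or
# forgetting. Everything below is illustrative.
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

def continue_training(model, old_ds, new_ds, epochs=5, lr=1e-4):
    mixed = ConcatDataset([old_ds, new_ds])            # replay old + new data
    loader = DataLoader(mixed, batch_size=32, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model

# Toy data: 1000 old samples, 50 new user-corrected samples
old_ds = TensorDataset(torch.randn(1000, 16), torch.randint(0, 4, (1000,)))
new_ds = TensorDataset(torch.randn(50, 16), torch.randint(0, 4, (50,)))
model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 4))
model = continue_training(model, old_ds, new_ds)
```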
Cozzolino, Daniel
2015-03-30
Vibrational spectroscopy encompasses a number of techniques and methods including ultra-violet, visible, Fourier transform infrared or mid infrared, near infrared and Raman spectroscopy. The use and application of spectroscopy generates spectra containing hundreds of variables (absorbances at each wavenumber or wavelength), resulting in the production of large data sets representing the chemical and biochemical wine fingerprint. Multivariate data analysis techniques are then required to handle the large amount of data generated, in order to interpret the spectra in a meaningful way and to develop a specific application. This paper focuses on developments in sample presentation and the main sources of error when vibrational spectroscopy methods are applied in wine analysis. Recent and novel applications will be discussed as examples of these developments. © 2014 Society of Chemical Industry.
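As one example of the multivariate step this paper calls for, the sketch below mean-centers a synthetic spectral matrix and compresses it with principal component analysis; it is a generic illustration, not a method taken from the paper.

```python
# PCA on a matrix of spectra: hundreds of correlated wavelength variables
# are compressed into a few latent scores. The spectra here are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_samples, n_channels = 40, 600                 # e.g. absorbance channels
spectra = rng.normal(size=(n_samples, n_channels)).cumsum(axis=1)  # smooth-ish

X = spectra - spectra.mean(axis=0)              # mean-center across samples
pca = PCA(n_components=3).fit(X)
scores = pca.transform(X)                       # 40 samples x 3 scores

print(pca.explained_variance_ratio_)            # variance captured per component
```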
Analytical aspects of hydrogen exchange mass spectrometry
Engen, John R.; Wales, Thomas E.
2016-01-01
The analytical aspects of measuring hydrogen exchange by mass spectrometry are reviewed. The nature of analytical selectivity in hydrogen exchange is described followed by review of the analytical tools required to accomplish fragmentation, separation, and the mass spectrometry measurements under restrictive exchange quench conditions. In contrast to analytical quantitation that relies on measurements of peak intensity or area, quantitation in hydrogen exchange mass spectrometry depends on measuring a mass change with respect to an undeuterated or deuterated control, resulting in a value between zero and the maximum amount of deuterium that could be incorporated. Reliable quantitation is a function of experimental fidelity and to achieve high measurement reproducibility, a large number of experimental variables must be controlled during sample preparation and analysis. The method also reports on important qualitative aspects of the sample, including conformational heterogeneity and population dynamics. PMID:26048552
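The mass-difference quantitation described above can be stated in two lines of arithmetic; the peptide centroid masses in this sketch are hypothetical.

```python
# Deuterium uptake is a mass *difference* against controls, not a peak area.
def deuterium_uptake(m_t, m_undeut):
    """Absolute uptake (Da) at labeling time t."""
    return m_t - m_undeut

def relative_uptake(m_t, m_undeut, m_maxdeut):
    """Uptake as a fraction of the maximum exchangeable deuterium."""
    return (m_t - m_undeut) / (m_maxdeut - m_undeut)

m0, mt, mmax = 1520.74, 1524.89, 1528.61   # hypothetical centroid masses (Da)
print(deuterium_uptake(mt, m0))                  # 4.15 Da incorporated
print(round(relative_uptake(mt, m0, mmax), 3))   # ~0.527 of maximum
```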
NASA Technical Reports Server (NTRS)
Hubbs, J. E.; Nofziger, M. J.; Bartell, F. O.; Wolfe, W. L.; Brooks, L. D.
1982-01-01
The Infrared Astronomical Satellite (IRAS) telescope has an outer shield on it which is used to reduce the amount of thermal radiation that enters the telescope. The shield forms the first part of the baffle structure which reduces the photon incidence on the focal plane. It was, therefore, necessary to model this structure for scattering, and a required input for such modeling is the scattering characteristic of this surface. Attention is given to the measurement of the bidirectional reflectance distribution function (BRDF), the reflected radiance divided by the incident irradiance at 10.6 micrometers, 118 micrometers, and at several angles of incidence. Visual observation of the gold sample shows that there are striations which line up in a single direction. The data were, therefore, taken with the sample oriented in each of two directions.
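For reference, the BRDF named above is conventionally defined as the differential reflected radiance per unit incident irradiance; this standard definition (in units of sr^-1) is supplied here for clarity, and the notation is not from the original report.

```latex
f_r(\theta_i,\phi_i;\theta_r,\phi_r)
  = \frac{\mathrm{d}L_r(\theta_r,\phi_r)}{\mathrm{d}E_i(\theta_i,\phi_i)}
  = \frac{\mathrm{d}L_r(\theta_r,\phi_r)}
         {L_i(\theta_i,\phi_i)\,\cos\theta_i\,\mathrm{d}\omega_i}
  \qquad [\mathrm{sr}^{-1}]
```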
Water-management models in Florida from ERTS-1 data
NASA Technical Reports Server (NTRS)
Higer, A. L. (Principal Investigator); Rogers, R. H.; Coker, A. E.; Cordes, E. H.
1975-01-01
The author has identified the following significant results. The usefulness of ERTS-1 in improving the overall effectiveness of collecting and disseminating data was evaluated. ERTS MSS imagery and in situ monitoring by DCS were used to evaluate their separate and combined capabilities. Twenty data collection platforms were established in southern Florida. Water level and rainfall measurements were collected and disseminated to users in less than 2 hours, a significant improvement over conventional techniques requiring 2 months. ERTS imagery was found to significantly enhance the utility of ground measurements. Water stage was correlated with water surface areas from imagery in order to obtain water stage-volume relations. Imagery provided an economical basis for extrapolating water parameters from the point samples to unsampled areas and provided a synoptic view of water mass boundaries that no amount of ground sampling or monitoring could provide.
Effect of purity on adsorption capacities of a Mars-like clay mineral at different pressures
NASA Technical Reports Server (NTRS)
Jenkins, Traci; Mcdoniel, Bridgett; Bustin, Roberta; Allton, Judith H.
1992-01-01
There has been considerable interest in adsorption of carbon dioxide on Mars-like clay minerals. Some estimates of the carbon dioxide reservoir capacity of the martian regolith were calculated from the amount of carbon dioxide adsorbed on the iron-rich smectite nontronite under martian conditions. The adsorption capacity of pure nontronite could place upper limits on the regolith carbon dioxide reservoir, both at present martian atmospheric pressure and at the postulated higher pressures required to permit liquid water on the surface. Adsorption of carbon dioxide on a Clay Minerals Society standard containing nontronite was studied over a wide range of pressures in the absence of water. Similar experiments were conducted on the pure nontronite extracted from the natural sample. Heating curves were obtained to help characterize and determine the purity of the clay sample.
NASA Technical Reports Server (NTRS)
Nebenfuhr, A.; Lomax, T. L.
1998-01-01
We have developed an improved method for determination of gene expression levels with RT-PCR. The procedure is rapid and does not require extensive optimization or densitometric analysis. Since the detection of individual transcripts is PCR-based, small amounts of tissue samples are sufficient for the analysis of expression patterns in large gene families. Using this method, we were able to rapidly screen nine members of the Aux/IAA family of auxin-responsive genes and identify those genes which vary in message abundance in a tissue- and light-specific manner. While not offering the accuracy of conventional semi-quantitative or competitive RT-PCR, our method allows quick screening of large numbers of genes in a wide range of RNA samples with just a thermal cycler and standard gel analysis equipment.
Influence of persistent exchangeable oxygen on biogenic silica δ18O in deep sea cores
NASA Astrophysics Data System (ADS)
Menicucci, A. J.; Spero, H. J.
2016-12-01
The removal of exchangeable oxygen from biogenic opal prior to IRMS analysis is critical during sample preparation. Exchangeable oxygen is found in the form of hydroxyl and between defects within the amorphous silicate lattice structure. Typical analytical procedures utilize a variety of dehydroxylation methods to eliminate this exchangeable oxygen, including vacuum dehydroxylation and prefluorination. Such methods are generally considered sufficient for elimination of non-lattice bound oxygen that would obfuscate environmental oxygen isotopic signals contained within the silicate tetrahedra. δ18O data are then empirically calibrated against modern hydrographic data, and applied down core in paleoceanographic applications. We have conducted a suite of experiments on purified marine opal samples using the new microfluorination method (Menicucci et al., 2013). Our data demonstrate that the amount of exchangeable oxygen in biogenic opal decreases as sample age/depth in core increases. These changes are not accounted for by current researchers. Further, our experimental data indicate that vacuum dehydroxylation does not eliminate all exchangeable oxygen, even after hydroxyl is undetectable. We have conducted experiments to quantify the amount of time necessary to ensure vacuum dehydroxylation has eliminated exchangeable oxygen so that opal samples are stable prior to δ18Odiatom analysis. Our experiments suggest that previously generated opal δ18O data may contain a variable down-core offset due to the presence of exchangeable, non-lattice bound oxygen sources. Our experiments indicate that diatom silica requires dehydroxylation for ≥ 44 hours at 1060 °C to quantitatively remove all non-lattice bound oxygen. Further, this variable amount of exchangeable oxygen may be responsible for some of the disagreement between existing empirical calibrations based on core-top diatom frustule remains. Analysis of δ18Odiatom values after this long vacuum dehydroxylation time is necessary for quantitative comparisons of stable isotopic values across geologic time periods. Menicucci, A. J., et al. (2013). "Oxygen isotope analyses of biogenic opal and quartz using a novel microfluorination technique." Rapid Communications in Mass Spectrometry 27(16): 1873-1881
Acid-base titrations using microfluidic paper-based analytical devices.
Karita, Shingo; Kaneta, Takashi
2014-12-16
Rapid and simple acid-base titration was accomplished using a novel microfluidic paper-based analytical device (μPAD). The μPAD was fabricated by wax printing and consisted of ten reservoirs for reaction and detection. The reaction reservoirs contained various amounts of a primary standard substance, potassium hydrogen phthalate (KHPth), whereas a constant amount of phenolphthalein was added to all the detection reservoirs. A sample solution containing NaOH was dropped onto the center of the μPAD and was allowed to spread to the reaction reservoirs where the KHPth neutralized it. When the amount of NaOH exceeded that of the KHPth in the reaction reservoirs, unneutralized hydroxide ion penetrated the detection reservoirs, resulting in a color reaction from the phenolphthalein. Therefore, the number of the detection reservoirs with no color change determined the concentration of the NaOH in the sample solution. The titration was completed within 1 min by visually determining the end point, which required neither instrumentation nor software. The volumes of the KHPth and phenolphthalein solutions added to the corresponding reservoirs were optimized to obtain reproducible and accurate results for the concentration of NaOH. The μPADs determined the concentration of NaOH over a range spanning two orders of magnitude (0.01 to 1 M). An acid sample, HCl, was also determined using Na2CO3 as a primary standard substance instead of KHPth. Furthermore, the μPAD was applicable to the titrations of nitric acid, sulfuric acid, acetic acid, and ammonia solutions. The μPADs were stable for more than 1 month when stored in darkness at room temperature, although this was reduced to only 5 days under daylight conditions. The analysis of acidic hot spring water was also demonstrated in the field using the μPAD, and the results agreed well with those obtained by classic acid-base titration.
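The visual readout lends itself to a short sketch: the NaOH amount per reservoir is bracketed between the largest KHPth loading that was overwhelmed (colored) and the smallest that was not. The reservoir loadings and the assumed equal split of the sample drop below are hypothetical, not the paper's values.

```python
# Sketch of the muPAD readout logic: a detection reservoir turns pink only
# when the NaOH delivered there exceeds its KHPth loading. Values made up.
khpth_nmol = [10, 20, 40, 80, 160, 320, 640, 1280, 2560, 5120]
sample_ul_per_reservoir = 1.0   # assumed equal split of the applied drop

def bracket_naoh(colored):
    """Bounds (nmol per reservoir) on NaOH from the color pattern."""
    exceeded = [n for n, c in zip(khpth_nmol, colored) if c]
    not_exceeded = [n for n, c in zip(khpth_nmol, colored) if not c]
    lower = max(exceeded) if exceeded else 0.0
    upper = min(not_exceeded) if not_exceeded else float("inf")
    return lower, upper

colored = [True] * 4 + [False] * 6        # four lowest loadings overwhelmed
lo, hi = bracket_naoh(colored)
# nmol per uL equals mmol per L, hence the 1e-3 factor to molarity
print(lo / sample_ul_per_reservoir * 1e-3, "-",
      hi / sample_ul_per_reservoir * 1e-3, "M NaOH")   # 0.08 - 0.16 M
```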
Zhang, Zhenbin; Dovichi, Norman J
2018-02-25
The effects of MS1 injection time, MS2 injection time, dynamic exclusion time, intensity threshold, and isolation width were investigated on the numbers of peptide and protein identifications for single-shot bottom-up proteomics analysis using CZE-MS/MS analysis of a Xenopus laevis tryptic digest. An electrokinetically pumped nanospray interface was used to couple a linear-polyacrylamide coated capillary to a Q Exactive HF mass spectrometer. A sensitive method that used a 1.4 Th isolation width, 60,000 MS2 resolution, 110 ms MS2 injection time, and a top 7 fragmentation produced the largest number of identifications when the CZE loading amount was less than 100 ng. A programmable autogain control method (pAGC) that used a 1.4 Th isolation width, 15,000 MS2 resolution, 110 ms MS2 injection time, and top 10 fragmentation produced the largest number of identifications for CZE loading amounts greater than 100 ng; 7218 unique peptides and 1653 protein groups were identified from 200 ng by using the pAGC method. The effect of mass spectrometer conditions on the performance of UPLC-MS/MS was also investigated. A fast method that used a 1.4 Th isolation width, 30,000 MS2 resolution, 45 ms MS2 injection time, and top 12 fragmentation produced the largest number of identifications for 200 ng UPLC loading amount (6025 unique peptides and 1501 protein groups). This is the first report where the identification number for CZE surpasses that of the UPLC at the 200 ng loading level. However, more peptides (11476) and protein groups (2378) were identified by using UPLC-MS/MS when the sample loading amount was increased to 2 μg with the fast method. To exploit the fast scan speed of the Q-Exactive HF mass spectrometer, higher sample loading amounts are required for single-shot bottom-up proteomics analysis using CZE-MS/MS. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Jarosch, Klaus; Doolette, Ashlea; Smernik, Ronald; Frossard, Emmanuel; Bünemann, Else K.
2014-05-01
Solution 31P NMR spectroscopy is a powerful tool for the characterisation and quantification of organic P classes in soil. Potential limitations are due to costs, equipment accessibility and the requirement of relatively large amounts of sample. A recent alternative approach for the quantification of specific organic P classes is the use of substrate-specific phosphohydrolase enzymes which cleave the inorganic orthophosphate from the organic moiety. The released orthophosphate is detectable by colorimetry. Conclusions about the hydrolysed class of organic P can be made based on the comparison of inorganic P concentrations in enzymatically treated and untreated samples. The aim of this study was to test the applicability of enzyme addition assays for the characterisation of organic P classes on a) NaOH-EDTA extracts, b) soil:water filtrates (0.2 μm) and c) soil:water suspensions. The organic P classes in NaOH-EDTA extracts were also determined by 31P NMR spectroscopy, enabling a comparison between methods. Ten topsoil samples from four continents (five cambisols, two ferralsols, two luvisols and one lixisol) with varying total P content (83 - 1,1560 mg kg-1), pH(H2O) (4.2 - 8.0) and land management (grassland or cropped land) were analysed. Four different classes of organic P were determined by the enzyme addition assay: 1) monoester-like P (by an acid phosphatase known to hydrolyse simple monoesters, pyrophosphate and ATP), 2) DNA-like P (by a nuclease in combination with an acid phosphatase), 3) inositol phosphate-like P (by a phytase known to hydrolyse all monoester-like P plus myo-inositol hexakisphosphate and scyllo-inositol hexakisphosphate) and 4) enzyme-stable P (organic P forms not hydrolysed enzymatically). In the ten topsoil samples, NaOH-EDTA-extractable organic P ranged from 6 - 1,115 mg P kg-1 soil. Of this, 33 - 92% was enzyme labile, with inositol phosphate-like P being the largest organic P class in most soils (15 - 51%), followed by monoester-like P (10 - 47%) and DNA-like P (0 - 15%). The four soil organic P classes detected by either 31P NMR spectroscopy or enzyme addition assays were well correlated with each other (R² = 0.93 - 0.99). In soil:water filtrates, 0.1 - 4.1 mg enzyme-labile P kg-1 soil were detected, consisting mainly of inositol phosphate-like P. In some soils, a low absolute amount of water-soluble organic P hindered a more detailed characterisation. In soil:water suspensions, enzyme-labile organic P ranged from 4.3 - 12.6 mg P kg-1 soil. However, the enzyme addition assay was applicable to only three soils, since in the other soils i) added enzymes were partly inhibited in soil:water suspensions and ii) the hydrolysis of organic P classes by soil-intrinsic enzymes could not be accounted for. In conclusion, enzyme addition assays appear to be a promising approach for a rapid determination of four main soil organic P classes in NaOH-EDTA extracts. In particular, the small required sample volume (< 1 mL) and the relatively simple instrumentation facilitate rapid and cheap analysis of these extracts. Application of this method is also possible on soil:water filtrates, but low amounts of organic P may hinder detailed analysis. Key words: soil organic phosphorus characterisation, enzyme addition assays, 31P NMR spectroscopy, soil suspensions, soil filtrate
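The class-assignment arithmetic behind the assay can be sketched directly. The released-P values below are illustrative, and the overlap corrections follow the hydrolysis specificities listed above rather than the study's data.

```python
# Sketch of enzyme-addition bookkeeping: each organic P class is the
# orthophosphate released by a treatment minus the relevant baseline.
# All values are hypothetical (mg P per kg soil).
released = {                      # molybdate-reactive P after each treatment
    "control": 4.0,               # no enzyme added
    "acid_phosphatase": 19.0,
    "nuclease_plus_phosphatase": 24.5,
    "phytase": 41.0,
}
total_organic_p = 80.0            # e.g. total organic P in the extract

monoester_like = released["acid_phosphatase"] - released["control"]
dna_like = released["nuclease_plus_phosphatase"] - released["acid_phosphatase"]
# phytase hydrolyses monoester-like P *plus* the inositol hexakisphosphates,
# so the inositol class is its release in excess of the acid phosphatase
inositol_like = released["phytase"] - released["acid_phosphatase"]
enzyme_stable = total_organic_p - (monoester_like + dna_like + inositol_like)

print(monoester_like, dna_like, inositol_like, enzyme_stable)  # 15.0 5.5 22.0 37.5
```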
Fluid sampling apparatus and method
Yeamans, David R.
1998-01-01
Incorporation of a bellows in a sampling syringe eliminates ingress of contaminants, permits replication of amounts and compression of multiple sample injections, and enables remote sampling for off-site analysis.
Kellogg, Joshua J; Wallace, Emily D; Graf, Tyler N; Oberlies, Nicholas H; Cech, Nadja B
2017-10-25
Metabolomics has emerged as an important analytical technique for multiple applications. The value of information obtained from metabolomics analysis depends on the degree to which the entire metabolome is present and the reliability of sample treatment to ensure reproducibility across the study. The purpose of this study was to compare methods of preparing complex botanical extract samples prior to metabolomics profiling. Two extraction methodologies, accelerated solvent extraction and a conventional solvent maceration, were compared using commercial green tea [Camellia sinensis (L.) Kuntze (Theaceae)] products as a test case. The accelerated solvent protocol was first evaluated to ascertain critical factors influencing extraction using a D-optimal experimental design study. The accelerated solvent and conventional extraction methods yielded similar metabolite profiles for the green tea samples studied. The accelerated solvent extraction yielded higher total amounts of extracted catechins, was more reproducible, and required less active bench time to prepare the samples. This study demonstrates the effectiveness of accelerated solvent as an efficient methodology for metabolomics studies. Copyright © 2017. Published by Elsevier B.V.
40 CFR 31.52 - Collection of amounts due.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Section 31.52 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND COOPERATIVE AGREEMENTS TO STATE AND LOCAL GOVERNMENTS After-the-Grant Requirements § 31.52 Collection of amounts due. (a) Any funds paid to a grantee in...
18 CFR 284.270 - Reporting requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Reporting requirements. 284.270 Section 284.270 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION...; (4) The estimated total amount and average daily amount of emergency natural gas to be purchased...
18 CFR 284.270 - Reporting requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Reporting requirements. 284.270 Section 284.270 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... the emergency; (4) The estimated total amount and average daily amount of emergency natural gas to be...
18 CFR 284.270 - Reporting requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Reporting requirements. 284.270 Section 284.270 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION...; (4) The estimated total amount and average daily amount of emergency natural gas to be purchased...
18 CFR 284.270 - Reporting requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Reporting requirements. 284.270 Section 284.270 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... the emergency; (4) The estimated total amount and average daily amount of emergency natural gas to be...
18 CFR 284.270 - Reporting requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Reporting requirements. 284.270 Section 284.270 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... the emergency; (4) The estimated total amount and average daily amount of emergency natural gas to be...
Fluorometric determination of the DNA concentration in municipal drinking water.
McCoy, W F; Olson, B H
1985-01-01
DNA concentrations in municipal drinking water samples were measured by fluorometry, using Hoechst 33258 fluorochrome. The concentration, extraction, and detection methods used were adapted from existing techniques. The method is reproducible, fast, accurate, and simple. The amounts of DNA per cell for five different bacterial isolates obtained from drinking water samples were determined by measuring DNA concentration and total cell concentration (acridine orange epifluorescence direct cell counting) in stationary-phase pure cultures. The relationship between DNA concentration and epifluorescence total direct cell concentration in 11 different drinking water samples was linear and positive; the amounts of DNA per cell in these samples did not differ significantly from the amounts in pure culture isolates. We found significant linear correlations between DNA concentration and colony-forming unit concentration, as well as between epifluorescence direct cell counts and colony-forming unit concentration. DNA concentration measurements of municipal drinking water samples appear to monitor changes in bacteriological quality at least as well as total heterotrophic plate counting and epifluorescence direct cell counting. PMID:3890737
Kwak, Byung-Man; Jeong, In-Seek; Lee, Moon-Seok; Ahn, Jang-Hyuk; Park, Jong-Su
2014-12-15
A rapid and simple sample preparation method for vitamin D3 (cholecalciferol) was developed for emulsified dairy products such as milk-based infant formulas. A sample was mixed in a 50 mL centrifuge tube with the same amount of water and isopropyl alcohol to achieve chemical extraction. Ammonium sulfate was used to induce phase separation. Saponification without heating was performed in the sample tube by adding KOH, NaCl, and NH3. Vitamin D3 was then separated and quantified using liquid chromatography-tandem mass spectrometry. The results of added-recovery tests were in the range 93.11-110.65%, with relative standard deviations between 2.66% and 2.93%. The results, compared to those obtained using a certified reference material (SRM 1849a), were within the range of the certified values. This method could be implemented in many laboratories where savings in time and labour are required. Copyright © 2014 Elsevier Ltd. All rights reserved.
Drop-on-Demand Sample Delivery for Studying Biocatalysts in Action at XFELs
Fuller, Franklin D.; Gul, Sheraz; Chatterjee, Ruchira; Burgie, Ernest S.; Young, Iris D.; Lebrette, Hugo; Srinivas, Vivek; Brewster, Aaron S.; Michels-Clark, Tara; Clinger, Jonathan A.; Andi, Babak; Ibrahim, Mohamed; Pastor, Ernest; de Lichtenberg, Casper; Hussein, Rana; Pollock, Christopher J.; Zhang, Miao; Stan, Claudiu A.; Kroll, Thomas; Fransson, Thomas; Weninger, Clemens; Kubin, Markus; Aller, Pierre; Lassalle, Louise; Bräuer, Philipp; Miller, Mitchell D.; Amin, Muhamed; Koroidov, Sergey; Roessler, Christian G.; Allaire, Marc; Sierra, Raymond G.; Docker, Peter T.; Glownia, James M.; Nelson, Silke; Koglin, Jason E.; Zhu, Diling; Chollet, Matthieu; Song, Sanghoon; Lemke, Henrik; Liang, Mengning; Sokaras, Dimosthenis; Alonso-Mori, Roberto; Zouni, Athina; Messinger, Johannes; Bergmann, Uwe; Boal, Amie K.; Bollinger, J. Martin; Krebs, Carsten; Högbom, Martin; Phillips, George N.; Vierstra, Richard D.; Sauter, Nicholas K.; Orville, Allen M.; Kern, Jan; Yachandra, Vittal K.; Yano, Junko
2017-01-01
X-ray crystallography at X-ray free-electron laser (XFEL) sources is a powerful method for studying macromolecules at biologically relevant temperatures. Moreover, when combined with complementary techniques like X-ray emission spectroscopy (XES), both global structures and chemical properties of metalloenzymes can be obtained concurrently, providing new insights into the interplay between the protein structure/dynamics and chemistry at an active site. Implementing such a multimodal approach can be compromised by conflicting requirements to optimize each individual method. In particular, the method used for sample delivery greatly impacts the data quality. We present here a new, robust way of delivering controlled sample amounts on demand using acoustic droplet ejection coupled with a conveyor belt drive that is optimized for crystallography and spectroscopy measurements of photochemical and chemical reactions over a wide range of time scales. Studies with photosystem II, the phytochrome photoreceptor, and ribonucleotide reductase R2 illustrate the power and versatility of this method. PMID:28250468
Determination of residual solvents in pharmaceuticals by thermal desorption-GC/MS.
Hashimoto, K; Urakami, K; Fujiwara, Y; Terada, S; Watanabe, C
2001-05-01
A novel method for the determination of residual solvents in pharmaceuticals by thermal desorption (TD)-GC/MS has been established. A programmed-temperature pyrolyzer (double-shot pyrolyzer) is applied for the TD. This method does not require any sample pretreatment and needs only very small amounts of sample. Solvents desorbed directly from intact pharmaceuticals (ca. 1 mg) in the desorption cup (5 mm x 3.8 mm i.d.) were cryofocused at the head of a capillary column prior to GC/MS analysis. The desorption temperature was set individually for each sample, at a point about 20 degrees C higher than its melting point, and held for 3 min. The analytical results for 7 different pharmaceuticals were in agreement with those obtained by direct injection (DI) of solutions, following USP XXIII. The proposed TD-GC/MS method was demonstrated to be very useful for the identification and quantification of residual solvents. Furthermore, this method is simple, allows rapid analysis and gives good repeatability.
Zheng, Jinkai; Fang, Xiang; Cao, Yong; Xiao, Hang; He, Lili
2013-01-01
To develop an accurate and convenient method for monitoring the production of the citrus-derived bioactive 5-demethylnobiletin from the demethylation reaction of nobiletin, we compared surface enhanced Raman spectroscopy (SERS) methods with a conventional HPLC method. Our results show that both the substrate-based and solution-based SERS methods correlated very well with the HPLC method. The solution method produced a lower root mean square error of calibration and a higher correlation coefficient than the substrate method. The solution method utilized an ‘affinity chromatography’-like procedure to separate the reactant nobiletin from the product 5-demethylnobiletin based on their different binding affinities to the silver dendrites. The substrate method was simpler and faster for collecting the SERS ‘fingerprint’ spectra of the samples, as no incubation between samples and silver was needed and only trace amounts of sample were required. Our results demonstrated that the SERS methods were superior to the HPLC method in conveniently and rapidly characterizing and quantifying 5-demethylnobiletin production. PMID:23885986
Schminke, G; Seubert, A
2000-02-01
An established method for the determination of the disinfection by-product bromate is ion chromatography (IC). This paper presents a comparison of three IC methods based on either conductivity detection (IC-CD), a post-column reaction (IC-PCR-VIS) or on-line coupling with inductively coupled plasma mass spectrometry (IC-ICP-MS). Main characteristics of the methods such as method detection limits (MDL), time of analysis and sample pretreatment are compared, and applicability for routine analysis is critically discussed. The most sensitive and rugged method is IC-ICP-MS, followed by IC-PCR-VIS. The photometric detection is subject to a minor interference in real-world samples, presumably caused by carbonate. The IC-CD method showed the lowest sensitivity and was the slowest of the methods compared; in addition, it requires sample pretreatment. The highest amount of information is delivered by IC-PCR-VIS, which allows the simultaneous determination of the seven standard anions and bromate.
Ferromagnetic resonance studies of lunar core stratigraphy
NASA Technical Reports Server (NTRS)
Housley, R. M.; Cirlin, E. H.; Goldberg, I. B.; Crowe, H.
1976-01-01
We first review the evidence which links the characteristic ferromagnetic resonance observed in lunar fines samples with agglutinatic glass produced primarily by micrometeorite impacts and present new results on Apollo 15, 16, and 17 breccias which support this link by showing that only regolith breccias contribute significantly to the characteristic FMR intensity. We then provide a calibration of the amount of Fe metal in the form of uniformly magnetized spheres required to give our observed FMR intensities and discuss the theoretical magnetic behavior to be expected of Fe spheres as a function of size. Finally, we present FMR results on samples from every 5 mm interval in the core segments 60003, 60009, and 70009. These results lead us to suggest: (1) that secondary mixing may generally be extensive during regolith deposition so that buried regolith surfaces are hard to recognize or define; and (2) that local grinding of rocks and pebbles during deposition may lead to short scale fluctuations in grain size, composition, and apparent exposure age of samples.
Kline, Margaret C; Duewer, David L; Travis, John C; Smith, Melody V; Redman, Janette W; Vallone, Peter M; Decker, Amy E; Butler, John M
2009-06-01
Modern highly multiplexed short tandem repeat (STR) assays used by the forensic human-identity community require tight control of the initial amount of sample DNA amplified in the polymerase chain reaction (PCR) process. This, in turn, requires the ability to reproducibly measure the concentration of human DNA, [DNA], in a sample extract. Quantitative PCR (qPCR) techniques can determine the number of intact stretches of DNA of specified nucleotide sequence in an extremely small sample; however, these assays must be calibrated with DNA extracts of well-characterized and stable composition. By 2004, studies coordinated by or reported to the National Institute of Standards and Technology (NIST) indicated that a well-characterized, stable human DNA quantitation certified reference material (CRM) could help the forensic community reduce within- and among-laboratory quantitation variability. To ensure that the stability of such a quantitation standard can be monitored and that, if and when required, equivalent replacement materials can be prepared, a measurement of some stable quantity directly related to [DNA] is required. Using a long-established conventional relationship linking optical density (properly designated as decadic attenuance) at 260 nm with [DNA] in aqueous solution, NIST Standard Reference Material (SRM) 2372 Human DNA Quantitation Standard was issued in October 2007. This SRM consists of three quite different DNA extracts: a single-source male, a multiple-source female, and a mixture of male and female sources. All three SRM components have very similar optical densities, and thus very similar conventional [DNA]. The materials perform very similarly in several widely used gender-neutral assays, demonstrating that the combination of appropriate preparation methods and metrologically sound spectrophotometric measurements enables the preparation and certification of quantitation [DNA] standards that are both maintainable and of practical utility.
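The conventional relationship mentioned above is routinely applied as a single scale factor; the sketch below uses the common convention that an optical density of 1.0 at 260 nm over a 1 cm path corresponds to roughly 50 µg/mL of double-stranded DNA, with made-up measured values.

```python
# Conventional [DNA] estimate from absorbance at 260 nm (dsDNA convention).
DSDNA_FACTOR_UG_PER_ML = 50.0   # ug/mL per absorbance unit, 1 cm path

def dna_conc_ug_per_ml(a260, dilution_factor=1.0, path_cm=1.0):
    """Estimate dsDNA concentration of the undiluted stock."""
    return a260 / path_cm * DSDNA_FACTOR_UG_PER_ML * dilution_factor

# e.g. a 10-fold diluted extract reading A260 = 0.105 (hypothetical)
print(dna_conc_ug_per_ml(0.105, dilution_factor=10))   # ~52.5 ug/mL stock
```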
What Do Adolescents Know about Aging?
ERIC Educational Resources Information Center
Steitz, Jean A.; Verner, Betty S.
Increasing the amount of contact with older persons is often proposed as a way to inform young people about aging. This study compared adolescents' knowledge of aging with the amount and quality of contact they had with an older person and compared knowledge of aging in a 1978 sample of adolescents with knowledge in a 1985 sample. For the 1985…
Smith, Joseph M.; Wells, Sarah P.; Mather, Martha E.; Muth, Robert M.
2014-01-01
When researchers and managers initiate sampling on a new stream or river system, they do not know how effective each gear type is and whether their sampling effort is adequate. Although the types and amount of gear may be different for other studies, systems, and research questions, the five-step process described here for making sampling decisions and evaluating sampling efficiency can be applied widely to any system to restore, manage, and conserve aquatic ecosystems. It is believed that incorporating this gear-evaluation process into a wide variety of studies and ecosystems will increase rigour within and across aquatic biodiversity studies.
Exploring the feasibility of obtaining mifepristone and misoprostol from the internet.
Murtagh, Chloe; Wells, Elisa; Raymond, Elizabeth G; Coeytaux, Francine; Winikoff, Beverly
2018-04-01
We aimed to document the experience of buying abortion pills from online vendors that do not require a prescription and to evaluate the active ingredient content of the pills received. We searched the internet to identify a convenience sample of websites that sold mifepristone and misoprostol to purchasers in the United States and attempted to order these products. We documented price, shipping time and other aspects of ordering. We sent the samples received to a testing laboratory that measured the amount of active ingredient in individual tablets. We identified 18 websites and ordered 22 products: 20 mifepristone-misoprostol combination products and 2 that contained only misoprostol. We received 18 combination products and the 2 misoprostol products from 16 different sites. No site required a prescription or any relevant medical information. The time between order and receipt of the 20 products ranged from 3 to 21 business days (median 9.5 days). The price for the 18 combination products ranged from $110 to $360, including shipping and fees; the products without mifepristone cost less. Chemical assays found that the 18 tablets labeled 200 mg mifepristone contained between 184.3 mg and 204.1 mg mifepristone, while the 20 tablets labeled 200 mcg misoprostol contained between 34.1 mcg and 201.4 mcg of the active ingredient. Obtaining abortion medications from online pharmaceutical websites is feasible in the United States. The mifepristone tablets received contained within 8% of the labeled amount of active agent. The misoprostol tablets all contained that compound but usually less than labeled. Given our findings, we expect that some people for whom clinic-based abortion is not easily available or acceptable may consider self-sourcing pills from the internet to be a rational option. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Determining Reduced Order Models for Optimal Stochastic Reduced Order Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonney, Matthew S.; Brake, Matthew R.W.
2015-08-01
The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against one another, and against the truth model, for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results to construct a polynomial curve that better represents the output data. The methods are compared against a parameter sweep and a distribution propagation in which the first four statistical moments are used as the basis for comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared on the time required to evaluate each model; the Meta-Model requires the least computation time by a significant margin. Each of the five models provided accurate results in a reasonable time frame. The choice of model depends on the availability of the high-fidelity model and on how many evaluations can be performed. The output distribution is examined using a large Monte Carlo simulation alongside a reduced simulation using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results, and the stochastic reduced order modeling technique produced less error than exhaustive sampling for the majority of the methods.
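Hyper-dual numbers obtain sensitivities by carrying derivative information through arithmetic rather than by finite differencing. A minimal sketch of the underlying idea, using first-order dual numbers on a toy function rather than the Brake-Reuss beam model (true hyper-dual numbers carry two infinitesimal parts plus a cross term, yielding exact second derivatives as well):

```python
# Minimal sketch of forward-mode (dual-number) differentiation, the idea
# underlying the Hyper-Dual sensitivity method named above. Only the
# first-order part is kept here; the model function is a toy stand-in.

class Dual:
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.real + o.real, self.eps + o.eps)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule carried by the infinitesimal part
        return Dual(self.real * o.real, self.real * o.eps + self.eps * o.real)
    __rmul__ = __mul__

def f(x):          # toy model response, stand-in for the beam model
    return 3 * x * x + 2 * x

d = f(Dual(2.0, 1.0))      # seed eps = 1 to differentiate w.r.t. x
print(d.real, d.eps)       # 16.0 and the exact sensitivity df/dx = 14.0
```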
Amount of Genetics Education is Low Among Didactic Programs in Dietetics.
Beretich, Kaitlan; Pope, Janet; Erickson, Dawn; Kennedy, Angela
2017-01-01
Nutritional genomics is a growing area of research. Research has shown registered dietitian nutritionists (RDNs) have limited knowledge of genetics. Limited research is available regarding how didactic programs in dietetics (DPDs) meet the genetics knowledge requirement of the Accreditation Council for Education in Nutrition and Dietetics (ACEND®). The purpose of this study was to determine the extent to which the study of nutritional genomics is incorporated into undergraduate DPDs in response to the Academy of Nutrition and Dietetics position statement on nutritional genomics. The sample included 62 DPD directors in the U.S. Most programs (63.9%) reported the ACEND genetics knowledge requirement was being met by integrating genetic information into the current curriculum. However, 88.7% of programs reported devoting only 1-10 clock hours to genetics education. While 60.3% of directors surveyed reported they were confident in their program's ability to teach information related to genetics, only 6 directors reported having specialized training in genetics. The overall amount of clock hours devoted to genetics education is low. DPD directors, faculty, and instructors are not adequately trained to provide this education to students enrolled in DPDs. Therefore, the primary recommendation of this study is the development of a standardized curriculum for genetics education in DPDs.
27 CFR 44.123 - Amount of bond.
Code of Federal Regulations, 2011 CFR
2011-04-01
... bond: Provided, That the amount of any such bond (or the total amount where original and strengthening... received into the warehouse until a strengthening or superseding bond is filed, as required by § 44.124 or...
27 CFR 44.123 - Amount of bond.
Code of Federal Regulations, 2010 CFR
2010-04-01
... bond: Provided, That the amount of any such bond (or the total amount where original and strengthening... received into the warehouse until a strengthening or superseding bond is filed, as required by § 44.124 or...
The relationship between students’ study habits, happiness and depression
Bahrami, Susan; Rajaeepour, Saeed; Rizi, Hasan Ashrafi; Zahmatkesh, Monereh; Nematolahi, Zahra
2011-01-01
BACKGROUND: One of the important requirements for cultural, social and even economic development is having a book-loving nation. Achieving this requires purposeful and continuous programming. The purpose of this research was to determine the relationship between students' study habits, happiness and depression in Isfahan University of Medical Science. METHODS: This research was a descriptive, correlational survey. The statistical population included all MSc and PhD students in the second semester of the Isfahan University of Medical Science (263 students). Stratified random sampling was used to select a sample of 100 students. Data collection instruments were the Beck Depression Inventory (BDI), the Oxford Happiness Inventory and a researcher-made questionnaire to determine the amount of students' study. The validity of these questionnaires was established through structure- and content-related validity, and their reliability was calculated by Cronbach's alpha coefficient for the first (r = 0.94), second (r = 0.91) and third (r = 0.85) questionnaire. Research findings were analyzed through descriptive and inferential statistics. RESULTS: Findings showed that 68.8 percent of students study less than 5 hours and only 2.5 percent of students study more than 10 hours. Sixty-five percent of students had a high level of happiness and 35 percent a medium level. Sixty percent of students showed no symptoms of depression, while 7.5 percent had depression symptoms. There was no significant relationship between happiness and studying, but there were significant negative relationships between studying and depression and between happiness and depression. CONCLUSIONS: The amount of study and the tendency to read are among the most important indices of human development, fostering the potential abilities needed for a full human life and guarding against one-dimensional thinking. Thus, finding ways to encourage students to study is essential to achieving a healthy and developed society. PMID:22224110
Mayville, Francis C; Wigent, Rodney J; Schwartz, Joseph B
2006-01-01
The purpose of this work was to determine the total amount of water contained in dry powder and wet bead samples of microcrystalline cellulose (MCC; Avicel PH-101), taken from various stages of the extrusion/marumerization process used to make beads, and to determine the kinetic rates of water release from each sample. These samples were allowed to equilibrate in controlled humidity chambers at 25 degrees C. The total amount of water in each sample, after equilibration, was determined by thermogravimetric analysis (TGA) as a function of temperature. The rates of water release from these samples were determined by isothermal gravimetric analysis (ITGA) as a function of time. Analysis of the results suggests that water was released from these systems by several different kinetic mechanisms: zero-order, second-order, and diffusion-controlled kinetics. It is believed that all three kinetic mechanisms occur at the same time; however, only one mechanism will be prominent. The prominent mechanism depended on the amount of water present in the sample.
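In principle, the three release laws can be discriminated by linearizing isothermal mass-loss data: zero order is linear in t, second order is linear as 1/m versus t, and diffusion control is linear in sqrt(t). A hedged sketch with synthetic data, not the authors' ITGA measurements:

```python
# Hedged sketch: discriminating zero-order, second-order, and diffusion-
# controlled water release by linearization of isothermal mass-loss data m(t).
# The data below are synthetic and diffusion-controlled by construction.
import numpy as np

t = np.linspace(1.0, 80.0, 40)              # time (arbitrary units)
m = 5.0 - 0.5 * np.sqrt(t)                  # synthetic diffusion-controlled loss

def r_squared(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

print("zero order  :", r_squared(t, m))           # m vs t
print("second order:", r_squared(t, 1.0 / m))     # 1/m vs t
print("diffusion   :", r_squared(np.sqrt(t), m))  # m vs sqrt(t) -- best fit here
```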
Carro, N; García, I; Ignacio, M-C; Llompart, M; Yebra, M-C; Mouteira, A
2002-10-01
A sample-preparation procedure (extraction and saponification) using microwave energy is proposed for the determination of organochlorine pesticides in oyster samples. A Plackett-Burman factorial design was used to optimize the microwave-assisted extraction and mild saponification on a freeze-dried sample spiked with a mixture of aldrin, endrin, dieldrin, heptachlor, heptachlor epoxide, isodrin, trans-nonachlor, p,p'-DDE, and p,p'-DDD. Six variables (solvent volume, extraction time, extraction temperature, percentage of acetone in the extraction solvent, amount of sample, and volume of NaOH solution) were considered in the optimization process. The results show that the amount of sample is statistically significant for dieldrin, aldrin, p,p'-DDE, heptachlor, and trans-nonachlor, and solvent volume for dieldrin, aldrin, and p,p'-DDE. The volume of NaOH solution is statistically significant for aldrin and p,p'-DDE only. Extraction temperature and extraction time appear to be the main factors determining the efficiency of the extraction process for isodrin and p,p'-DDE, respectively. The optimized procedure was compared with conventional Soxhlet extraction.
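A Plackett-Burman design screens many factors in few runs by assigning each factor to a column of a two-level orthogonal array. As a hedged illustration (the classical 12-run generator and the factor-to-column assignment are assumptions, not the authors' exact design):

```python
# Hedged sketch: constructing a 12-run Plackett-Burman screening design for
# six factors of the kind listed above. The generator row is the classical
# Plackett-Burman (1946) 12-run vector; cyclic shifts give 11 rows and a
# final all-low run completes the design.
import numpy as np

gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]          # 11-column generator
rows = [np.roll(gen, i) for i in range(11)] + [[-1] * 11]   # shifts + all-low run
design = np.array(rows)                                     # shape (12, 11)

factors = ["solvent_vol", "time", "temperature",
           "pct_acetone", "sample_amount", "naoh_vol"]
for run in design[:, :len(factors)]:                        # use 6 of 11 columns
    print(dict(zip(factors, run)))                          # one run's settings
```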
Effects of tongue cleaning on bacterial flora in tongue coating and dental plaque: a crossover study
2014-01-01
Background The effects of tongue cleaning on reconstruction of bacterial flora in dental plaque and in the tongue coating itself are obscure. We assessed changes in the amounts of total bacteria as well as Fusobacterium nucleatum in tongue coating and dental plaque specimens obtained with and without tongue cleaning. Methods We conducted a randomized examiner-blind crossover study using 30 volunteers (mean age 23.7 ± 3.2 years) without periodontitis. After random division into 2 groups, 1 group was instructed to clean the tongue, while the other was not. On days 1 (baseline), 3, and 10, tongue coating and dental plaque samples were collected after recording the tongue coating score (Winkel tongue coating index: WTCI). After a washout period of 3 weeks, the same examinations were performed with the subjects allocated to the alternate group. Genomic DNA was purified from the samples and applied to SYBR® Green-based real-time PCR to quantify the amounts of total bacteria and F. nucleatum. Results After 3 days, the WTCI score recovered to baseline, though the amount of total bacteria in tongue coating was significantly lower than at baseline. In plaque samples, the bacterial amounts on days 3 and 10 were significantly lower than at baseline with and without tongue cleaning. Principal component analysis showed that variations of bacterial amounts in the tongue coating and dental plaque samples were independent of each other. Furthermore, we found a strong association between the amounts of total bacteria and F. nucleatum in both types of specimen. Conclusions Tongue cleaning reduced the amount of bacteria in tongue coating. However, the cleaning made no obvious contribution to inhibiting dental plaque formation. Furthermore, recovery of the total bacterial amount induced an increase in F. nucleatum in both tongue coating and dental plaque. Thus, it is recommended that tongue cleaning and tooth brushing both be performed to promote oral health. PMID:24423407
Matsui, Miki; Chosa, Naoyuki; Shimoyama, Yu; Minami, Kentaro; Kimura, Shigenobu; Kishi, Mitsuo
2014-01-14
The effects of tongue cleaning on reconstruction of bacterial flora in dental plaque and in the tongue coating itself are obscure. We assessed changes in the amounts of total bacteria as well as Fusobacterium nucleatum in tongue coating and dental plaque specimens obtained with and without tongue cleaning. We conducted a randomized examiner-blind crossover study using 30 volunteers (mean age 23.7 ± 3.2 years) without periodontitis. After random division into 2 groups, 1 group was instructed to clean the tongue, while the other was not. On days 1 (baseline), 3, and 10, tongue coating and dental plaque samples were collected after recording the tongue coating score (Winkel tongue coating index: WTCI). After a washout period of 3 weeks, the same examinations were performed with the subjects allocated to the alternate group. Genomic DNA was purified from the samples and applied to SYBR® Green-based real-time PCR to quantify the amounts of total bacteria and F. nucleatum. After 3 days, the WTCI score recovered to baseline, though the amount of total bacteria in tongue coating was significantly lower than at baseline. In plaque samples, the bacterial amounts on days 3 and 10 were significantly lower than at baseline with and without tongue cleaning. Principal component analysis showed that variations of bacterial amounts in the tongue coating and dental plaque samples were independent of each other. Furthermore, we found a strong association between the amounts of total bacteria and F. nucleatum in both types of specimen. Tongue cleaning reduced the amount of bacteria in tongue coating. However, the cleaning made no obvious contribution to inhibiting dental plaque formation. Furthermore, recovery of the total bacterial amount induced an increase in F. nucleatum in both tongue coating and dental plaque. Thus, it is recommended that tongue cleaning and tooth brushing both be performed to promote oral health.
Electrostatic sampling of trace DNA from clothing.
Zieger, Martin; Defaux, Priscille Merciani; Utz, Silvia
2016-05-01
During acts of physical aggression, offenders frequently come into contact with the clothes of the victim, thereby leaving traces of DNA-bearing biological material on the garments. Since tape-lifting and swabbing, the currently established methods for non-destructive trace DNA sampling from clothing, both have shortcomings in collection efficiency and handling, we investigated a new collection method for these challenging samples. Testing two readily available electrostatic devices for their potential to sample biological material from garments made of different fabrics, we found one of them, the electrostatic dust print lifter (DPL), to perform comparably to well-established sampling with wet cotton swabs. In simulated aggression scenarios, we had the same success rate for establishing single aggressor profiles suitable for database submission with both the DPL and wet swabbing. However, we lost a substantial amount of information with electrostatic sampling, since almost no mixed aggressor-victim profiles suitable for database entry could be established, compared with conventional swabbing. This study serves as a proof of principle for electrostatic DNA sampling from items of clothing. The technique still requires optimization before it can be used in real casework, but we are confident that in the future it could be an efficient and convenient addition to the toolbox of forensic practitioners.
2012-01-01
Background High-quality program data are critical for managing, monitoring, and evaluating national HIV treatment programs. By 2009, the Malawi Ministry of Health had initiated more than 270,000 patients on HIV treatment at 377 sites. Quarterly supervision of these antiretroviral therapy (ART) sites ensures high quality care, but the time currently dedicated to exhaustive record review and data cleaning detracts from other critical components. The exhaustive record review is unlikely to be sustainable long term because of the resources required and the increasing number of patients on ART. This study quantifies the current levels of data quality and evaluates Lot Quality Assurance Sampling (LQAS) as a tool to prioritize sites with low data quality, thus lowering costs while maintaining sufficient quality for program monitoring and patient care. Methods In January 2010, a study team joined supervision teams at 19 sites purposely selected to reflect the variety of ART sites. During the exhaustive data review, the time allocated to data cleaning and the data discrepancies found were documented. The team then randomly sampled 76 records from each site, recording secondary outcomes and the time required for sampling. Results At the 19 sites, only 1.2% of records had discrepancies in patient outcomes and 0.4% in treatment regimen. However, data cleaning took 28.5 hours in total, suggesting that data cleaning for all 377 ART sites would require over 350 supervision-hours quarterly. The LQAS tool accurately identified the sites with low data quality, reduced the time for data cleaning by 70%, and allowed for reporting on secondary outcomes. Conclusions Most sites maintained high-quality records. In spite of this, data cleaning required significant amounts of time with little effect on program estimates of patient outcomes. LQAS conserves resources while maintaining sufficient data quality for program assessment and management, allowing for quality patient care. PMID:22776745
Hedt-Gauthier, Bethany L; Tenthani, Lyson; Mitchell, Shira; Chimbwandira, Frank M; Makombe, Simon; Chirwa, Zengani; Schouten, Erik J; Pagano, Marcello; Jahn, Andreas
2012-07-09
High-quality program data are critical for managing, monitoring, and evaluating national HIV treatment programs. By 2009, the Malawi Ministry of Health had initiated more than 270,000 patients on HIV treatment at 377 sites. Quarterly supervision of these antiretroviral therapy (ART) sites ensures high quality care, but the time currently dedicated to exhaustive record review and data cleaning detracts from other critical components. The exhaustive record review is unlikely to be sustainable long term because of the resources required and the increasing number of patients on ART. This study quantifies the current levels of data quality and evaluates Lot Quality Assurance Sampling (LQAS) as a tool to prioritize sites with low data quality, thus lowering costs while maintaining sufficient quality for program monitoring and patient care. In January 2010, a study team joined supervision teams at 19 sites purposely selected to reflect the variety of ART sites. During the exhaustive data review, the time allocated to data cleaning and the data discrepancies found were documented. The team then randomly sampled 76 records from each site, recording secondary outcomes and the time required for sampling. At the 19 sites, only 1.2% of records had discrepancies in patient outcomes and 0.4% in treatment regimen. However, data cleaning took 28.5 hours in total, suggesting that data cleaning for all 377 ART sites would require over 350 supervision-hours quarterly. The LQAS tool accurately identified the sites with low data quality, reduced the time for data cleaning by 70%, and allowed for reporting on secondary outcomes. Most sites maintained high-quality records. In spite of this, data cleaning required significant amounts of time with little effect on program estimates of patient outcomes. LQAS conserves resources while maintaining sufficient data quality for program assessment and management to allow for quality patient care.
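The LQAS decision logic can be sketched as a simple binomial threshold rule. Only the per-site sample size (n = 76) comes from the abstract; the 2% and 10% quality thresholds below are illustrative assumptions, not the study's parameters.

```python
# Hedged sketch of Lot Quality Assurance Sampling: with n records sampled per
# site, pick a decision threshold d so that sites with an acceptably low error
# rate are rarely flagged while high-error sites are usually caught.
from scipy.stats import binom

n, p_good, p_bad = 76, 0.02, 0.10      # n from the abstract; rates assumed
for d in range(10):
    alpha = 1 - binom.cdf(d, n, p_good)   # P(flag | site is actually good)
    beta = binom.cdf(d, n, p_bad)         # P(miss | site is actually bad)
    print(f"d={d}: false-alarm={alpha:.3f}, miss={beta:.3f}")

# A site is classified as 'low data quality' when the sampled errors exceed
# the chosen d; only flagged sites then receive the exhaustive record review.
```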
Hou, Sen; Sun, Lili; Wieczorek, Stefan A; Kalwarczyk, Tomasz; Kaminski, Tomasz S; Holyst, Robert
2014-01-15
Fluorescent double-stranded DNA (dsDNA) molecules labeled at both ends are commonly produced by annealing complementary single-stranded DNA (ssDNA) molecules labeled with fluorescent dyes at the same (3' or 5') end. Because the labeling efficiency of ssDNA is smaller than 100%, the resulting dsDNA molecules carry two dyes, one dye, or none. Existing methods are insufficient to measure the percentage of the doubly-labeled dsDNA component in a fluorescent DNA sample; it is even difficult to distinguish the doubly-labeled component from the singly-labeled one. Accurate measurement of the percentage of the doubly-labeled dsDNA component is a critical prerequisite for quantitative biochemical measurements and has puzzled scientists for decades. We established a fluorescence correlation spectroscopy (FCS) system to measure the percentage of doubly labeled dsDNA (PDL) in the total fluorescent dsDNA pool. The method is based on comparative analysis of the given sample and a reference dsDNA sample prepared by adding a certain amount of unlabeled ssDNA to the original ssDNA solution. From FCS autocorrelation functions, we obtain the number of fluorescent dsDNA molecules in the focal volume of the confocal microscope, and from it PDL. We also calculate the labeling efficiency of the ssDNA. The method requires a minimal amount of material: samples have DNA concentrations in the nanomolar range and volumes of tens of microliters. We verified our method by using the restriction enzyme Hind III to cleave the fluorescent dsDNA. The kinetics of the reaction depends strongly on PDL, a critical parameter for quantitative biochemical measurements. Copyright © 2013 Elsevier B.V. All rights reserved.
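The combinatorics behind PDL can be made explicit. If each strand carries a dye independently with labeling efficiency p, annealing yields duplexes with two, one, or zero dyes, and among the fluorescent duplexes the doubly-labeled fraction is p^2 / (p^2 + 2p(1-p)) = p / (2 - p). A sketch of this idealized model (not the FCS measurement procedure itself):

```python
# Hedged sketch of the idealized labeling model: strands carry a dye
# independently with efficiency p, so among *fluorescent* duplexes the
# doubly-labeled fraction is PDL = p / (2 - p), with the inverse relation
# p = 2*PDL / (1 + PDL).

def pdl_from_efficiency(p: float) -> float:
    return p / (2.0 - p)

def efficiency_from_pdl(pdl: float) -> float:
    return 2.0 * pdl / (1.0 + pdl)       # inverse relation

print(pdl_from_efficiency(0.8))          # ~0.667: 80% labeling -> 2/3 doubly labeled
print(efficiency_from_pdl(2 / 3))        # recovers 0.8
```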
Lipid peroxidation and antioxidants status in human malignant and non-malignant thyroid tumours.
Stanley, J A; Neelamohan, R; Suthagar, E; Vengatesh, G; Jayakumar, J; Chandrasekaran, M; Banu, S K; Aruldhas, M M
2016-06-01
Thyroid epithelial cells produce moderate amounts of reactive oxygen species that are physiologically required for thyroid hormone synthesis. Nevertheless, when produced in excessive amounts, they may become toxic. The present study aimed to compare lipid peroxidation (LPO) and the antioxidant enzymes superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GPx), as well as non-protein thiols (reduced glutathione, GSH), in human thyroid tissues with malignant and non-malignant disorders. The study used human thyroid tissues and blood samples from 157 women (147 diseased and 10 normal). Thyroid hormones, oxidative stress markers and antioxidants were estimated by standard methods. LPO significantly increased in most of the papillary thyroid carcinoma (PTC: 82.9%) and follicular thyroid adenoma (FTA: 72.9%) tissues, whilst in a majority of nodular goitre (69.2%) and Hashimoto's thyroiditis (HT: 73.7%) thyroid tissues it remained unaltered. GSH increased in PTC (55.3%) and remained unaltered in FTA (97.3%) and all other goitre samples studied. SOD increased in PTC (51.1%) and all other malignant thyroid tissues studied. CAT remained unaltered in PTC (95.7%), FTA (97.3%) and all other non-malignant samples (HT, MNG, TMNG) studied. GPx increased in PTC (63.8%) and all other malignant thyroid tissues, and remained unaltered in many of the FTA (91.9%) tissues and all other non-malignant samples (HT, MNG, TMNG) studied. In non-malignant thyroid tumours the oxidant-antioxidant balance was undisturbed, whilst in malignant tumours the balance was altered; the change in r value observed in the LPO and SOD pairs between normal and PTC tissues, and also in many pairs with multi-nodular goitre (MNG)/toxic MNG tissues, may be used as a marker to differentiate/detect different malignant/non-malignant thyroid tumours. © The Author(s) 2015.
Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling
NASA Technical Reports Server (NTRS)
Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon
2010-01-01
We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the carbon and water cycles modeled under both equilibrium (balance between vegetation and climate) and non-equilibrium (water added through irrigation) conditions. We postulate that the degree to which irrigated dry lands depart from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is considered the irrigation requirement. For July, results show that spray irrigation supplied an additional 1.3 mm of water per occurrence, with occurrences every 24.6 hours. In contrast, drip irrigation required only 0.6 mm every 45.6 hours, or 46% of the per-occurrence amount simulated for spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use where soil salinity is not important and 66% in saline lands.
46 CFR 76.15-5 - Quantity, pipe sizes, and discharge rate.
Code of Federal Regulations, 2010 CFR
2010-10-01
... PROTECTION EQUIPMENT Carbon Dioxide Extinguishing Systems, Details § 76.15-5 Quantity, pipe sizes, and discharge rate. (a) General. The amount of carbon dioxide required for each space shall be as determined by... the purpose of determining the amount of carbon dioxide required, a cargo compartment will be...
46 CFR 76.15-5 - Quantity, pipe sizes, and discharge rate.
Code of Federal Regulations, 2012 CFR
2012-10-01
... PROTECTION EQUIPMENT Carbon Dioxide Extinguishing Systems, Details § 76.15-5 Quantity, pipe sizes, and discharge rate. (a) General. The amount of carbon dioxide required for each space shall be as determined by... the purpose of determining the amount of carbon dioxide required, a cargo compartment will be...
46 CFR 76.15-5 - Quantity, pipe sizes, and discharge rate.
Code of Federal Regulations, 2014 CFR
2014-10-01
... PROTECTION EQUIPMENT Carbon Dioxide Extinguishing Systems, Details § 76.15-5 Quantity, pipe sizes, and discharge rate. (a) General. The amount of carbon dioxide required for each space shall be as determined by... the purpose of determining the amount of carbon dioxide required, a cargo compartment will be...
46 CFR 76.15-5 - Quantity, pipe sizes, and discharge rate.
Code of Federal Regulations, 2013 CFR
2013-10-01
... PROTECTION EQUIPMENT Carbon Dioxide Extinguishing Systems, Details § 76.15-5 Quantity, pipe sizes, and discharge rate. (a) General. The amount of carbon dioxide required for each space shall be as determined by... the purpose of determining the amount of carbon dioxide required, a cargo compartment will be...
24 CFR 242.23 - Maximum mortgage amounts and cash equity requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Maximum mortgage amounts and cash equity requirements. 242.23 Section 242.23 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR HOUSING-FEDERAL HOUSING...
34 CFR Appendix C to Part 379 - Calculating Required Matching Amount
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 2 2010-07-01 2010-07-01 false Calculating Required Matching Amount C Appendix C to Part 379 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF SPECIAL EDUCATION AND REHABILITATIVE SERVICES, DEPARTMENT OF EDUCATION PROJECTS WITH INDUSTRY Pt. 379, App...
Fluid sampling apparatus and method
Yeamans, D.R.
1998-02-03
Incorporation of a bellows in a sampling syringe eliminates ingress of contaminants, permits replication of amounts and compression of multiple sample injections, and enables remote sampling for off-site analysis. 3 figs.
Su, Xiaoquan; Wang, Xuetao; Jing, Gongchao; Ning, Kang
2014-04-01
The number of microbial community samples is increasing at an exponential rate. Data mining among microbial community samples could facilitate the discovery of valuable biological information still hidden in these massive data. However, current methods for comparison among microbial communities are limited in their ability to process large numbers of samples, each with a complex community structure. We have developed an optimized GPU-based software, GPU-Meta-Storms, to efficiently measure the quantitative phylogenetic similarity among massive numbers of microbial community samples. Our results show that GPU-Meta-Storms can compute the pair-wise similarity scores for 10 240 samples within 20 min, a speed-up of >17 000 times compared with a single-core CPU and >2600 times compared with a 16-core CPU. The high performance of GPU-Meta-Storms could therefore facilitate in-depth data mining among massive microbial community samples and make real-time analysis and monitoring of temporal or conditional changes in microbial communities possible. GPU-Meta-Storms is implemented in CUDA (Compute Unified Device Architecture) and C++. Source code is available at http://www.computationalbioenergy.org/meta-storms.html.
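The motivation for the GPU port is the sheer all-pairs workload. The sketch below is purely illustrative: a toy cosine-similarity kernel stands in for the Meta-Storms phylogenetic scoring function, and the number of taxa is an assumption.

```python
# Illustrative sketch only: the all-pairs workload that motivates the GPU
# implementation. A toy (non-phylogenetic) cosine similarity stands in for
# the Meta-Storms score, which additionally weights shared taxa by phylogeny.
import numpy as np

n_samples, n_taxa = 10_240, 500                 # sample count from the abstract; taxa assumed
x = np.random.rand(n_samples, n_taxa)
x /= np.linalg.norm(x, axis=1, keepdims=True)   # normalize abundance vectors

print(n_samples * (n_samples - 1) // 2)         # 52,423,680 pairwise scores
sim = x[:100] @ x[:100].T                       # one 100x100 tile of the matrix
# A GPU evaluates many such tiles concurrently, which is where the reported
# >17 000x speed-up over a single CPU core comes from.
```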
Method and apparatus for measuring the gas permeability of a solid sample
Carstens, D.H.W.
1984-01-27
The disclosure is directed to an apparatus and method for measuring the permeability of a gas in a sample. The gas is allowed to reach a steady flow rate through the sample. A measurable amount of the gas is collected during a given time period and then delivered to a sensitive quadrupole. The quadrupole signal, adjusted for background, is proportional to the amount of gas collected during the time period. The quadrupole can be calibrated with a standard helium leak. The gas can be deuterium and the sample can be polyvinyl alcohol.
Principles of gene microarray data analysis.
Mocellin, Simone; Rossi, Carlo Riccardo
2007-01-01
The development of several gene expression profiling methods, such as comparative genomic hybridization (CGH), differential display, serial analysis of gene expression (SAGE), and gene microarray, together with the sequencing of the human genome, has provided an opportunity to monitor and investigate the complex cascade of molecular events leading to tumor development and progression. The availability of such large amounts of information has shifted the attention of scientists towards a nonreductionist approach to biological phenomena. High-throughput technologies can be used to follow changing patterns of gene expression over time. Among them, gene microarray has become prominent because it is easier to use, does not require large-scale DNA sequencing, and allows for the parallel quantification of thousands of genes from multiple samples. Gene microarray technology is rapidly spreading worldwide and has the potential to drastically change the therapeutic approach to patients affected by tumors. Therefore, it is of paramount importance for both researchers and clinicians to know the principles underlying the analysis of the huge amounts of data generated with microarray technology.
NASA Astrophysics Data System (ADS)
Valkiers, S.; Ding, T.; Inkret, M.; Ruße, K.; Taylor, P.
2005-04-01
A new 2 kg batch of SiO2 crystals, IRMM-018a, as well as the existing NBS28 silica sand (or RM 8546, obtained by I. Friedman from the U.S. Geological Survey), have been characterised for their "absolute" silicon isotope composition and molar mass. The amount-of-substance measurements needed for that purpose were performed on the IRMM amount comparator (Avogadro II) on samples from these batches, which were converted to gaseous silicon tetrafluoride (SiF4). The isotope amount ratio measurements were calibrated by means of synthesized isotope amount ratios realized in the form of synthetic Si isotope mixtures, a measurement procedure that makes them SI-traceable. IRMM-018a is intended to be used as an isotope reference material for isotope amount measurements in geochemical and other isotope abundance studies of silicon. It is distributed in samples of about 0.1 mol and will replace IRMM-018 (exhausted).
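How a molar mass follows from calibrated isotope amount fractions can be shown in a few lines: the molar mass is the abundance-weighted sum of the isotopic masses. The abundances used below are nominal natural-silicon values, not the certified IRMM-018a or NBS28 results.

```python
# Hedged illustration: molar mass from isotope amount fractions,
# M = sum_i x_i * M_i. Values are nominal natural-Si figures, not the
# certified results of the batches characterised above.
masses = {"Si-28": 27.9769265, "Si-29": 28.9764947, "Si-30": 29.9737701}  # u
fractions = {"Si-28": 0.92223, "Si-29": 0.04685, "Si-30": 0.03092}        # mol/mol

molar_mass = sum(fractions[i] * masses[i] for i in masses)
print(f"{molar_mass:.5f} g/mol")   # ~28.0856, consistent with natural silicon
```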
Tripathi, Ashish; McNulty, Ian; Shpyrko, Oleg G
2014-01-27
Ptychographic coherent x-ray diffractive imaging is a form of scanning microscopy that does not require optics to image a sample. A series of scanned coherent diffraction patterns recorded from multiple overlapping illuminated regions on the sample is inverted numerically to retrieve its image. The technique recovers the phase, which is lost in detecting the diffraction patterns, by using experimentally known constraints, in this case the measured diffraction intensities and the assumed scan positions on the sample. The spatial resolution of the recovered image is limited by the angular extent over which the diffraction patterns are recorded and by how well these constraints are known. Here, we explore how reconstruction quality degrades with uncertainties in the scan positions. We show experimentally that large errors in the assumed scan positions can be numerically determined and corrected using conjugate gradient descent methods. We also explore in simulations, as a function of the signal-to-noise ratio of the diffraction patterns and the amount of overlap between adjacent scan positions, just how large these errors can be while remaining tractable to this method.
Current developments in forensic interpretation of mixed DNA samples (Review).
Hu, Na; Cong, Bin; Li, Shujin; Ma, Chunling; Fu, Lihong; Zhang, Xiaojing
2014-05-01
A number of recent improvements have provided contemporary forensic investigations with a variety of tools to improve the analysis of mixed DNA samples in criminal investigations, producing notable improvements in the analysis of complex trace samples in cases of sexual assault and homicide. Mixed DNA contains DNA from two or more contributors, compounding DNA analysis by combining DNA from one or more major contributors with small amounts of DNA from potentially numerous minor contributors. These samples are characterized by a high probability of drop-out or drop-in combined with elevated stutter, significantly increasing analysis complexity. At some loci, minor contributor alleles may be completely obscured due to amplification bias or over-amplification, creating the illusion of additional contributors. Thus, estimating the number of contributors and separating contributor genotypes at a given locus is significantly more difficult in mixed DNA samples, requiring the application of specialized protocols that have only recently been widely commercialized and standardized. Over the last decade, the accuracy and repeatability of mixed DNA analyses available to conventional forensic laboratories has greatly advanced in terms of laboratory technology, mathematical models and biostatistical software, generating more accurate, rapid and readily available data for legal proceedings and criminal cases.
Current developments in forensic interpretation of mixed DNA samples (Review)
HU, NA; CONG, BIN; LI, SHUJIN; MA, CHUNLING; FU, LIHONG; ZHANG, XIAOJING
2014-01-01
A number of recent improvements have provided contemporary forensic investigations with a variety of tools to improve the analysis of mixed DNA samples in criminal investigations, producing notable improvements in the analysis of complex trace samples in cases of sexual assault and homicide. Mixed DNA contains DNA from two or more contributors, compounding DNA analysis by combining DNA from one or more major contributors with small amounts of DNA from potentially numerous minor contributors. These samples are characterized by a high probability of drop-out or drop-in combined with elevated stutter, significantly increasing analysis complexity. At some loci, minor contributor alleles may be completely obscured due to amplification bias or over-amplification, creating the illusion of additional contributors. Thus, estimating the number of contributors and separating contributor genotypes at a given locus is significantly more difficult in mixed DNA samples, requiring the application of specialized protocols that have only recently been widely commercialized and standardized. Over the last decade, the accuracy and repeatability of mixed DNA analyses available to conventional forensic laboratories has greatly advanced in terms of laboratory technology, mathematical models and biostatistical software, generating more accurate, rapid and readily available data for legal proceedings and criminal cases. PMID:24748965
Biochemical surface modification of Co-Cr-Mo.
Puleo, D A
1996-01-01
Because of the limited mechanical properties of tissue substitutes formed by culturing cells on polymeric scaffolds, other approaches to tissue engineering must be explored for applications that require complete and immediate ability to bear weight, e.g. total joint replacements. Biochemical surface modification offers a way to partially regulate events at the bone-implant interface to obtain preferred tissue responses. Tresyl chloride, gamma-aminopropyltriethoxysilane (APS) and p-nitrophenyl chloroformate (p-NPC) immobilization schemes were used to couple a model enzyme, trypsin, to bulk samples of Co-Cr-Mo. For comparison, samples were simply adsorbed with protein. The three derivatization schemes resulted in different patterns and levels of activity. Tresyl chloride was not effective in immobilizing active enzyme on Co-Cr-Mo. Aqueous silanization with 12.5% APS resulted in optimal immobilized activity. Activity on samples derivatized with 0.65 mg p-NPC cm⁻² was four to five times greater than that on samples simply adsorbed with enzyme or optimally derivatized with APS, and about eight times that on tresylated samples. This work demonstrates that, although different methods have different effectiveness, chemical derivatization can be used to alter the amount and/or stability of biomolecules immobilized on the surface of Co-Cr-Mo.
Chung, K Y; Carter, G J; Stancliffe, J D
1999-02-01
A new European/International Standard (prEN ISO 10882-1) on the sampling of airborne particulates generated during welding and allied processes has been proposed. The use of a number of samplers and sampling procedures is allowable within the defined protocol. The influence of these variables on welding fume exposures was examined during welding and grinding of stainless and mild steel using the gas metal arc (GMA) and flux-cored arc (FCA) processes, and during GMA welding of aluminium. Results show that the use of any of the samplers will not give significantly different measured exposures. The effect on exposure measurement of placing the samplers on either side of the head was variable; consequently, sampling position cannot be meaningfully defined. All samplers collected significant amounts of grinding dust; therefore, gravimetric determination of welding fume exposure in atmospheres containing grinding dust will be inaccurate. A new size-selective sampler can, to some extent, be used to give a more accurate estimate of exposure. The reliability of fume analysis data for welding consumables has caused concern, and the reason for the differences that existed between the material safety data sheets and the analysis of collected fume samples requires further investigation.
Farhoud, Murtada H; Wessels, Hans J C T; Wevers, Ron A; van Engelen, Baziel G; van den Heuvel, Lambert P; Smeitink, Jan A
2005-01-01
In 2D-based comparative proteomics of scarce samples, such as limited patient material, established methods for prefractionation and subsequent use of different narrow-range IPG strips to increase overall resolution are difficult to apply. Moreover, a high number of samples, a prerequisite for drawing meaningful conclusions when pathological and control samples are compared, increases the associated workload almost exponentially. Here, we introduce a novel, effective, and economical method designed to obtain maximum 2D resolution while maintaining the high throughput necessary to perform large-scale comparative proteomics studies. The method is based on connecting different IPG strips serially head-to-tail, so that a complete line of IPG strips with sequential pH regions can be focused in the same experiment. We show that when 3 IPG strips (together covering the pH range of 3-11) are connected head-to-tail, optimal resolution is achieved along the whole pH range. Sample consumption, time required, and associated costs are reduced by almost 70%, and the workload is reduced significantly.
Implementation of Wi-Fi Signal Sampling on an Android Smartphone for Indoor Positioning Systems.
Liu, Hung-Huan; Liu, Chun
2017-12-21
Collecting and maintaining radio fingerprints for wireless indoor positioning systems involves considerable time and labor. We have proposed the quick radio fingerprint collection (QRFC) algorithm, which employs the built-in accelerometer of Android smartphones for step detection to assist in collecting radio fingerprints. In the present study, we divided the algorithm into moving sampling (MS) and stepped MS (SMS), and we describe the implementation of both algorithms and their comparison. Technical details and common errors concerning the use of Android smartphones to collect Wi-Fi radio beacons were surveyed and discussed. The results of signal sampling experiments performed in a hallway measuring 54 m in length showed that, in terms of the time required to complete collection of access point (AP) signals, static sampling (SS; a traditional procedure for collecting Wi-Fi signals) took at least 2 h, whereas MS and SMS took approximately 150 and 300 s, respectively. Notably, AP signals obtained through MS and SMS were comparable to those obtained through SS in terms of the distribution of received signal strength indicator (RSSI) values and positioning accuracy. Therefore, MS and SMS are recommended instead of SS as signal sampling procedures for indoor positioning algorithms.
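A minimal sketch of accelerometer-based step detection of the kind QRFC relies on appears below; the sampling rate, threshold, and minimum step spacing are illustrative assumptions, not the paper's tuned values.

```python
# Hedged sketch: peak-based step detection on raw accelerometer data, used to
# trigger Wi-Fi scans while walking a survey path. Parameters are illustrative.
import numpy as np

def detect_steps(acc_xyz: np.ndarray, fs: float = 50.0,
                 threshold: float = 11.0, min_gap_s: float = 0.3) -> list:
    """Return sample indices of detected steps from (N, 3) accelerometer data."""
    mag = np.linalg.norm(acc_xyz, axis=1)       # remove orientation dependence
    steps, last = [], -int(min_gap_s * fs)
    for i in range(1, len(mag) - 1):
        # local maximum above threshold, far enough from the previous step
        if mag[i] > threshold and mag[i] >= mag[i - 1] and mag[i] > mag[i + 1] \
                and i - last >= min_gap_s * fs:
            steps.append(i)
            last = i
    return steps

# Each detected step advances the assumed position along the survey path, and
# the concurrently scanned AP RSSI values are tagged with that position.
```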
Implementation of Wi-Fi Signal Sampling on an Android Smartphone for Indoor Positioning Systems
Liu, Chun
2017-01-01
Collecting and maintaining radio fingerprints for wireless indoor positioning systems involves considerable time and labor. We have proposed the quick radio fingerprint collection (QRFC) algorithm, which employs the built-in accelerometer of Android smartphones for step detection to assist in collecting radio fingerprints. In the present study, we divided the algorithm into moving sampling (MS) and stepped MS (SMS), and we describe the implementation of both algorithms and their comparison. Technical details and common errors concerning the use of Android smartphones to collect Wi-Fi radio beacons were surveyed and discussed. The results of signal sampling experiments performed in a hallway measuring 54 m in length showed that, in terms of the time required to complete collection of access point (AP) signals, static sampling (SS; a traditional procedure for collecting Wi-Fi signals) took at least 2 h, whereas MS and SMS took approximately 150 and 300 s, respectively. Notably, AP signals obtained through MS and SMS were comparable to those obtained through SS in terms of the distribution of received signal strength indicator (RSSI) values and positioning accuracy. Therefore, MS and SMS are recommended instead of SS as signal sampling procedures for indoor positioning algorithms. PMID:29267234
Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods
Grossman, Mark W.; George, William A.
1987-01-01
A process for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method involves dissolving a precise amount of HgO, corresponding to the pre-determined amount of Hg desired, in an electrolyte solution of glacial acetic acid and H₂O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg₂Cl₂. The method involves dissolving a precise amount of Hg₂Cl₂ in an electrolyte solution of concentrated HCl and H₂O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required pre-determined quantity of Hg.
Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods
Grossman, M.W.; George, W.A.
1987-07-07
A process is described for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method involves dissolving a precise amount of HgO, corresponding to the pre-determined amount of Hg desired, in an electrolyte solution of glacial acetic acid and H₂O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg₂Cl₂. The method involves dissolving a precise amount of Hg₂Cl₂ in an electrolyte solution of concentrated HCl and H₂O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required pre-determined quantity of Hg. 1 fig.
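The precision of the plated amount follows from Faraday's law: the delivered charge fixes the moles of mercury reduced. A short sketch, assuming 2 electrons per Hg atom for the mercuric route (from HgO) and 1 electron per Hg atom for the mercurous route (Hg₂²⁺ from Hg₂Cl₂); the patents themselves do not state this calculation.

```python
# Hedged sketch of the underlying electrochemistry: by Faraday's law the
# charge passed determines the mass of Hg plated onto the cathode.
F = 96485.0        # C/mol, Faraday constant
M_HG = 200.59      # g/mol, molar mass of mercury

def charge_for_hg(mass_g: float, electrons_per_atom: int) -> float:
    """Coulombs required to plate mass_g of Hg onto the cathode."""
    return mass_g / M_HG * electrons_per_atom * F

print(charge_for_hg(0.010, 2))  # ~9.6 C for 10 mg Hg via the mercuric bath
print(charge_for_hg(0.010, 1))  # ~4.8 C via the mercurous bath
```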
Wastewater Biosolid Composting Optimization Based on UV-VNIR Spectroscopy Monitoring
Temporal-Lara, Beatriz; Melendez-Pastor, Ignacio; Gómez, Ignacio; Navarro-Pedreño, Jose
2016-01-01
Conventional wastewater treatment generates large amounts of organic matter–rich sludge that requires adequate treatment to avoid public health and environmental problems. The mixture of wastewater sludge and some bulking agents produces a biosolid to be composted at adequate composting facilities. The composting process is chemically and microbiologically complex and requires an adequate aeration of the biosolid (e.g., with a turner machine) for proper maturation of the compost. Adequate (near) real-time monitoring of the compost maturity process is highly difficult and the operation of composting facilities is not as automatized as other industrial processes. Spectroscopic analysis of compost samples has been successfully employed for compost maturity assessment but the preparation of the solid compost samples is difficult and time-consuming. This manuscript presents a methodology based on a combination of a less time-consuming compost sample preparation and ultraviolet, visible and short-wave near-infrared spectroscopy. Spectroscopic measurements were performed with liquid compost extract instead of solid compost samples. Partial least square (PLS) models were developed to quantify chemical fractions commonly employed for compost maturity assessment. Effective regression models were obtained for total organic matter (residual predictive deviation—RPD = 2.68), humification ratio (RPD = 2.23), total exchangeable carbon (RPD = 2.07) and total organic carbon (RPD = 1.66) with a modular and cost-effective visible and near infrared (VNIR) spectroradiometer. This combination of a less time-consuming compost sample preparation with a versatile sensor system provides an easy-to-implement, efficient and cost-effective protocol for compost maturity assessment and near-real-time monitoring. PMID:27854280
Wastewater Biosolid Composting Optimization Based on UV-VNIR Spectroscopy Monitoring.
Temporal-Lara, Beatriz; Melendez-Pastor, Ignacio; Gómez, Ignacio; Navarro-Pedreño, Jose
2016-11-15
Conventional wastewater treatment generates large amounts of organic matter-rich sludge that requires adequate treatment to avoid public health and environmental problems. The mixture of wastewater sludge and some bulking agents produces a biosolid to be composted at adequate composting facilities. The composting process is chemically and microbiologically complex and requires an adequate aeration of the biosolid (e.g., with a turner machine) for proper maturation of the compost. Adequate (near) real-time monitoring of the compost maturity process is highly difficult and the operation of composting facilities is not as automatized as other industrial processes. Spectroscopic analysis of compost samples has been successfully employed for compost maturity assessment but the preparation of the solid compost samples is difficult and time-consuming. This manuscript presents a methodology based on a combination of a less time-consuming compost sample preparation and ultraviolet, visible and short-wave near-infrared spectroscopy. Spectroscopic measurements were performed with liquid compost extract instead of solid compost samples. Partial least square (PLS) models were developed to quantify chemical fractions commonly employed for compost maturity assessment. Effective regression models were obtained for total organic matter (residual predictive deviation-RPD = 2.68), humification ratio (RPD = 2.23), total exchangeable carbon (RPD = 2.07) and total organic carbon (RPD = 1.66) with a modular and cost-effective visible and near infrared (VNIR) spectroradiometer. This combination of a less time-consuming compost sample preparation with a versatile sensor system provides an easy-to-implement, efficient and cost-effective protocol for compost maturity assessment and near-real-time monitoring.
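The chemometrics pipeline described above (PLS regression of a compost property on spectra, scored by RPD, the standard deviation of the reference values divided by the RMSEP of prediction) can be sketched as follows. The spectra here are synthetic stand-ins for the UV-VNIR measurements, and the number of PLS components is an arbitrary choice.

```python
# Hedged sketch: PLS regression plus RPD scoring on synthetic "spectra".
# An RPD above ~2 is commonly read as a usable calibration, as in the
# abstract's total-organic-matter and humification-ratio models.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
spectra = rng.normal(size=(120, 300))                 # 120 samples x 300 bands
toc = spectra[:, 40:60].mean(axis=1) * 10 + 25        # synthetic organic carbon

X_tr, X_te, y_tr, y_te = train_test_split(spectra, toc, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()

rmsep = np.sqrt(np.mean((pred - y_te) ** 2))
rpd = y_te.std() / rmsep                              # residual predictive deviation
print(f"RMSEP={rmsep:.3f}, RPD={rpd:.2f}")
```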
Model-based Bayesian inference for ROC data analysis
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Bae, K. Ty
2013-03-01
This paper presents a study of model-based Bayesian inference applied to receiver operating characteristic (ROC) data. The model is a simple version of a general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a zero-one covariate to express the two binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by the Markov chain Monte Carlo (MCMC) method, carried out with Bayesian inference Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach treats model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, the posterior distributions of the parameters, and hence the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied, and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets, with 20 simulations from each data set, is used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures illustrate our practice within the framework of model-based Bayesian inference using the MCMC method.
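A hedged sketch of the posterior machinery: a random-walk Metropolis sampler for the binormal ROC model, with the non-diseased scores standardized to N(0,1) by the usual binormal convention. The paper itself uses BUGS with adaptive rejection sampling; the simpler sampler below only illustrates MCMC estimation of the binormal parameters and the induced AUC.

```python
# Hedged sketch: Metropolis sampling for the binormal ROC model, where
# non-diseased scores ~ N(0,1) (fixed by standardization) and diseased
# scores ~ N(mu, sigma). AUC = Phi(mu / sqrt(1 + sigma^2)).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
y1 = rng.normal(1.5, 1.2, 200)            # simulated diseased scores

def log_post(mu, log_sig):
    sig = np.exp(log_sig)
    lp = norm.logpdf(mu, 0, 10) + norm.logpdf(log_sig, 0, 2)   # weak priors
    return lp + norm.logpdf(y1, mu, sig).sum()

theta, chain = np.array([0.0, 0.0]), []
lp = log_post(*theta)
for _ in range(5000):
    prop = theta + rng.normal(0, 0.1, 2)   # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)

mu, log_sig = np.mean(chain[2500:], axis=0)   # discard burn-in
print("posterior AUC ~", norm.cdf(mu / np.sqrt(1 + np.exp(log_sig) ** 2)))
```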
Rezaiimofrad, M; Rangraz Jeddi, F; Azarbad, Z
2013-03-01
Bread is a valuable source of proteins, minerals and calories. Baking soda impairs the absorption and digestion of bread, and the excess salt used in bread production also causes various diseases. This study was conducted to determine the amounts of soda and salt used in bakeries. A cross-sectional descriptive study was carried out on 50 bakeries in the district during 2009. 400 samples were collected randomly in four steps. A pH < 6.2, indicative of no use of baking soda in the bread, and a salt content of less than 2 g/100 g were considered the reference standards. A pH less than 6.2 was seen in 91.5% of samples; the data were analyzed using random-effects analysis. In 64.5% of samples, the amount of salt exceeded the standard. The amount of baking soda used in the bakeries was not high; bakers either lacked sufficient knowledge about the appropriate amount of salt or had other reasons for exceeding it. Drastic corrective measures are recommended.
Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III
1996-01-01
Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.
29 CFR 4.142 - Contracts in an indefinite amount.
Code of Federal Regulations, 2010 CFR
2010-07-01
... McNamara-O'Hara Service Contract Act Determining Amount of Contract § 4.142 Contracts in an indefinite amount. (a) Every contract subject to this Act which is indefinite in amount is required to contain the....), a case arising under the Walsh-Healey Public Contracts Act. Such a contract, which may be in the...
24 CFR 570.705 - Loan requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... year in an aggregate amount equal to 50 percent of the amount approved in an appropriation act for that... with the private sector financing of the debt obligations. Such costs are payable out of the guaranteed... the public entity would thereby exceed an amount equal to five times the amount of the most recent...
17 CFR 210.6-03 - Special rules of general application to registered investment companies.
Code of Federal Regulations, 2011 CFR
2011-04-01
... of assets. The balance sheets of registered investment companies, other than issuers of face-amount.... As required by section 28(b) of the Investment Company Act of 1940, qualified assets of face-amount... outstanding face-amount certificates. If the nature of the qualifying assets and amount thereof are not...
Lin, Meng-Hsien; Anderson, Jonathan; Pinnaratip, Rattapol; Meng, Hao; Konst, Shari; DeRouin, Andrew J.; Rajachar, Rupak
2015-01-01
The degradation behavior of a tissue adhesive is critical to its ability to repair a wound while minimizing prolonged inflammatory response. Traditional degradation tests can be expensive to perform, as they require large numbers of samples. The potential for using magnetoelastic resonant sensors to track bioadhesive degradation behavior was investigated. Specifically, a biomimetic poly(ethylene glycol)- (PEG-) based adhesive was coated onto magnetoelastic (ME) sensor strips. Adhesive-coated samples were submerged in solutions buffered at multiple pH levels (5.7, 7.4 and 10.0) at body temperature (37°C), and the degradation behavior of the adhesive was tracked wirelessly by monitoring the changes in the resonant amplitude of the sensors for over 80 days. Adhesive incubated at pH 7.4 degraded over 75 days, which matched previously published data for the bulk degradation behavior of the adhesive while using significantly less material (~10³ times less). Adhesive incubated at pH 10.0 degraded within 25 days, while samples incubated at pH 5.7 did not completely degrade even after 80 days of incubation. As expected, the rate of degradation increased with increasing pH, since the rate of ester bond hydrolysis is higher under basic conditions. Because it requires significantly fewer samples than traditional methods, the ME sensing technology is highly attractive for fully characterizing the degradation behavior of tissue adhesives across a wide range of physiological conditions. PMID:26087077
da Cunha Santos, G; Saieg, M A; Troncone, G; Zeppa, P
2018-04-01
Minimally invasive procedures such as endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) must yield not only good quality and quantity of material for morphological assessment, but also an adequate sample for analysis of molecular markers to guide patients to appropriate targeted therapies. In this context, cytopathologists worldwide should be familiar with the minimum requirements for referring cytological samples for testing. The present manuscript is a review with a comprehensive description of the content of the workshop entitled Cytological preparations for molecular analysis: pre-analytical issues for EBUS TBNA, presented at the 40th European Congress of Cytopathology in Liverpool, UK. The present review emphasises the advantages and limitations of the different types of cytology substrates used for molecular analysis, such as archival smears, liquid-based preparations, archival cytospin preparations and FTA (Flinders Technology Associates) cards, as well as their technical requirements/features. These various types of cytological specimens can be successfully used for an extensive array of molecular studies, but the quality and quantity of extracted nucleic acids rely directly on adequate pre-analytical assessment of those samples. In this setting, cytopathologists must not only be familiar with the different types of specimens and associated technical procedures, but also correctly handle the material provided by minimally invasive procedures, ensuring that there is a sufficient amount of material for a precise diagnosis and correct management of the patient through personalised care. © 2018 John Wiley & Sons Ltd.
Antioxidant activity evaluation of new dosage forms as vehicles for dehydrated vegetables.
Romero-de Soto, María Dolores; García-Salas, Patricia; Fernández-Arroyo, Salvador; Segura-Carretero, Antonio; Fernández-Campos, Francisco; Clares-Naveros, Beatriz
2013-06-01
A dehydrated vegetable mixture loaded into four pharmaceutical dosage forms (powder, effervescent granulate, sugar granulate and gumdrops) was investigated for its antioxidant capacity using the 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid) radical scavenging capacity assay, the oxygen radical absorbance capacity assay and the ferric reducing antioxidant potential assay. Total phenolic content of the dehydrated vegetable powder mixture was also measured by the Folin-Ciocalteu method, so as to evaluate its contribution to the total antioxidant function. The effect of different temperatures on the stability of these systems after 90 days of storage was also evaluated. These formulations presented strong antioxidant properties and high phenolic content (279 mg gallic acid equivalent/g of sample) and thus could be potential rich sources of natural antioxidants. Antioxidant properties differed significantly among the selected formulations (p < 0.05). Generally, the losses were lower in samples stored under refrigeration. To interpret the antioxidant properties, a kinetic approach was performed. Degradation kinetics for the phenolic content and antioxidant capacity followed a zero-order function. Effervescent granulate was the formulation which underwent the fastest degradation; in contrast, sugar granulate and gumdrops degraded much more slowly. The time required to halve the initial amount of phenolic compounds was 589 ± 45 days for samples stored at 4 °C, and 312 ± 16 days for samples stored at room temperature. These dosage forms are a new and innovative approach to vegetable intake in populations with special requirements, providing an improvement in the administration of vegetables and fruits.
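The zero-order kinetics reported above imply a simple relationship between half-life and rate constant, t_half = C0/(2k); the sketch below reproduces the arithmetic using the values quoted in the abstract (the 90-day prediction is illustrative).

```python
# Zero-order degradation: C(t) = C0 - k*t, so t_half = C0 / (2*k).
C0 = 279.0  # mg gallic acid equivalent per g (reported)
for label, t_half in [("4 °C", 589.0), ("room temperature", 312.0)]:
    k = C0 / (2.0 * t_half)   # rate constant, mg GAE/g per day
    c90 = C0 - k * 90.0       # predicted content after the 90-day storage study
    print(f"{label}: k = {k:.3f} mg GAE/g/day, C(90 d) = {c90:.0f} mg GAE/g")
```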
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carvalho, M.L.; Amorim, P.; Marques, M.I.M.
1997-04-01
Fucus vesiculosus L. seaweeds from three estuarine stations were analyzed by X-ray fluorescence, providing results for the concentration of total K, Ca, Ti, Mn, Fe, Co, Ni, Cu, Zn, As, Br, Sr, and Pb. Four different structures of the algae (base, stipe, reproductive organs, and growing tips) were analyzed to study the differential accumulation of heavy metals by different parts of Fucus. Some elements (e.g., Cu and Fe) are preferentially accumulated in the base of the algae, whereas others (e.g., As) exhibit higher concentrations in the reproductive organs and growing tips. The pattern of accumulation in different structures is similar for Cu, Zn, and Pb, but for other metals there is considerable variability in accumulation between parts of the plant. This is important in determining which structures of the plant should be used for biomonitoring. For samples collected at stations subject to differing metal loads, the relative elemental composition is approximately constant, notwithstanding significant variation in absolute values. The proportion of metals in Fucus is similar to that found in other estuaries, where metal concentrations are significantly lower. Energy-dispersive X-ray fluorescence has been shown to be a suitable technique for multielement analysis in this type of sample. No chemical pretreatment is required, minimizing sample contamination. The small amount of sample required and the wide range of elements that can be detected simultaneously make energy-dispersive X-ray fluorescence a valuable tool for pollution studies.
Genotyping of Plant and Animal Samples without Prior DNA Purification
Chum, Pak Y.; Haimes, Josh D.; André, Chas P.; Kuusisto, Pia K.; Kelley, Melissa L.
2012-01-01
The Direct PCR approach facilitates PCR amplification directly from small amounts of unpurified samples, and is demonstrated here for several plant and animal tissues (Figure 1). Direct PCR is based on specially engineered Thermo Scientific Phusion and Phire DNA Polymerases, which include a double-stranded DNA binding domain that gives them unique properties such as high tolerance of inhibitors. PCR-based target DNA detection has numerous applications in plant research, including plant genotype analysis and verification of transgenes. PCR from plant tissues traditionally involves an initial DNA isolation step, which may require expensive or toxic reagents. The process is time consuming and increases the risk of cross contamination [1, 2]. Conversely, by using the Thermo Scientific Phire Plant Direct PCR Kit the target DNA can be easily detected without prior DNA extraction. In the model demonstrated here, an example of derived cleaved amplified polymorphic sequence (dCAPS) analysis [3, 4] is performed directly on Arabidopsis plant leaves. dCAPS genotyping assays can be used to identify single nucleotide polymorphisms (SNPs) by SNP allele-specific restriction endonuclease digestion [3]. Some plant samples tend to be more challenging when using Direct PCR methods as they contain components that interfere with PCR, such as phenolic compounds. In these cases, an additional step to remove the compounds is traditionally required [2, 5]. Here, this problem is overcome by using a quick and easy dilution protocol followed by Direct PCR amplification (Figure 1). Fifteen-year-old oak leaves are used as a model for challenging plants as the specimen contains high amounts of phenolic compounds including tannins. Gene transfer into mice is broadly used to study the roles of genes in development, physiology and human disease. The use of these animals requires screening for the presence of the transgene, usually with PCR. Traditionally, this involves a time consuming DNA isolation step, during which DNA for PCR analysis is purified from ear, tail or toe tissues [6, 7]. However, with the Thermo Scientific Phire Animal Tissue Direct PCR Kit transgenic mice can be genotyped without prior DNA purification. In this protocol transgenic mouse genotyping is achieved directly from mouse ear tissues, as demonstrated here for a challenging example where only one primer set is used for amplification of two fragments differing greatly in size. PMID:23051689
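The dCAPS logic described above reduces to checking whether an engineered restriction site is present in the amplicon for one SNP allele but not the other. The toy sketch below illustrates this; the enzyme and sequences are hypothetical, not those used in the protocol.

```python
# dCAPS sketch: the mismatch primer creates a restriction site for one allele
# only, so digestion distinguishes genotypes. Enzyme/sequences are examples.
SITE = "GAATTC"  # EcoRI recognition sequence, used here purely for illustration

def is_cut(amplicon: str) -> bool:
    """True if the amplicon carries the engineered restriction site."""
    return SITE in amplicon

allele_a = "ACGTGGAATTCTTGCA"  # site present -> digested -> two bands on a gel
allele_b = "ACGTGGAATCCTTGCA"  # SNP destroys the site -> undigested -> one band
print(is_cut(allele_a), is_cut(allele_b))  # True False
```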
A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging
Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.
2012-01-01
Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine these two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, “CS+GRAPPA,” to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets, reconstructed each subset using CS, and averaged the results to get a final CS k-space reconstruction. We used both a standard CS, and an edge and joint-sparsity guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring. However, the blurring was effectively minimized using our new edge and joint-sparsity guided CS with two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality as compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique allowed the same Case-PDM scores as standard GRAPPA, using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065
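A minimal sketch of the decomposition step is given below: a regular (equidistant) set of phase-encode lines is split into random subsets whose individual reconstructions are averaged. The CS solver itself is stubbed out; this shows only the structure of the method, not the authors' implementation.

```python
# Sketch of the 'CS+GRAPPA' decomposition: split equidistant samples into
# random subsets, reconstruct each, then average. cs_reconstruct() is a stub.
import numpy as np

rng = np.random.default_rng(0)
acquired = np.arange(0, 256, 2)                         # equidistant lines (R = 2)
subsets = np.array_split(rng.permutation(acquired), 2)  # two random subsets

def cs_reconstruct(lines):
    # Placeholder: a real CS solver (e.g. l1-wavelet) would reconstruct from
    # this subset's k-space data; here we return its sampling mask instead.
    mask = np.zeros(256)
    mask[lines] = 1.0
    return mask

recon = np.mean([cs_reconstruct(s) for s in subsets], axis=0)  # averaged result
```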
NASA Astrophysics Data System (ADS)
Resano, Martín; Flórez, María del Rosario; Queralt, Ignasi; Marguí, Eva
2015-03-01
This work investigates the potential of high-resolution continuum source graphite furnace atomic absorption spectrometry for the direct determination of Pd, Pt and Rh in two samples of very different nature. While analysis of active pharmaceutical ingredients is straightforward and matrix effects can be minimized to the point that calibration can be carried out against aqueous standard solutions, the analysis of used automobile catalysts is more challenging, requiring the addition of a chemical modifier (NH4F·HF) to help release the analytes, a more vigorous temperature program and the use of a solid standard (CRM ERM®-EB504) for calibration. However, in both cases it was possible to obtain accurate results and precision values typically better than 10% RSD in a fast and simple way, while only two determinations are needed for the three analytes, since Pt and Rh can be simultaneously monitored in both types of samples. Overall, the methods proposed seem suited for the determination of these analytes in such types of samples, offering a greener and faster alternative that circumvents the traditional problems associated with sample digestion, requiring only a small amount of sample (0.05 mg per replicate for catalysts, and a few milligrams for the pharmaceuticals) and providing sufficient sensitivity to easily comply with regulations. The LODs achieved were 6.5 μg g⁻¹ (Pd), 8.3 μg g⁻¹ (Pt) and 9.3 μg g⁻¹ (Rh) for catalysts, which decreased to 0.08 μg g⁻¹ (Pd), 0.15 μg g⁻¹ (Pt) and 0.10 μg g⁻¹ (Rh) for pharmaceuticals.
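The abstract does not state how the detection limits were computed; assuming the usual 3-sigma criterion, the calculation looks like the sketch below (the blank signals and calibration slope are hypothetical).

```python
# Sketch: 3-sigma limit of detection, LOD = 3 * SD(blank) / calibration slope.
import numpy as np

blank_signals = np.array([0.0012, 0.0009, 0.0011, 0.0014, 0.0010])  # hypothetical
slope = 0.0004  # hypothetical slope, integrated absorbance per (ug/g)

lod = 3 * np.std(blank_signals, ddof=1) / slope
print(f"LOD ~ {lod:.1f} ug/g")
```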
NASA Astrophysics Data System (ADS)
Wang, Chia-Wei; Chen, Wen-Tsen; Chang, Huan-Tsung
2014-07-01
Quantification of monosaccharides and disaccharides in five honey samples through surface-assisted laser desorption/ionization mass spectrometry (SALDI-MS), using HgTe nanostructures as the matrix and sucralose as an internal standard, has been demonstrated. Under optimal conditions (1× HgTe nanostructure, 0.2 mM ammonium citrate at pH 9.0), the SALDI-MS approach allows detection of fructose and maltose at concentrations down to 15 and 10 μM, respectively. Without tedious sample pretreatment and separation, the SALDI-MS approach allows determination of the contents of monosaccharides and disaccharides in honey samples within 30 min, with good reproducibility (relative standard deviation <15%). Whereas only sodium adducts were detected for standard saccharides, sodium and potassium adducts in differing amounts were found among the various samples, reflecting the different amounts of sodium and potassium ions in the honey samples. The SALDI-MS data reveal that the contents of monosaccharides and disaccharides in the various honey samples depend on their nectar sources. In addition to the abundant monosaccharides and disaccharides, oligosaccharides in the m/z range 650-2700 are detected only in pomelo honey. Having the advantages of simplicity, rapidity, and reproducibility, this SALDI-MS approach holds great potential for the analysis of honey samples.
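The sodium and potassium adducts mentioned above appear at predictable m/z values; the short sketch below computes them from monoisotopic masses (electron mass neglected). This is a standard calculation rather than anything specific to this paper.

```python
# Expected m/z of [M+Na]+ and [M+K]+ for the saccharides discussed above.
M_HEXOSE = 180.0634    # fructose/glucose, C6H12O6 (monoisotopic)
M_DISACCH = 342.1162   # maltose/sucrose, C12H22O11
NA, K = 22.9898, 38.9637

for name, m in [("hexose", M_HEXOSE), ("disaccharide", M_DISACCH)]:
    print(f"[{name}+Na]+ = {m + NA:.4f}, [{name}+K]+ = {m + K:.4f}")
```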
Microwave-assisted synthesis of carbon nanotubes from tannin, lignin, and derivatives
Viswanathan, Tito
2014-06-17
A method of synthesizing carbon nanotubes. In one embodiment, the method includes the steps of: (a) dissolving a first amount of a first transition-metal salt and a second amount of a second transition-metal salt in water to form a solution; (b) adding a third amount of tannin to the solution to form a mixture; (c) heating the mixture to a first temperature for a first duration of time to form a sample; and (d) subjecting the sample to a microwave radiation for a second duration of time effective to produce a plurality of carbon nanotubes.
Method and apparatus for sampling atmospheric mercury
Trujillo, Patricio E.; Campbell, Evan E.; Eutsler, Bernard C.
1976-01-20
A method of simultaneously sampling particulate mercury, organic mercurial vapors, and metallic mercury vapor in the working and occupational environment and determining the amount of mercury derived from each such source in the sampled air. A known volume of air is passed through a sampling tube containing a filter for particulate mercury collection, a first adsorber for the selective adsorption of organic mercurial vapors, and a second adsorber for the adsorption of metallic mercury vapor. Carbon black molecular sieves are particularly useful as the selective adsorber for organic mercurial vapors. The amount of mercury adsorbed or collected in each section of the sampling tube is readily quantitatively determined by flameless atomic absorption spectrophotometry.
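Converting the mercury collected in each section of such a sampling tube into an air concentration is simple arithmetic, sketched below with hypothetical numbers.

```python
# Air concentration per mercury species: C = mass collected / air volume sampled.
collected_ng = {
    "particulate Hg (filter)": 12.0,
    "organic Hg (adsorber 1)": 35.0,
    "metallic Hg (adsorber 2)": 90.0,
}  # hypothetical amounts determined by flameless atomic absorption
air_volume_m3 = 0.240  # known sampled air volume

for species, ng in collected_ng.items():
    print(f"{species}: {ng / air_volume_m3:.0f} ng/m3")
```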
Approaches to advance scientific understanding of macrosystems ecology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levy, Ofir; Ball, Becky; Bond-Lamberty, Benjamin
Macrosystem ecological studies inherently investigate processes that interact across multiple spatial and temporal scales, requiring intensive sampling and massive amounts of data from diverse sources to incorporate complex cross-scale and hierarchical interactions. Inherent challenges associated with these characteristics include high computational demands, data standardization and assimilation, identification of important processes and scales without prior knowledge, and the need for large, cross-disciplinary research teams that conduct long-term studies. Therefore, macrosystem ecology studies must utilize a unique set of approaches that are capable of encompassing these methodological characteristics and associated challenges. Several case studies demonstrate innovative methods used in current macrosystem ecology studies.
A new method for evaluating the dissolution of orodispersible films.
Xia, Yiran; Chen, Fang; Zhang, Huiping; Luo, Chunlin
2015-05-01
The aim of this research was to develop and assess a new dissolution apparatus for orodispersible films (ODFs). The new apparatus was based on a flow-through cell design which requires only a limited amount of dissolution medium and can automatically collect samples at short time intervals. Compared with the dissolution method in the Chinese Pharmacopeia, our method simulated the flow conditions of the oral cavity and yielded reproducible dissolution data and a remarkably discriminating capability. Therefore, we concluded that the proposed dissolution method is particularly suitable for evaluating the dissolution of ODFs and should also be applicable to other fast-dissolving solid dosage forms.
Hamli, Hadi; Idris, Mohd Hanafi; Rajaee, Amy Halimah; Kamal, Abu Hena Mustafa
2015-01-01
A study of the reproductive cycle of the hard clam, Meretrix lyrata, was documented based on histological observation and Gonad Index (GI). Samples were taken from estuarine waters of the Buntal River in Sarawak, Malaysia. The gonad of M. lyrata started to develop in September 2013. Gametogenesis continued to develop until the maturation and spawning stage from February to April 2014. The GI pattern for a one-year cycle showed a significant correlation with chlorophyll a. The corresponding GI with chlorophyll a suggested that the development of the reproductive cycle of M. lyrata required a high amount of food to increase gametogenesis. PMID:26868710
Validation Test Report for the Automated Optical Processing System (AOPS) Version 4.8
2013-06-28
be familiar with UNIX; BASH shell programming; and remote sensing, particularly regarding computer processing of satellite data. The system memory ... and storage requirements are difficult to gauge. The amount of memory needed is dependent upon the amount and type of satellite data you wish to ... process; the larger the area, the larger the memory requirement. For example, the entire Atlantic Ocean will require more processing power than the
29 CFR 453.14 - The meaning of “funds.”
Code of Federal Regulations, 2010 CFR
2010-07-01
... STANDARDS GENERAL STATEMENT CONCERNING THE BONDING REQUIREMENTS OF THE LABOR-MANAGEMENT REPORTING AND DISCLOSURE ACT OF 1959 Amount of Bonds § 453.14 The meaning of “funds.” While the protection of bonds... preceding fiscal year would be in amounts sufficient to meet the statutory requirement. Of course, in...
40 CFR 60.2735 - Is there a minimum amount of monitoring data I must obtain?
Code of Federal Regulations, 2012 CFR
2012-07-01
... monitoring malfunctions, associated repairs, and required quality assurance or quality control activities for... periods, or required monitoring system quality assurance or control activities in calculations used to... 40 Protection of Environment 7 2012-07-01 2012-07-01 false Is there a minimum amount of monitoring...
40 CFR 60.2735 - Is there a minimum amount of monitoring data I must obtain?
Code of Federal Regulations, 2011 CFR
2011-07-01
....2770(o) of this part), and required monitoring system quality assurance or quality control activities... periods, and required monitoring system quality assurance or quality control activities including, as... 40 Protection of Environment 6 2011-07-01 2011-07-01 false Is there a minimum amount of monitoring...
Code of Federal Regulations, 2011 CFR
2011-04-01
... equal to the annual premium for flood insurance required by § 2700.101(a)(4) (the lender shall pay the homeowner's flood insurance premium for that year to the extent it collects such an amount); and (4) An amount equal to the annual mortgage insurance premium required under § 2700.315. (b) Subsequent to the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-12
.... Section 105 of AREERA amended the Smith-Lever Act to require that a specified amount of agricultural... Hatch Act and Smith-Lever Act to require that a specified amount of agricultural research and extension... Smith-Lever Act funds on multistate extension activities and 25 percent on integrated research and...
Energy Storage Requirements for Achieving 50% Penetration of Solar Photovoltaic Energy in California
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denholm, Paul; Margolis, Robert
2016-09-01
We estimate the storage required to enable PV penetration up to 50% in California (with renewable penetration over 66%), and we quantify the complex relationships among storage, PV penetration, grid flexibility, and PV costs due to increased curtailment. We find that the storage needed depends strongly on the amount of other flexibility resources deployed. With very low-cost PV (three cents per kilowatt-hour) and a highly flexible electric power system, about 19 gigawatts of energy storage could enable 50% PV penetration with a marginal net PV levelized cost of energy (LCOE) comparable to the variable costs of future combined-cycle gas generators under carbon constraints. This system requires extensive use of flexible generation, transmission, demand response, and electrifying one quarter of the vehicle fleet in California with largely optimized charging. A less flexible system, or more expensive PV, would require significantly greater amounts of storage. The amount of storage needed to support very large amounts of PV might fit within a least-cost framework driven by declining storage costs and reduced storage-duration needs due to high PV penetration.
Towards a routine application of Top-Down approaches for label-free discovery workflows.
Schmit, Pierre-Olivier; Vialaret, Jerome; Wessels, Hans J C T; van Gool, Alain J; Lehmann, Sylvain; Gabelle, Audrey; Wood, Jason; Bern, Marshall; Paape, Rainer; Suckau, Detlev; Kruppa, Gary; Hirtz, Christophe
2018-03-20
Thanks to proteomics investigations, our vision of the role of different protein isoforms in the pathophysiology of diseases has largely evolved. The idea that protein biomarkers like tau, amyloid peptides, ApoE, cystatin, or neurogranin are represented in body fluids as single species is obviously over-simplified, as most proteins are present in different isoforms and subjected to numerous processing and post-translational modifications. Measuring the intact mass of proteins by MS has the advantage of providing information on the presence and relative amount of the different proteoforms. Such Top-Down approaches typically require a high degree of sample pre-fractionation to allow the MS system to deliver optimal performance in terms of dynamic range, mass accuracy and resolution. In clinical studies, however, the requirements for pre-analytical robustness and sample sizes large enough for statistical power restrict the routine use of a high degree of sample pre-fractionation. In this study, we have investigated the capacity of current-generation Ultra-High Resolution Q-Tof systems to deal with highly complex intact protein samples and have evaluated the approach on a cohort of patients suffering from neurodegenerative diseases. Statistical analysis has shown that several proteoforms can be used to distinguish Alzheimer disease patients from patients suffering from other neurodegenerative diseases. Top-down approaches have an extremely high biological relevance, especially when it comes to biomarker discovery, but the necessary pre-fractionation constraints are not easily compatible with the robustness requirements and the size of clinical sample cohorts. We have demonstrated that intact protein profiling studies can be run on UHR-Q-ToF systems with limited pre-fractionation. The proteoforms identified as candidate biomarkers in the proof-of-concept study are derived from proteins known to play a role in the pathophysiology of Alzheimer disease. Copyright © 2017 Elsevier B.V. All rights reserved.
Investigation of Effect Additive Phase Change Materials on the Thermal Conductivity
NASA Astrophysics Data System (ADS)
Nakielska, Magdalena; Chalamoński, Mariusz; Pawłowski, Krzysztof
2017-10-01
The aim of worldwide policy is to reduce the amount of consumed energy and conventional fuels. An important branch of the economy that affects the energy balance of a country is the construction industry. In Poland, new limit values regarding energy saving and thermal insulation of buildings have been in force since January 1st, 2017. To meet the requirements of ever more stringent technical and environmental standards, new technological solutions are currently being sought. Among new materials, phase-change materials are being widely introduced into the construction industry, as they increase the amount of heat a building can store. The great thermal inertia of the building provides more stable conditions inside the rooms and allows the use of unconventional sources of energy such as solar energy. Another way to reduce the energy consumption of a building is the use of modern solutions for ventilation systems. An example is the solar chimney, which supports natural ventilation in order to improve the internal comfort of the rooms. Numerous studies are being carried out to determine the optimal construction of solar chimneys in terms of materials and construction parameters. One of the elements of a solar chimney is the absorption plate, which affects the amount of heat accumulated in the construction. To investigate the thermal capacity of the absorption plate, a first research programme has already been planned. The work presents measured heat-transfer coefficients of absorption-plate samples made of cement, aggregate, water, and phase-change material in different volume percentages. The work also presents the methodology and the research process for the phase-change material samples.
Hassan, Siba E; Hajeer, Mohammad Y; Alali, Osama H; Kaddah, Ayham S
2016-06-01
The results of previous studies on the efficacy of self-ligating brackets (SLBs) in controlling canine movement during retraction are inconsistent. Therefore, the current study aimed to compare the effects of new passive SLBs on maxillary canine retraction with sliding mechanics vs conventional ligating brackets (CLBs) tied with metal ligatures. The sample comprised 15 adult patients (4 males, 11 females; 18-24 years) requiring bilateral extraction of maxillary first premolars. The units of randomization were the left and right maxillary canines within the same patient. The two maxillary canines in each patient were randomly assigned to one of the two groups in a simple split-mouth design. The canines in the SLBs group (n = 15) were bracketed with SLBs (Damon Q™), while the canines in the CLBs group (n = 15) were bracketed with conventional brackets (Mini Master Series). Transpalatal bars were used for anchorage. After leveling and alignment, 0.019 × 0.025" stainless steel working archwires were placed. Canines were retracted using nickel-titanium closed-coil springs with a 150 g force. The amount and rate of maxillary canine retraction, canine rotation, and loss of anchorage were measured on study models collected at the beginning of canine retraction (T0) and 12 weeks later (T1). Differences were analyzed using paired-samples t-tests. The differences were statistically significant (p < 0.001). Using Damon Q™ SLBs, the amount and rate of canine retraction were greater, while canine rotation and anchorage loss were less. From a clinical perspective, extraction space closure can be accomplished more effectively using SLBs. Self-ligating brackets gave better results than the CLBs in terms of rate of movement, amount of canine rotation following extraction, and anchorage loss.
NASA Astrophysics Data System (ADS)
Liu, Chang; Wu, Xing; Mao, Jianlin; Liu, Xiaoqin
2017-07-01
In the signal processing domain, there has been growing interest in using acoustic emission (AE) signals instead of vibration signals for fault diagnosis and condition assessment, an approach advocated as effective for identifying fracture, crack or damage. The AE signal has high frequency content, up to several MHz, which avoids interference from other signal sources such as the bearing parts (rolling elements, ring and so on) and other rotating parts of the machine. However, AE signals demand advanced sampling capabilities and the ability to deal with large amounts of sampled data. In this paper, compressive sensing (CS) is introduced as a processing framework, and a compressive feature extraction method is proposed. We use it to extract compressive features from compressively-sensed data directly, and we also prove its energy preservation properties. First, we study AE signals under the CS framework. The sparsity of the AE signal of the rolling bearing is checked, and the observation and reconstruction of the signal are also studied. Second, we present a method of extracting an AE compressive feature (AECF) from compressively-sensed data directly. We demonstrate the energy preservation properties and the processing of the extracted AECF feature. We assess the running state of the bearing using the AECF trend. The AECF trend of the running state of rolling bearings is consistent with the trend of traditional features; thus, the method is an effective way to evaluate the running trend of rolling bearings. The results of the experiments have verified that signal processing and condition assessment based on AECF are simpler, the amount of data required is smaller, and the amount of computation is greatly reduced.
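The energy-preservation property exploited above is the standard compressive-sensing result that a suitably scaled random projection approximately preserves signal energy; the sketch below demonstrates it on a synthetic burst (it is not the authors' AECF pipeline).

```python
# Energy preservation under a Gaussian measurement matrix: ||Phi x||^2 ~ ||x||^2.
import numpy as np

rng = np.random.default_rng(1)
n, m = 4096, 256                              # signal length, measurements
x = rng.standard_normal(n) * np.hanning(n)    # stand-in for an AE burst
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

y = Phi @ x                                   # compressively sensed data
print(np.sum(x**2), np.sum(y**2))             # the two energies are close
```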
9 CFR 381.412 - Reference amounts customarily consumed per eating occasion.
Code of Federal Regulations, 2013 CFR
2013-01-01
... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...
9 CFR 381.412 - Reference amounts customarily consumed per eating occasion.
Code of Federal Regulations, 2012 CFR
2012-01-01
... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...
9 CFR 317.312 - Reference amounts customarily consumed per eating occasion.
Code of Federal Regulations, 2013 CFR
2013-01-01
... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...
9 CFR 317.312 - Reference amounts customarily consumed per eating occasion.
Code of Federal Regulations, 2014 CFR
2014-01-01
... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...
9 CFR 381.412 - Reference amounts customarily consumed per eating occasion.
Code of Federal Regulations, 2014 CFR
2014-01-01
... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...
9 CFR 317.312 - Reference amounts customarily consumed per eating occasion.
Code of Federal Regulations, 2012 CFR
2012-01-01
... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...
NASA Astrophysics Data System (ADS)
ten Veldhuis, Marie-Claire; Schleiss, Marc
2017-04-01
Urban catchments are typically characterised by a more flashy hydrological response than natural catchments. Predicting flow changes associated with urbanisation is not straightforward, as they are influenced by interactions between impervious cover, basin size, drainage connectivity and stormwater management infrastructure. In this study, we present an alternative approach to statistical analysis of hydrological response variability and basin flashiness, based on the distribution of inter-amount times. We analyse inter-amount time distributions of high-resolution streamflow time series for 17 (semi-)urbanised basins in North Carolina, USA, ranging from 13 to 238 km² in size. We show that in the inter-amount-time framework, sampling frequency is tuned to the local variability of the flow pattern, resulting in a different representation and weighting of high and low flow periods in the statistical distribution. This leads to important differences in the way the distribution quantiles, mean, coefficient of variation and skewness vary across scales, and results in lower mean intermittency and improved scaling. Moreover, we show that inter-amount-time distributions can be used to detect regulation effects on flow patterns, identify critical sampling scales and characterise the flashiness of the hydrological response. The possibility to use both the classical approach and the inter-amount-time framework to identify minimum observable scales and analyse flow data opens up interesting areas for future research.
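As a concrete illustration of the framework, the sketch below computes inter-amount times from a synthetic discharge series: instead of sampling the flow at fixed time intervals, it records the time needed to accumulate each successive fixed amount.

```python
# Inter-amount times: time to accumulate each successive fixed flow amount.
import numpy as np

rng = np.random.default_rng(2)
flow = rng.gamma(shape=0.6, scale=2.0, size=5000)  # synthetic flashy series
dt = 1.0                                           # hours per sample
target = flow.sum() / 200                          # fixed inter-amount increment

cum = np.cumsum(flow) * dt
times = np.arange(len(flow)) * dt
crossings = np.interp(np.arange(1, 200) * target, cum, times)
iat = np.diff(crossings)                           # inter-amount times
print(iat.mean(), np.percentile(iat, [10, 50, 90]))
```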
Spicher, G; Peters, J
1991-05-01
The experiments were performed using frosted glass as carrier, with its surface contaminated with whole blood containing Staphylococcus aureus as test organism. At the time of sampling, a heparin preparation was added to the blood to prevent premature coagulation. After addition of the staphylococci, coagulation was initiated by means of a heparin antagonist. 10, 25, 50, 100, and 150 microliters of the blood were homogeneously spread on rectangular test areas of 10 x 20 mm. After the blood had coagulated, each of the test objects was placed for 60 min in 15 ml of the solution (20 degrees C) containing the active ingredient tested. After that, the test objects were removed from the disinfectant and, in order to inactivate any adhering active components, treated with a neutralizing solution of suitable composition. The number of viable germs (colony-forming units) was determined quantitatively. The blood samples were ground together with quartz sand. Aliquots of the diluted suspensions were mixed with molten agar medium. The plates were then incubated at 37 degrees C over a period of 14 days. The relative number of viable germs (N/No) per test object was calculated from the number of colonies. Plotting the microbicidal effects obtained (log N/No) versus the concentration of the active substance (see Figs. 1-3) yielded curves differing in some characteristics, e.g. curvature, slope of the lower curve section (log N/No < -3), and concentration range according to the layer thickness of the contamination. To visualize the reduction of the efficacy of the respective disinfectants caused by blood, the concentrations of active components necessary to achieve a microbicidal effect of log (N/No) = -4 were determined. These concentrations were plotted versus the amounts of blood per test area (Fig. 4). The resulting curve for formaldehyde was slightly U-shaped: with a rising amount of blood, the required concentration slightly decreased in the beginning and increased again from an amount of ca. 100 microliters of blood per test area. For all other active substances, the required concentration increased with the amount of blood used. The curve obtained for ethanol exhibited the lowest slope. The slope of the curves increased in the following order: ethanol, m-cresol, peracetic acid, chloramine T, glutardialdehyde, benzyldimethyldodecylammoniumbromide. The curves for chloramine T and glutardialdehyde nearly paralleled each other. (ABSTRACT TRUNCATED AT 400 WORDS)
Dredging Research Program. Dredge Mooring Study, Recommended Design, Phase 2 Report
1992-05-01
describes the amount of dock space and staging area required (250 ft by 300 ft of dock space), crane requirements (a 50- to 60-ton crane), and time and...including a diver) in 1 week or less (5 days minimum). With the addition of a second crane and second anchor handling vessel, the assembly and installation...
45 CFR 158.607 - Factors HHS uses to determine the amount of penalty.
Code of Federal Regulations, 2012 CFR
2012-10-01
... level of financial and other impacts on affected individuals. (3) Other factors as justice may require. ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Factors HHS uses to determine the amount of... Civil Penalties § 158.607 Factors HHS uses to determine the amount of penalty. In determining the amount...
45 CFR 158.607 - Factors HHS uses to determine the amount of penalty.
Code of Federal Regulations, 2014 CFR
2014-10-01
... level of financial and other impacts on affected individuals. (3) Other factors as justice may require. ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Factors HHS uses to determine the amount of... Civil Penalties § 158.607 Factors HHS uses to determine the amount of penalty. In determining the amount...
45 CFR 158.607 - Factors HHS uses to determine the amount of penalty.
Code of Federal Regulations, 2013 CFR
2013-10-01
... level of financial and other impacts on affected individuals. (3) Other factors as justice may require. ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Factors HHS uses to determine the amount of... Civil Penalties § 158.607 Factors HHS uses to determine the amount of penalty. In determining the amount...
Galli, Joakim; Oelrich, Johan; Taussig, Michael J.; Andreasson, Ulrika; Ortega-Paino, Eva; Landegren, Ulf
2015-01-01
We report the development of a new database of technology services and products for analysis of biobank samples in biomedical research. BARCdb, the Biobanking Analysis Resource Catalogue (http://www.barcdb.org), is a freely available web resource, listing expertise and molecular resource capabilities of research centres and biotechnology companies. The database is designed for researchers who require information on how to make best use of valuable biospecimens from biobanks and other sample collections, focusing on the choice of analytical techniques and the demands they make on the type of samples, pre-analytical sample preparation and amounts needed. BARCdb has been developed as part of the Swedish biobanking infrastructure (BBMRI.se), but now welcomes submissions from service providers throughout Europe. BARCdb can help match resource providers with potential users, stimulating transnational collaborations and ensuring compatibility of results from different labs. It can promote a more optimal use of European resources in general, both with respect to standard and more experimental technologies, as well as for valuable biobank samples. This article describes how information on service and reagent providers of relevant technologies is made available on BARCdb, and how this resource may contribute to strengthening biomedical research in academia and in the biotechnology and pharmaceutical industries. PMID:25336620
Methods of measuring radioactivity in the environment
NASA Astrophysics Data System (ADS)
Isaksson, Mats
In this thesis a variety of sampling methods have been utilised to assess the amount of deposited activity, mainly of 137Cs, from the Chernobyl accident and from the nuclear weapons tests. Starting with the Chernobyl accident in 1986, sampling of air and rain was used to determine the composition and amount of radioactive debris from this accident, brought to southern Sweden by the weather systems. The resulting deposition and its removal from urban areas were then studied through measurements on sewage sludge and water. The main part of the thesis considers methods of determining the amount of radiocaesium in the ground through soil sampling. In connection with soil sampling, a method of optimising the sampling procedure has been developed and tested in the areas of Sweden which have a comparatively high amount of 137Cs from the Chernobyl accident. This method was then used in a survey of the activity in soil in Lund and Skåne, divided between nuclear weapons fallout and fallout from the Chernobyl accident. By comparing the results from this survey with deposition calculated from precipitation measurements, it was found possible to predict the deposition pattern over Skåne for both nuclear weapons fallout and fallout from the Chernobyl accident. In addition, the vertical distribution of 137Cs has been modelled and the temporal variation of the depth distribution has been described.
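The vertical distribution of 137Cs in soil is commonly modelled as an exponential decrease with depth, A(z) = A0·exp(−z/β); the abstract does not say which model the thesis adopts, so the fit below is purely illustrative.

```python
# Fitting an exponential depth profile A(z) = A0 * exp(-z/beta) to soil layers.
import numpy as np

depths = np.array([1.0, 3.0, 5.0, 7.0, 9.0])          # cm, hypothetical midpoints
activity = np.array([120.0, 65.0, 34.0, 18.0, 10.0])  # Bq/kg, hypothetical

b, ln_a0 = np.polyfit(depths, np.log(activity), 1)    # linearized: ln A = ln A0 - z/beta
print(f"beta = {-1.0 / b:.1f} cm, A0 = {np.exp(ln_a0):.0f} Bq/kg")
```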
Penetrator role in Mars sample strategy
NASA Technical Reports Server (NTRS)
Boynton, William; Dwornik, Steve; Eckstrom, William; Roalstad, David A.
1988-01-01
The application of the penetrator to a Mars Return Sample Mission (MRSM) has direct advantages for meeting science objectives and mission safety. Based on engineering data and work currently conducted at Ball Aerospace Systems Division, the concept of penetrators as scientific instruments is entirely practical. The primary use of a penetrator for MRSM would be to optimize the selection of the sample site location and to help in the selection of the actual sample to be returned to Earth. It is recognized that the amount of sample to be returned is very limited; therefore, the selection of the sample site is critical to the success of the mission. The following mission scenario is proposed. The site selection of a sample to be acquired will be performed by science working groups. A decision will be reached and a set of target priorities established, based on data providing geochemical, geophysical and geological information. The first task of a penetrator will be to collect data at up to 4 to 6 possible landing sites. The penetrator can include geophysical, geochemical, geological and engineering instruments to confirm that scientific data requirements at that site will be met. This in situ, near real-time data, collected prior to final targeting of the lander, will ensure that the sample site is both scientifically valuable and reachable within the limits of the capability of the lander.
A multiplexed system for quantitative comparisons of chromatin landscapes
van Galen, Peter; Viny, Aaron D.; Ram, Oren; Ryan, Russell J.H.; Cotton, Matthew J.; Donohue, Laura; Sievers, Cem; Drier, Yotam; Liau, Brian B.; Gillespie, Shawn M.; Carroll, Kaitlin M.; Cross, Michael B.; Levine, Ross L.; Bernstein, Bradley E.
2015-01-01
Genome-wide profiling of histone modifications can provide systematic insight into the regulatory elements and programs engaged in a given cell type. However, conventional chromatin immunoprecipitation and sequencing (ChIP-seq) does not capture quantitative information on histone modification levels, requires large amounts of starting material, and involves tedious processing of each individual sample. Here we address these limitations with a technology that leverages DNA barcoding to profile chromatin quantitatively and in multiplexed format. We concurrently map relative levels of multiple histone modifications across multiple samples, each comprising as few as a thousand cells. We demonstrate the technology by monitoring dynamic changes following inhibition of P300, EZH2 or KDM5, by linking altered epigenetic landscapes to chromatin regulator mutations, and by mapping active and repressive marks in purified human hematopoietic stem cells. Hence, this technology enables quantitative studies of chromatin state dynamics across rare cell types, genotypes, environmental conditions and drug treatments. PMID:26687680
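The multiplexing rests on a simple demultiplexing step at analysis time: each read carries a sample barcode, so one pooled library can be split back into per-sample profiles. The sketch below shows that step with hypothetical barcodes and reads; it is not the authors' pipeline.

```python
# Demultiplexing barcoded ChIP reads into per-sample counts (illustrative).
from collections import defaultdict

barcodes = {"ACGT": "sample_1", "TGCA": "sample_2"}   # hypothetical 4-nt barcodes
reads = ["ACGTTTGGAACC", "TGCAGGTTCCAA", "ACGTAAGGTTCC"]

counts = defaultdict(int)
for read in reads:
    sample = barcodes.get(read[:4])   # barcode is assumed to lead each read
    if sample is not None:            # reads with unknown barcodes are discarded
        counts[sample] += 1
print(dict(counts))                   # {'sample_1': 2, 'sample_2': 1}
```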
Dron, Julien; Linke, Robert; Rosenberg, Erwin; Schreiner, Manfred
2004-08-20
A procedure for the determination of fatty acids (FA) and glycerol in oils has been developed. The method includes derivatization of the FAs into their methyl esters or transesterification of the triacylglycerols with trimethylsulfonium hydroxide (TMSH). The analysis is carried out by gas chromatography with parallel flame ionization and mass spectrometric detection. The parameters involved in the transesterification reaction were optimized; only the stoichiometric ratio of TMSH to total FA amount showed a significant influence on the reaction yield. Relative standard deviations for 10 replicates were below 3% for all FAs studied, and their linearity range was 0.5-50 mmol/L when using heptadecanoic acid as an internal standard. The final procedure was rapid and required little sample handling. It was then tested on fresh oil samples and gave satisfactory results, in agreement with previous work.
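Quantification against an internal standard like heptadecanoic acid follows the usual response-ratio formula; the sketch below shows the arithmetic with hypothetical peak areas and response factor.

```python
# Internal-standard quantification: C_a = (A_a / A_is) * C_is / RRF.
area_analyte = 15400.0  # FID peak area of a fatty-acid methyl ester (hypothetical)
area_is = 12100.0       # peak area of the heptadecanoic acid internal standard
conc_is = 5.0           # mmol/L of internal standard added
rrf = 0.98              # relative response factor from calibration (hypothetical)

conc_analyte = (area_analyte / area_is) * conc_is / rrf
print(f"analyte concentration ~ {conc_analyte:.2f} mmol/L")
```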
Space-Time Data fusion for Remote Sensing Applications
NASA Technical Reports Server (NTRS)
Braverman, Amy; Nguyen, H.; Cressie, N.
2011-01-01
NASA has been collecting massive amounts of remote sensing data about Earth's systems for more than a decade. Missions are selected to be complementary in quantities measured, retrieval techniques, and sampling characteristics, so these datasets are highly synergistic. To fully exploit this, a rigorous methodology for combining data with heterogeneous sampling characteristics is required. For scientific purposes, the methodology must also provide quantitative measures of uncertainty that propagate input-data uncertainty appropriately. We view this as a statistical inference problem. The true but not directly observed quantities form a vector-valued field continuous in space and time. Our goal is to infer those true values or some function of them, and to provide uncertainty quantification for those inferences. We use a spatiotemporal statistical model that relates the unobserved quantities of interest at point level to the spatially aggregated, observed data. We describe and illustrate our method using CO2 data from two NASA data sets.
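The scalar analogue of the fusion problem described above is instructive: two noisy measurements of the same quantity are combined by inverse-variance weighting, which also propagates the input uncertainties. The paper's model is a full spatio-temporal one; the sketch below is only this simplest instance, with hypothetical values.

```python
# Inverse-variance fusion of two measurements of the same quantity.
y1, var1 = 392.0, 4.0   # hypothetical CO2 estimate (ppm) and error variance
y2, var2 = 395.0, 1.0   # second instrument's estimate

w1, w2 = 1.0 / var1, 1.0 / var2
fused = (w1 * y1 + w2 * y2) / (w1 + w2)   # 394.4 ppm
fused_var = 1.0 / (w1 + w2)               # 0.8, smaller than either input variance
print(fused, fused_var)
```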
Simons, Kelsey V
2006-11-01
This research sought to identify organizational characteristics associated with the amount of professional qualifications among a nationally representative sample of nursing home social service directors. A self-administered survey was sent to directors in 675 facilities randomly sampled from a federal database, excluding facilities with fewer than 120 beds that are not required to staff a full-time social worker. The response rate was 45 percent (N = 299). Univariate results showed that most respondents possessed a social work degree, most lacked licensure, and few were clinically supervised. A multiple regression analysis found that nonprofit, independently owned facilities in rural areas staffed social service directors who were significantly more qualified than directors in for-profit, chain-affiliated facilities in urban and suburban areas. Facilities with fewer psychosocial deficiencies and higher occupancy rates employed social service directors with greater qualifications. The implications of these findings for social work education, practice, policy, and research are discussed.
NASA Astrophysics Data System (ADS)
Loveley, Matthew R.; Marcantonio, Franco; Lyle, Mitchell; Ibrahim, Rami; Hertzberg, Jennifer E.; Schmidt, Matthew W.
2017-12-01
Here, we examine how redistribution of differing grain sizes by sediment focusing processes in Panama Basin sediments affects the use of 230Th as a constant-flux proxy. We study representative sediments of Holocene and Last Glacial Maximum (LGM) time slices from four sediment cores from two different localities close to the ridges that bound the Panama Basin. Each locality contains paired sites that are seismically interpreted to have undergone extremes in sediment redistribution, i.e., focused versus winnowed sites. Both Holocene and LGM samples from sites where winnowing has occurred contain significant amounts (up to 50%) of the 230Th within the >63 μm grain size fraction, which makes up 40-70% of the bulk sediment analyzed. For sites where focusing has occurred, Holocene and LGM samples contain the greatest amounts of 230Th (up to 49%) in the finest grain-sized fraction (<4 μm), which makes up 26-40% of the bulk sediment analyzed. There are slight underestimations of 230Th-derived mass accumulation rates (MARs) and overestimations of 230Th-derived focusing factors at focused sites, while the opposite is true for winnowed sites. Corrections made using a model by Kretschmer et al. (2010) suggest a maximum change of about 30% in 230Th-derived MARs and focusing factors at focused sites, except for our most focused site which requires an approximate 70% correction in one sample. Our 230Th-corrected 232Th flux results suggest that the boundary between hemipelagically- and pelagically-derived sediments falls between 350 and 600 km from the continental margin.
Jann, Johann-Christoph; Nowak, Daniel; Nolte, Florian; Fey, Stephanie; Nowak, Verena; Obländer, Julia; Pressler, Jovita; Palme, Iris; Xanthopoulos, Christina; Fabarius, Alice; Platzbecker, Uwe; Giagounidis, Aristoteles; Götze, Katharina; Letsch, Anne; Haase, Detlef; Schlenk, Richard; Bug, Gesine; Lübbert, Michael; Ganser, Arnold; Germing, Ulrich; Haferlach, Claudia; Hofmann, Wolf-Karsten; Mossner, Maximilian
2017-01-01
Background: Cytogenetic aberrations such as deletion of chromosome 5q (del(5q)) represent key elements in routine clinical diagnostics of haematological malignancies. Currently established methods such as metaphase cytogenetics, FISH or array-based approaches have limitations due to their dependency on viable cells, high costs or semi-quantitative nature. Importantly, they cannot be used on low abundance DNA. We therefore aimed to establish a robust and quantitative technique that overcomes these shortcomings. Methods: For precise determination of del(5q) cell fractions, we developed an inexpensive multiplex-PCR assay requiring only nanograms of DNA that simultaneously measures allelic imbalances of 12 independent short tandem repeat markers. Results: Application of this method to n=1142 samples from n=260 individuals revealed strong intermarker concordance (R²=0.77–0.97) and reproducibility (mean SD: 1.7%). Notably, the assay showed accurate quantification via standard curve assessment (R²>0.99) and high concordance with paired FISH measurements (R²=0.92) even with subnanogram amounts of DNA. Moreover, cytogenetic response was reliably confirmed in del(5q) patients with myelodysplastic syndromes treated with lenalidomide. While the assay demonstrated good diagnostic accuracy in receiver operating characteristic analysis (area under the curve: 0.97), we further observed robust correlation between bone marrow and peripheral blood samples (R²=0.79), suggesting its potential suitability for less-invasive clonal monitoring. Conclusions: In conclusion, we present an adaptable tool for quantification of chromosomal aberrations, particularly in problematic samples, which should be easily applicable to further tumour entities. PMID:28600436
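One simple way to turn STR allelic imbalance into a clone-fraction estimate is sketched below: in a mixture of f deleted and 1−f normal cells, the deleted allele contributes 1−f copies against 1 copy of the retained allele, so f = 1 − minor/major. This illustrates the principle only and is not necessarily the authors' exact estimator.

```python
# del(5q) clone fraction from STR peak ratios: f = 1 - minor/major per marker.
import statistics

peak_ratios = [0.62, 0.55, 0.60, 0.58]  # hypothetical minor/major peak areas

fractions = [1.0 - r for r in peak_ratios]
print(f"estimated clone fraction ~ {statistics.median(fractions):.2f}")
```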
Gold nanoparticle-enabled blood test for early stage cancer detection and risk assessment.
Zheng, Tianyu; Pierre-Pierre, Nickisha; Yan, Xin; Huo, Qun; Almodovar, Alvin J O; Valerio, Felipe; Rivera-Ramirez, Inoel; Griffith, Elizabeth; Decker, David D; Chen, Sixue; Zhu, Ning
2015-04-01
When citrate-ligand-capped gold nanoparticles are mixed with blood sera, a protein corona is formed on the nanoparticle surface due to the adsorption of various blood proteins onto the nanoparticles. Using a two-step gold nanoparticle-enabled dynamic light scattering assay, we discovered that the amount of human immunoglobulin G (IgG) in the gold nanoparticle protein corona is increased in prostate cancer patients compared to noncancer controls. Two pilot studies conducted on blood serum samples collected at Florida Hospital and obtained from the Prostate Cancer Biorepository Network (PCBN) revealed that the test has a 90-95% specificity and 50% sensitivity in detecting early stage prostate cancer, representing a significant improvement over the current PSA test. The increased amount of human IgG found in the protein corona is believed to be associated with the autoantibodies produced in cancer patients as part of the immunodefense against the tumor. Proteomic analysis of the nanoparticle protein corona revealed molecular profile differences between cancer and noncancer serum samples. Autoantibodies and natural antibodies produced in cancer patients in response to tumorigenesis have been found and detected in the blood of many cancer types. The test may be applicable for early detection and risk assessment of a broad spectrum of cancers. This new blood test is simple, low cost, requires only a few drops of blood, and the results are obtained within minutes. The test is well suited for screening purposes. More extensive studies are being conducted to further evaluate and validate the clinical potential of the new test.
Gaupels, Frank; Knauer, Torsten; van Bel, Aart J E
2008-01-01
This study investigated the advantages and drawbacks of two sieve-tube sap sampling methods for comparing phloem proteins in powdery mildew-infested vs. non-infested Hordeum vulgare plants. In one approach, sieve-tube sap was collected by stylectomy: aphid stylets were cut and immediately covered with silicone oil to prevent any contamination or modification of exudates. In this way, a maximum of 1 μL of pure phloem sap could be obtained per hour. Interestingly, after pathogen infection, exudation from microcauterized stylets was reduced to less than 40% of that of control plants, suggesting that powdery mildew induced sieve-tube occlusion mechanisms. In contrast to the laborious stylectomy, facilitated exudation using EDTA to prevent calcium-mediated callose formation is quick and easy and yields large volumes. After two-dimensional (2D) electrophoresis, a digital overlay of the protein sets extracted from EDTA solutions and stylet exudates showed that some major spots were the same with both sampling techniques. However, EDTA exudates also contained large amounts of contaminating proteins of unknown origin. A combined approach may be most favourable for studies in which the protein composition of phloem sap is compared between control and pathogen-infected plants. Facilitated exudation may be applied for subtractive identification of differentially expressed proteins by 2D/mass spectrometry, which requires large amounts of protein. A reference gel loaded with pure phloem sap from stylectomy may be useful for confirming the phloem origin of candidate spots by digital overlay. The method provides a novel opportunity to study differential expression of phloem proteins in monocotyledonous plant species.
Rohrman, Brittany; Richards-Kortum, Rebecca
2015-02-03
Recombinase polymerase amplification (RPA) may be used to detect a variety of pathogens, often after minimal sample preparation. However, previous work has shown that whole blood inhibits RPA. In this paper, we show that the concentrations of background DNA found in whole blood prevent the amplification of target DNA by RPA. First, using an HIV-1 RPA assay with known concentrations of nonspecific background DNA, we show that RPA tolerates more background DNA when higher HIV-1 target concentrations are present. Then, using three additional assays, we demonstrate that the maximum amount of background DNA that may be tolerated in RPA reactions depends on the DNA sequences used in the assay. We also show that changing the RPA reaction conditions, such as incubation time and primer concentration, has little effect on the ability of RPA to function when high concentrations of background DNA are present. Finally, we develop and characterize a lateral flow-based method for enriching the target DNA concentration relative to the background DNA concentration. This sample processing method enables RPA of 10⁴ copies of HIV-1 DNA in a background of 0-14 μg of background DNA. Without lateral flow sample enrichment, the maximum amount of background DNA tolerated is 2 μg when 10⁶ copies of HIV-1 DNA are present. This method requires no heating or other external equipment, may be integrated with upstream DNA extraction and purification processes, is compatible with the components of lysed blood, and has the potential to detect HIV-1 DNA in infant whole blood with high proviral loads.
Data compression strategies for ptychographic diffraction imaging
NASA Astrophysics Data System (ADS)
Loetgering, Lars; Rose, Max; Treffer, David; Vartanyants, Ivan A.; Rosenhahn, Axel; Wilhein, Thomas
2017-12-01
Ptychography is a computational imaging method for solving inverse scattering problems. To date, the high amount of redundancy present in ptychographic data sets requires computer memory that is orders of magnitude larger than the retrieved information. Here, we propose and compare data compression strategies that significantly reduce the amount of data required for wavefield inversion. Information metrics are used to measure the amount of data redundancy present in ptychographic data. Experimental results demonstrate the technique to be memory efficient and stable in the presence of systematic errors such as partial coherence and noise.
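The paper's specific information metrics are not reproduced here, but Shannon entropy per diffraction pattern is one simple redundancy proxy: frames whose intensity histograms carry far fewer bits than their storage depth are prime candidates for compression. A minimal sketch under that assumption:

```python
import numpy as np

def pattern_entropy(frame, n_bins=256):
    """Shannon entropy (bits) of one diffraction pattern's intensity
    histogram; a value far below the stored bit depth flags redundancy
    that a compression scheme can exploit."""
    hist, _ = np.histogram(frame, bins=n_bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

# hypothetical stack: (n_scan_positions, ny, nx) diffraction patterns
data = np.random.poisson(2.0, size=(100, 128, 128)).astype(float)
entropies = np.array([pattern_entropy(f) for f in data])
print(f"mean {entropies.mean():.2f} bits vs {np.log2(256):.0f}-bit storage")
```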
Sorption Isotherm of Southern Yellow Pine-High Density Polyethylene Composites.
Liu, Feihong; Han, Guangping; Cheng, Wanli; Wu, Qinglin
2015-01-20
Temperature and relative humidity (RH) are the two major external factors that affect the equilibrium moisture content (EMC) of wood-plastic composites (WPCs). In this study, the effect of different durability treatments on the sorption and desorption isotherms of southern yellow pine (SYP)-high density polyethylene (HDPE) composites was investigated. All samples were equilibrated at 20 °C and various RHs, including 16%, 33%, 45%, 66%, 75%, 85%, 93%, and 100%. EMCs obtained from desorption and absorption for different WPC samples were compared with Nelson's sorption isotherm model predictions under the same temperature and humidity conditions. The results indicated that the amount of moisture absorbed increased with increasing RH at 20 °C. All samples showed sorption hysteresis at a fixed RH. Small differences were observed between the EMC data of WPC samples containing different amounts of ultraviolet (UV) stabilizers, and similar results were observed among the samples containing different amounts of zinc borate (ZB). The experimental EMC data at various RHs fit Nelson's sorption isotherm model well, and the model can be used to predict the EMCs of WPCs under different RH conditions. PMID:28787943
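Fitting measured EMC-RH pairs to the model reduces to nonlinear least squares. The sketch below, using SciPy, writes the isotherm in a Nelson-type form in which EMC varies linearly with the logarithm of the Gibbs free energy of sorption, ΔG = -RT ln(RH); both the exact parameterization and the data points are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.optimize import curve_fit

R, T = 8.314, 293.15  # J/(mol K); 20 degC

def nelson_type(rh, m_v, dg0):
    """Nelson-type isotherm: EMC linear in ln(dG), with dG = -R*T*ln(rh).
    m_v and dg0 are fit parameters; check the paper for its exact form."""
    dg = -R * T * np.log(rh)
    return m_v * (1.0 - np.log(dg) / np.log(dg0))

rh = np.array([0.16, 0.33, 0.45, 0.66, 0.75, 0.85, 0.93])
emc = np.array([0.3, 0.5, 0.7, 1.1, 1.4, 1.9, 2.6])  # hypothetical EMC, %

(m_v, dg0), _ = curve_fit(nelson_type, rh, emc, p0=[4.0, 1e4])
print(f"m_v = {m_v:.2f} %, dg0 = {dg0:.0f} J/mol")
```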
Method for the enzymatic production of hydrogen
Woodward, Jonathan; Mattingly, Susan M.
1999-01-01
The present invention is an enzymatic method for producing hydrogen comprising the steps of: a) forming a reaction mixture within a reaction vessel comprising a substrate capable of undergoing oxidation within a catabolic reaction, such as glucose, galactose, xylose, mannose, sucrose, lactose, cellulose, xylan or starch. The reaction mixture further comprises an amount of glucose dehydrogenase sufficient to catalyze the oxidation of the substrate, an amount of hydrogenase sufficient to catalyze an electron-requiring reaction wherein a stoichiometric yield of hydrogen is produced, an amount of pH buffer sufficient to provide an environment that allows the hydrogenase and the glucose dehydrogenase to retain sufficient activity for the production of hydrogen to occur, and an amount of nicotinamide adenine dinucleotide phosphate sufficient to transfer electrons from the catabolic reaction to the electron-requiring reaction; b) heating the reaction mixture at a temperature sufficient for the glucose dehydrogenase and the hydrogenase to retain sufficient activity and for the production of hydrogen to occur, and continuing to heat until hydrogen is no longer produced by the reaction mixture, wherein the catabolic and electron-requiring reactions have rates of reaction dependent upon the temperature; and c) detecting the hydrogen produced from the reaction mixture.
Effect of different sampling schemes on the spatial placement of conservation reserves in Utah, USA
Bassett, S.D.; Edwards, T.C.
2003-01-01
We evaluated the effect that three different sampling schemes used to organize spatially explicit biological information had on the spatial placement of conservation reserves in Utah, USA. The three sampling schemes consisted of a hexagon representation developed by the EPA/EMAP program (statistical basis), watershed boundaries (ecological), and the current county boundaries of Utah (socio-political). Four decision criteria were used to estimate effects: amount of area, length of edge, lowest number of contiguous reserves, and greatest number of terrestrial vertebrate species covered. A fifth evaluation criterion was the effect each sampling scheme had on the ability of the modeled conservation reserves to cover the six major ecoregions found in Utah. Of the three sampling schemes, county boundaries covered the greatest number of species, but also created the longest length of edge and the greatest number of reserves. Watersheds maximized species coverage using the least amount of area. Hexagons and watersheds provided the least amount of edge and the fewest reserves. Although there were differences in area, edge, and number of reserves among the sampling schemes, all three schemes covered all the major ecoregions in Utah and their inclusive biodiversity. © 2003 Elsevier Science Ltd. All rights reserved.
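The paper does not spell out its reserve-placement algorithm, but complementarity-based greedy selection is a standard way to trade species coverage against area; a sketch under that assumption, with toy data:

```python
def greedy_reserves(units, areas):
    """Greedy complementarity selection: repeatedly pick the sampling
    unit (hexagon, watershed, or county) that adds the most not-yet-
    covered species per unit area. Illustrative, not the paper's code.

    units: {unit_id: set of species}, areas: {unit_id: area in km^2}
    """
    covered, chosen = set(), []
    remaining = set(units)
    while remaining:
        best = max(remaining, key=lambda u: len(units[u] - covered) / areas[u])
        if not units[best] - covered:
            break  # no remaining unit adds new species
        chosen.append(best)
        covered |= units[best]
        remaining.remove(best)
    return chosen, covered

units = {"hex1": {"a", "b"}, "hex2": {"b", "c", "d"}, "hex3": {"d"}}
areas = {"hex1": 650.0, "hex2": 650.0, "hex3": 650.0}
print(greedy_reserves(units, areas))  # -> (['hex2', 'hex1'], full coverage)
```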
PRESENCE OF CITRININ IN GRAINS AND ITS POSSIBLE HEALTH EFFECTS.
Čulig, Borna; Bevardi, Martina; Bošnir, Jasna; Serdar, Sonja; Lasić, Dario; Racz, Aleksandar; Galić, Antonija; Kuharić, Željka
2017-01-01
Citrinin is a mycotoxin produced by several species of the genera Aspergillus, Penicillium and Monascus. It is generally formed after harvest and occurs mainly in stored grains, although it also occurs in other plant products. Co-occurrence with other mycotoxins is often observed, especially with ochratoxin A, which is usually associated with endemic nephropathy. At the European Union level, systematic monitoring of citrinin in grains began with the aim of determining its highest permissible amount in food; in Croatia, systematic monitoring of this mycotoxin is yet to begin. The main goal of this study was to determine the presence of citrinin in grains sampled in the area of Međimurje, Osijek-Baranja, Vukovar-Srijem and Brod-Posavina County. For the identification and quantification of citrinin, high-performance liquid chromatography (HPLC) with fluorescence detection was used (calibration curve k > 0.999; intra-assay CV = 2.1%; inter-assay CV = 4.3%; LOQ < 1 μg/kg). From the area of Međimurje County, 10 samples of corn and 10 samples of wheat were analyzed; none of the samples contained citrinin (<1 μg/kg). From the areas of Osijek-Baranja and Vukovar-Srijem County, 15 samples from each county were analyzed. The mean value for the samples of Osijek-Baranja County was 19.63 μg/kg (median = 15.8 μg/kg), while for Vukovar-Srijem County the mean value of citrinin was 14.6 μg/kg (median = 1.23 μg/kg). Of the 5 samples analyzed from Brod-Posavina County, one contained citrinin in the amount of 23.8 μg/kg, while the amounts registered in the other samples were <1 μg/kg. The results show that grains from several counties contain measurable amounts of citrinin, possibly indicating a significant intake of citrinin in humans. Grains and grain-based products are the basis of the everyday diet of all age groups, especially small children, in whom a higher intake of citrinin can occur. Consequently, we emphasize the need for systematic analysis of a larger number of samples, of both large and small grains, especially in the area of Brod-Posavina County, in order to obtain a more realistic picture of citrinin contamination of grains and to assess the health risk to humans.
NASA Technical Reports Server (NTRS)
Panzarella, Charles
2004-01-01
As humans prepare for the exploration of our solar system, there is a growing need for miniaturized medical and environmental diagnostic devices for use on spacecraft, especially during long-duration space missions where size and power requirements are critical. In recent years, the biochip (or Lab-on-a-Chip) has emerged as a technology that might be able to satisfy this need. In generic terms, a biochip is a miniaturized microfluidic device analogous to the electronic microchip that ushered in the digital age. It consists of tiny microfluidic channels, pumps and valves that transport small amounts of sample fluids to biosensors that can perform a variety of tests on those fluids in near real time. It has the obvious advantages of being small and lightweight, requiring smaller amounts of sample fluids and reagents, and being more sensitive and efficient than the larger devices currently in use. Desired space-based applications include smaller, more robust devices for analyzing blood, saliva and urine and for testing water and food supplies for the presence of harmful contaminants and microorganisms. Our group has undertaken the goal of adapting, as well as improving upon, current biochip technology for use in long-duration microgravity environments.
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Although fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive when optimization must be performed on top of such estimates, because SPICE simulation of the circuit accounts for the largest share of the yield-calculation time. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model over both the design variables and the process variables: SPICE simulations provide a set of sample points, on which the mixture surrogate model is trained with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speed-ups to the calculation of the failure rate. Building on the model, we developed a further accelerated algorithm to enhance the speed of the yield calculation. The approach is suitable for high-dimensional process variables and multi-performance applications.
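As a simplified illustration of the surrogate idea (a single lasso model on polynomial features rather than the paper's full mixture model), the sketch below replaces expensive SPICE runs with a stand-in function; once trained, the surrogate makes large Monte Carlo failure-rate estimates essentially free:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

def spice_sim(x):
    # Stand-in for a SPICE run: margin of an SRAM metric as a function of
    # 6 process variables (hypothetical; a real flow calls the simulator).
    return 0.15 - 0.05 * x[:, 0] + 0.02 * x[:, 1] ** 2 - 0.01 * x[:, 2] * x[:, 3]

# a few hundred "expensive" simulations to train the surrogate
X_train = rng.standard_normal((300, 6))
y_train = spice_sim(X_train)
poly = PolynomialFeatures(degree=2, include_bias=False)
surrogate = Lasso(alpha=1e-3, max_iter=10_000).fit(
    poly.fit_transform(X_train), y_train)

# cheap Monte Carlo on the surrogate: failure when the margin goes negative
X_mc = rng.standard_normal((200_000, 6))
fail_rate = (surrogate.predict(poly.transform(X_mc)) < 0.0).mean()
print(f"estimated failure rate: {fail_rate:.2e}")
```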
Recent Advances in Mycotoxin Determination for Food Monitoring via Microchip
Man, Yan; Liang, Gang; Li, An; Pan, Ligang
2017-01-01
Mycotoxins are one of the main factors impacting food safety, and mycotoxin contamination threatens the health of humans and animals. Conventional methods for the detection of mycotoxins are gas chromatography (GC) or liquid chromatography (LC) coupled with mass spectrometry (MS), or enzyme-linked immunosorbent assay (ELISA). However, all these methods are time-consuming, require large-scale instruments and skilled technicians, and consume large amounts of hazardous reagents and solvents. A microchip, by contrast, requires less sample and a shorter analysis time, and can realize integrated, miniaturized, high-throughput detection of samples. Hence, microchips can make up for the deficiencies of the conventional detection methods. This review focuses on the application of microchips to detect mycotoxins in foods. The toxicities of mycotoxins and the materials used in microchips are first summarized in turn. Then the application of microchips integrating various kinds of detection methods (optical, electrochemical, photo-electrochemical, and label-free detection) to detect mycotoxins is reviewed in detail. Finally, challenges and future research directions in the development of microchips for mycotoxin detection are outlined. PMID:29036884
Characterization of the Gut Microbiome Using 16S or Shotgun Metagenomics
Jovel, Juan; Patterson, Jordan; Wang, Weiwei; Hotte, Naomi; O'Keefe, Sandra; Mitchel, Troy; Perry, Troy; Kao, Dina; Mason, Andrew L.; Madsen, Karen L.; Wong, Gane K.-S.
2016-01-01
The advent of next generation sequencing (NGS) has enabled investigations of the gut microbiome with unprecedented resolution and throughput. This has stimulated the development of sophisticated bioinformatics tools to analyze the massive amounts of data generated. Researchers therefore need a clear understanding of the key concepts required for the design, execution and interpretation of NGS experiments on microbiomes. We conducted a literature review and used our own data to determine which approaches work best. The two main approaches for analyzing the microbiome, 16S ribosomal RNA (rRNA) gene amplicons and shotgun metagenomics, are illustrated with analyses of libraries designed to highlight their strengths and weaknesses. Several methods for taxonomic classification of bacterial sequences are discussed. We present simulations to assess the number of sequences that are required to perform reliable appraisals of bacterial community structure. To the extent that fluctuations in the diversity of gut bacterial populations correlate with health and disease, we emphasize various techniques for the analysis of bacterial communities within samples (α-diversity) and between samples (β-diversity). Finally, we demonstrate techniques to infer the metabolic capabilities of a bacterial community from these 16S and shotgun data. PMID:27148170
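For readers new to these metrics, α- and β-diversity reduce to short computations on a taxon count table; a minimal sketch using Shannon entropy within samples and Bray-Curtis dissimilarity between them (toy counts, not the study's data):

```python
import numpy as np

def shannon_alpha(counts):
    """Shannon alpha-diversity (natural log) of one sample's taxon counts."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two samples (beta-diversity)."""
    return np.abs(a - b).sum() / (a + b).sum()

# rows = samples, columns = taxa (hypothetical 16S OTU/ASV counts)
table = np.array([[120, 30,   0, 5],
                  [ 80, 60,  10, 2],
                  [  5,  5, 200, 1]])
print([round(float(shannon_alpha(s)), 2) for s in table])   # per-sample alpha
print(round(float(bray_curtis(table[0], table[1])), 2))     # pairwise beta
```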
Klopp, R N; Oconitrillo, M J; Sackett, A; Hill, T M; Schlotterbeck, R L; Lascano, G J
2018-07-01
A limited amount of research is available on the rumen microbiota of calves, yet there has been a recent spike of interest in determining the diversity and development of calf rumen microbial populations. To study the microbial populations of a calf's rumen, a sample of the rumen fluid is needed. One way to take a rumen fluid sample from a calf is by fistulating the animal. This method requires surgery and can be very stressful for a young animal that is trying to adapt to a new environment and has a depressed immune system. An alternative to fistulation surgery is a rumen pump, in which a tube is inserted into the rumen through the calf's esophagus; once the tube is inside the rumen, fluid can be pumped out and collected in a few minutes. This method is quick, inexpensive, and does not cause significant stress to the animal. This technical note presents the materials and methodology used to convert a drenching system into a rumen pump and its use in 2 experiments with dairy bull calves. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Molecular pathology of intraductal papillary mucinous neoplasms of the pancreas
Paini, Marina; Crippa, Stefano; Partelli, Stefano; Scopelliti, Filippo; Tamburrino, Domenico; Baldoni, Andrea; Falconi, Massimo
2014-01-01
Since the first description of intraductal papillary mucinous neoplasms (IPMNs) of the pancreas in the eighties, their identification has increased dramatically in recent decades, hand in hand with improvements in diagnostic imaging and sampling techniques for the study of pancreatic diseases. However, the heterogeneity of IPMNs and their malignant potential make the management of these lesions difficult. The objective of this review is to identify the molecular characteristics of IPMNs in order to recognize potential markers for discriminating more aggressive IPMNs requiring surgical resection from benign IPMNs that can be observed. We briefly summarize recent research findings on the genetics and epigenetics of intraductal papillary mucinous neoplasms, identifying genes, molecular mechanisms and cellular signaling pathways correlated with the pathogenesis of IPMNs and their progression to malignancy. Knowledge of the molecular biology of IPMNs has developed impressively over the last few years. A large number of genes functioning as oncogenes or tumor suppressor genes have been identified in pancreatic juice, blood, and pancreatic resection samples, but further research is required to put this information to clinical use, in order to better define the natural history of these diseases and to improve their management. PMID:25110429
Joint Sparse Recovery With Semisupervised MUSIC
NASA Astrophysics Data System (ADS)
Wen, Zaidao; Hou, Biao; Jiao, Licheng
2017-05-01
Discrete multiple signal classification (MUSIC), with its low computational cost and mild condition requirement, has become a significant noniterative algorithm for joint sparse recovery (JSR). However, it fails in the rank-defective problem caused by coherent or a limited number of multiple measurement vectors (MMVs). In this letter, we provide a novel perspective on this problem by interpreting JSR as a binary classification problem with respect to atoms. MUSIC essentially constructs a supervised classifier based on the labeled MMVs, so its performance depends heavily on the quality and quantity of these training samples. From this viewpoint, we develop a semisupervised MUSIC (SS-MUSIC) in the spirit of machine learning, which holds that the insufficient supervised information in the training samples can be compensated from unlabeled atoms. Instead of constructing a classifier in a fully supervised manner, we iteratively refine a semisupervised classifier by exploiting the labeled MMVs and some reliable unlabeled atoms simultaneously. In this way, the required conditions and iterations can be greatly relaxed and reduced. Numerical experimental results demonstrate that SS-MUSIC achieves much better recovery performance than other MUSIC-extended algorithms, as well as some typical greedy algorithms for JSR, in terms of iterations and recovery probability.
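For orientation, classical MMV MUSIC (the baseline that SS-MUSIC extends; the semisupervised refinement itself is not reproduced here) identifies the support by projecting each dictionary atom onto the signal subspace of the measurements:

```python
import numpy as np

def music_support(A, Y, k):
    """Classical MUSIC for joint sparse recovery: rank atoms by the
    fraction of their energy lying in the signal subspace of Y and keep
    the top k. Breaks down when rank(Y) < k, the rank-defective case
    that motivates SS-MUSIC."""
    U = np.linalg.svd(Y, full_matrices=False)[0][:, :k]  # signal subspace
    corr = np.linalg.norm(U.T @ A, axis=0) / np.linalg.norm(A, axis=0)
    return np.sort(np.argsort(corr)[-k:])

# toy MMV problem: 4 active rows shared by 8 measurement vectors
rng = np.random.default_rng(1)
m, n, k, L = 24, 64, 4, 8
A = rng.standard_normal((m, n))
support = np.sort(rng.choice(n, size=k, replace=False))
X = np.zeros((n, L)); X[support] = rng.standard_normal((k, L))
print(support, music_support(A, A @ X, k))  # the two should match
```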
14 CFR 1300.13 - Guarantee amount.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Aeronautics and Space AIR TRANSPORTATION SYSTEM STABILIZATION OFFICE OF MANAGEMENT AND BUDGET AVIATION DISASTER RELIEF-AIR CARRIER GUARANTEE LOAN PROGRAM Minimum Requirements and Application Procedures § 1300... loan amount guaranteed to a single air carrier may not exceed that amount that, in the Board's sole...
Photochemical Mineralization of Terrigenous DOC to Dissolved Inorganic Carbon in Ocean
NASA Astrophysics Data System (ADS)
Aarnos, Hanna; Gélinas, Yves; Kasurinen, Ville; Gu, Yufei; Puupponen, Veli-Mikko; Vähätalo, Anssi V.
2018-02-01
When terrigenous dissolved organic carbon (tDOC) rich in chromophoric dissolved organic matter (tCDOM) enters the ocean, solar radiation mineralizes it partially into dissolved inorganic carbon (DIC). This study addresses the amount and rates of DIC photoproduction from tDOC and the area of ocean required to photomineralize tDOC. We collected water samples from 10 major rivers, mixed them with artificial seawater, and irradiated them with simulated solar radiation to measure DIC photoproduction and the photobleaching of tCDOM. The linear relationship between DIC photoproduction and tCDOM photobleaching was used to estimate the amount of photoproduced DIC from the tCDOM fluxes of the study rivers. Solar radiation was estimated to mineralize 12.5 ± 3.7 Tg C yr⁻¹ (10 rivers)⁻¹, or 18 ± 8% of the tDOC flux. The irradiation experiments also approximated typical apparent spectral quantum yields for DIC photoproduction (ϕλ) over the entire lifetime of the tCDOM. Based on the ϕλ values and the local solar irradiances in river plumes, the annual areal DIC photoproduction rates from tDOC were calculated to range from 52 ± 4 (Lena River) to 157 ± 2 mmol C m⁻² yr⁻¹ (Mississippi River). When the amount of photoproduced DIC was divided by the areal rate, 9.6 ± 2.5 × 10⁶ km² of ocean was required for the photomineralization of tDOC from the study rivers. Extrapolation to the global tDOC flux yields 45 (31-58) Tg of photoproduced DIC per year in the river plumes, which cover 34 (25-43) × 10⁶ km² of the ocean.
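The area estimate follows from dividing the photoproduced DIC by an areal rate; a quick arithmetic check using the midpoint of the reported rate range (~108 mmol C m⁻² yr⁻¹, an illustrative choice rather than the paper's flux-weighted value) reproduces the quoted ~9.6 × 10⁶ km²:

```python
# area = (DIC photoproduction) / (areal DIC photoproduction rate)
dic_g_per_yr = 12.5e12            # 12.5 Tg C yr^-1 for the 10 rivers
rate_mmol = 108.0                 # mmol C m^-2 yr^-1 (midpoint of 52-157)
area_m2 = dic_g_per_yr / 12.0 * 1e3 / rate_mmol   # g -> mol -> mmol -> m^2
print(f"required ocean area ~ {area_m2 / 1e12:.1f} x 10^6 km^2")  # ~9.6
```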
Automatic differential analysis of NMR experiments in complex samples.
Margueritte, Laure; Markov, Petar; Chiron, Lionel; Starck, Jean-Philippe; Vonthron-Sénécheau, Catherine; Bourjot, Mélanie; Delsuc, Marc-André
2018-06-01
Liquid-state nuclear magnetic resonance (NMR) is a powerful tool for the analysis of complex mixtures of unknown molecules. This capacity has been used in many analytical approaches, including metabolomics, identification of active compounds in natural extracts, and characterization of species, and such studies require the acquisition of many diverse NMR measurements on series of samples. Although acquisition can easily be performed automatically, the number of NMR experiments involved in these studies increases very rapidly, and this data avalanche requires resorting to automatic processing and analysis. We present here a program that allows the autonomous, unsupervised processing of a large corpus of 1D, 2D, and diffusion-ordered spectroscopy experiments from a series of samples acquired in different conditions. The program provides all the signal-processing steps as well as peak-picking and bucketing of 1D and 2D spectra; the program and its components are fully available. In an experiment mimicking the search for a bioactive species in a natural extract, we use it for the automatic detection of small amounts of artemisinin added to a series of plant extracts and for the generation of the spectral fingerprint of this molecule. This program, called Plasmodesma, is a novel tool that should be useful for deciphering complex mixtures, particularly in the discovery of biologically active natural products from plant extracts, but also in drug discovery or metabolomics studies. Copyright © 2017 John Wiley & Sons, Ltd.
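Bucketing, one of the processing steps mentioned, simply integrates each spectrum over fixed-width chemical-shift windows so that spectra with slightly shifted peaks become comparable fixed-length vectors; a minimal sketch (the 0.04 ppm width is a common default, not necessarily Plasmodesma's):

```python
import numpy as np

def bucket(ppm, intensity, width=0.04):
    """Integrate a 1D spectrum into fixed-width ppm buckets."""
    edges = np.arange(ppm.min(), ppm.max() + width, width)
    idx = np.digitize(ppm, edges)
    return np.array([intensity[idx == i].sum() for i in range(1, len(edges))])

ppm = np.linspace(0.0, 10.0, 2 ** 15)                  # hypothetical axis
spec = np.exp(-(((ppm - 5.3) / 0.01) ** 2)) + 0.02 * np.random.rand(ppm.size)
print(bucket(ppm, spec).shape)  # one integrated intensity per bucket
```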
Rugged, Portable, Real-Time Optical Gaseous Analyzer for Hydrogen Fluoride
NASA Technical Reports Server (NTRS)
Pilgrim, Jeffrey; Gonzales, Paula
2012-01-01
Hydrogen fluoride (HF) is a primary evolved combustion product of fluorinated and perfluorinated hydrocarbons. HF is produced during combustion by the presence of impurities and hydrogen-containing polymers, including polyimides. This effect is especially dangerous in closed occupied volumes like spacecraft and submarines, where combinations of perfluorinated hydrocarbons and polyimides are used for insulating wiring. HF is both highly toxic and short-lived in closed environments due to its reactivity. The high reactivity also makes HF sampling problematic. An infrared optical sensor can detect promptly evolving HF with minimal sampling requirements, while providing both high sensitivity and high specificity. A rugged optical path-length enhancement architecture enables both high HF sensitivity and rapid environmental sampling with minimal gaseous contact with the low-reactivity sensor surfaces. The inert optical sample cell, combined with infrared semiconductor lasers, is joined with an analog and digital electronic control architecture that allows for ruggedness and compactness. The combination provides both portability and battery operation on a simple camcorder battery for up to eight hours. Optical detection of gaseous HF is confounded by the need for rapid sampling with minimal contact between the sensor and the environmental sample. A sensor must simultaneously provide sub-parts-per-million detection limits and the high specificity and selectivity expected of optical absorption techniques. It should also be rugged and compact for compatibility with operation onboard spacecraft and submarines. A new optical cell has been developed for which environmental sampling is accomplished by simply traversing the few-mm-thick cell walls into an open volume where the measurement is made. A small, low-power fan or vacuum pump may be used to push or pull the gaseous sample into the sample volume for a response time of a few seconds. The optical cell simultaneously provides an enhanced optical interaction path length between the environmental sample and the infrared laser. Further, the optical cell itself is comprised of inert materials that render it immune to attack by HF. In some cases, the sensor may be configured so that the optoelectronic devices themselves are protected and isolated from HF by the optical cell. The optical sample cell is combined with custom-developed analog and digital control electronics that provide rugged, compact operation on a platform that can run on a camcorder battery. The sensor is inert with respect to acidic gases like HF, while providing the required sensitivity, selectivity, and response time. Certain types of combustion events evolve copious amounts of HF, very little of other gases typically associated with combustion (e.g., carbon monoxide), and very low levels of aerosols and particulates (which confound traditional smoke detectors). The new sensor platform could warn occupants early enough to take the necessary countermeasures.
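A back-of-envelope Beer-Lambert estimate shows why the path-length enhancement matters; all numbers below are assumptions for illustration, not the sensor's specifications:

```python
import math

sigma = 1e-18   # assumed HF line cross-section, cm^2 per molecule
n_air = 2.5e19  # molecules per cm^3 at ~1 atm, 20 degC
c_hf = 0.5e-6   # 0.5 ppm HF mixing ratio
for path_cm in (1, 100):  # bare cell vs enhanced effective path
    absorbed = 1.0 - math.exp(-sigma * n_air * c_hf * path_cm)
    print(f"path {path_cm:>3} cm -> fractional absorption {absorbed:.1e}")
```

Under these assumptions, a hundredfold longer effective path raises the absorbed fraction from roughly 1e-5 to 1e-3, lifting a sub-ppm signal above typical detector noise.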
2012-01-01
Background: Multiplex cytometric bead assays (CBA) have a number of advantages over ELISA for antibody testing, but little information is available on the standardization and validation of antibody CBA against multiple Plasmodium falciparum antigens. The present study set out to determine optimal parameters for multiplex testing of antibodies to P. falciparum antigens and to compare the results of multiplex CBA to ELISA.
Methods: Antibodies to ten recombinant P. falciparum antigens were measured by CBA and ELISA in samples from 30 individuals from a malaria endemic area of Kenya and compared to known positive and negative control plasma samples. Optimal antigen amounts, monoplex vs. multiplex testing, plasma dilution, optimal buffer, and the number of beads required were assessed for CBA testing, and results from CBA and ELISA testing were compared.
Results: Optimal amounts for CBA antibody testing differed according to antigen. Results for monoplex CBA testing correlated strongly with multiplex testing for all antigens (r = 0.88-0.99, P values from <0.0001 to 0.004), and antibodies to variants of the same antigen were accurately distinguished within a multiplex reaction. Plasma dilutions of 1:100 or 1:200 were optimal for all antigens for CBA testing. Plasma diluted in a buffer containing 0.05% sodium azide, 0.5% polyvinyl alcohol, and 0.8% polyvinylpyrrolidone had the lowest background activity. CBA median fluorescence intensity (MFI) values with 1,000 antigen-conjugated beads/well did not differ significantly from MFI with 5,000 beads/well. CBA and ELISA results correlated well for all antigens except apical membrane antigen-1 (AMA-1). CBA testing produced a greater range of values in samples from malaria endemic areas and less background reactivity for blank samples than ELISA.
Conclusion: With optimization, CBA may be the preferred method of testing for antibodies to P. falciparum antigens, as CBA can test for antibodies to multiple recombinant antigens from a single plasma sample and produces a greater range of values in positive samples and lower background readings for blank samples than ELISA. PMID:23259607
Method for measuring recovery of catalytic elements from fuel cells
Shore, Lawrence [Edison, NJ]; Matlin, Ramail [Berkeley, NJ]
2011-03-08
A method is provided for measuring the concentration of a catalytic element in a fuel cell powder. The method includes depositing on a porous substrate at least one layer of a powder mixture comprising the fuel cell powder and an internal standard material, ablating a sample of the powder mixture using a laser, and vaporizing the sample using an inductively coupled plasma. A normalized concentration of the catalytic element in the sample is determined by quantifying the intensity of a first signal correlated to the amount of catalytic element in the sample, quantifying the intensity of a second signal correlated to the amount of internal standard material in the sample, and using the ratio of the first signal intensity to the second signal intensity to cancel out the effects of sample size.
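The normalization step amounts to ratioing the two signal intensities and scaling by a calibration factor; a minimal sketch with hypothetical names and values:

```python
def normalized_concentration(i_analyte, i_standard, k_cal):
    """Catalytic-element concentration by internal standardization:
    the intensity ratio cancels shot-to-shot variation in ablated
    mass; k_cal comes from standards of known composition (the
    symbol and units here are illustrative)."""
    return k_cal * i_analyte / i_standard

# e.g., analyte line vs internal-standard line intensities (hypothetical)
print(f"{normalized_concentration(1.84e5, 9.2e4, 0.021):.3f} wt%")  # 0.042 wt%
```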
Code of Federal Regulations, 2010 CFR
2010-07-01
... nitrogen oxides (“NOX”) in amounts that will contribute significantly to nonattainment in one or more other... provisions prohibiting sources and other activities from emitting NOX in amounts that will contribute... paragraph (c) of this section, the SIP revision required under paragraph (a) of this section will contain...
Code of Federal Regulations, 2010 CFR
2010-10-01
... functions, including Federal pay costs, Federal employee retirement benefits, automated data processing... 42 Public Health 1 2010-10-01 2010-10-01 false May the Secretary reduce the amount of funds required under Title V to pay for Federal functions, including Federal pay costs, Federal employee...